BerecoLabs' forecasting approach uses Artificial Intelligence (AI) algorithms to sift through massive amounts of data and identify the records that are relevant for predicting future weather events. And best of all, our forecast model actively learns from its errors, steadily improving its forecasting ability and becoming more reliable.

In this case we fed our algorithm the historical records of the National Meteorological Service (NMS) for the city of Buenos Aires and asked it to predict monthly maximum, minimum, and average temperatures over a period of 31 months. To evaluate its accuracy, we used two indicators: the Root Mean Squared Error (RMSE), which measures the model's short-term predictive power (RMSE = 0 being the ideal result), and the Index of Agreement (IoA), a dimensionless parameter that reflects the concordance between observed and predicted values and is better suited to detecting long-term relationships (IoA = 1 implies perfect agreement, IoA = 0 no relation at all).
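For readers who want to reproduce the two indicators, here is a minimal sketch of how RMSE and Willmott's Index of Agreement are commonly computed. The temperature values below are made-up illustrative numbers, not the NMS data or our model's output:

```python
import numpy as np

def rmse(obs, pred):
    """Root Mean Squared Error: 0 means a perfect short-term fit."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def index_of_agreement(obs, pred):
    """Willmott's Index of Agreement: 1 = perfect agreement, 0 = none."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mean_obs = obs.mean()
    num = np.sum((obs - pred) ** 2)
    den = np.sum((np.abs(pred - mean_obs) + np.abs(obs - mean_obs)) ** 2)
    return float(1.0 - num / den)

# Hypothetical monthly mean temperatures (°C), for illustration only.
observed  = [24.8, 23.5, 21.9, 17.8, 14.6, 11.7]
predicted = [25.1, 23.0, 22.4, 17.2, 15.0, 11.1]

print("RMSE:", round(rmse(observed, predicted), 2))
print("IoA: ", round(index_of_agreement(observed, predicted), 3))
```

Both functions take the observed series and the predicted series as parallel arrays, so they apply equally to the maximum, minimum, or average temperature forecasts.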

These are the results:

Figure 1. Predictions of monthly temperatures for the city of Buenos Aires.

An abundance of data, even data of relatively low quality, can yield reliable information when combined properly. Diving into millions of records to find the right relationships is no task for a person, but it is for a computer, and computers keep getting better at it. Many Artificial Intelligence tools let us perform these tasks efficiently, quickly, and cheaply; you just have to know how to use them. At BerecoLabs we are developing new ways of turning data into information that improves decisions and helps us understand reality.