06/09/2023 · by Damien Raynaud · 10 min
Weather forecasts are an essential source of information for planning the day-to-day activities of our businesses. But how are they generated? In this article, Damien Raynaud, meteorologist at Frogcast, reveals the secrets of weather forecasting.
Forecasting the evolution of weather conditions over the coming days requires the most accurate possible knowledge of the current state of the atmosphere. For decades, networks of weather stations have been developed to measure atmospheric parameters on the ground in real time. Over time, these stations have been supplemented by other observation networks providing direct measurements at the ocean surface (weather buoys, sensors on boats) and at altitude (radiosondes, sensors on aircraft).
But it is over the past two decades that atmospheric observation has reached a new stage, with the launch of an ever-increasing number of high-performance meteorological satellites. These satellites make it possible to observe the atmosphere over wide areas and to estimate a growing number of meteorological parameters on the ground, at the ocean surface and throughout the depth of the atmosphere. They are complemented on the ground by other measuring instruments (radar and lidar), which also probe the atmosphere and provide valuable information on clouds and precipitation.
All sensors and associated measurements are subject to continuous quality control, and must comply with standards defined by the World Meteorological Organization.
Did you know?
The COVID crisis between 2020 and 2022, by reducing air traffic and therefore the number of weather observations made by aircraft, had an impact on the quality of weather forecasts.
The atmosphere is driven by the equations of physics and thermodynamics, with the Navier-Stokes equation at its heart (we'll spare you the details here 😭).
To simulate its evolution, we use mathematical models called Numerical Weather Prediction (NWP) models. In these models, the atmosphere is divided into cubes, within which the weather parameters are considered homogeneous (a single value for temperature, humidity, etc.). The horizontal and vertical dimensions of these cubes define the model's resolution: the smaller they are, the better the resolution. This division of the atmosphere is called the model grid. Predicting the evolution of the weather with these models involves solving the physical equations at every grid point and at every time step. The finer the grid, the better the model can simulate small-scale phenomena and represent the atmosphere and surface characteristics (e.g. topography) in detail. As we'll see a little later, this is not without consequences for computing resources: the higher the resolution, the greater the computing power required to produce the forecast.
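To get a feel for what resolution means in practice, here is a rough, illustrative sketch of how the number of grid cells grows as the grid gets finer. The resolutions and level count below are arbitrary examples chosen for illustration, not the values of any particular operational model.

```python
# Illustrative only: rough count of grid cells for a global NWP model
# at a given horizontal resolution (the numbers are assumptions, not
# those of a specific operational model).

EARTH_SURFACE_KM2 = 510e6  # approximate surface area of the Earth in km^2

def grid_cells(horizontal_res_km: float, vertical_levels: int) -> float:
    """Approximate number of cells in a global model grid."""
    columns = EARTH_SURFACE_KM2 / horizontal_res_km**2
    return columns * vertical_levels

for res in (25, 10, 5):
    cells = grid_cells(horizontal_res_km=res, vertical_levels=100)
    print(f"{res:>2} km grid, 100 levels: ~{cells:.2e} cells")

# Halving the horizontal grid spacing quadruples the number of columns,
# and the time step usually has to shrink as well, so the computing cost
# grows much faster than the resolution gain.
```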
Every major meteorological center develops its own forecasting models. Some are global, providing forecasts for every point on the globe, while others are regional, calculating only for a specific area. Spatio-temporal resolutions, forecasting timescales and update frequencies also differ from one model to another. Each new simulation, generally carried out every 3 to 6 hours, is called a run.
Did you know?
There are several dozen operational weather models in the world today. However, fewer than ten of them are global.
To carry out its calculations and predict the evolution of the weather, the model needs to be provided with a detailed description of the current state of the atmosphere at each of its grid points. This is where previously collected meteorological observations come into the equation. Using a technique known as data assimilation, an initial state of the atmosphere is generated by merging information from the model's previous run (e.g. the H+6 forecast from the run made 6 hours earlier) and observations made since then. The result is a complete 3D map of the atmosphere, taking advantage of the direct and indirect weather measurements available since the last forecast.
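As a very simplified illustration of that merging step, here is a toy, single-point sketch of data assimilation (a scalar form of optimal interpolation, with made-up error values). Operational centres use far more sophisticated variational and ensemble schemes over millions of grid points, but the principle of weighting the previous forecast against a new observation is the same.

```python
# A toy, single-point illustration of data assimilation: blend the model's
# previous forecast (the "background") with a new observation, weighting
# each by its assumed error variance.

def assimilate(background: float, obs: float,
               background_var: float, obs_var: float) -> float:
    """Return the analysis value at one grid point (scalar optimal interpolation)."""
    gain = background_var / (background_var + obs_var)  # weight given to the observation
    return background + gain * (obs - background)

# Example: the previous run's H+6 forecast says 21.0 degC at a station,
# the station now reports 19.4 degC, and we trust the observation more.
analysis = assimilate(background=21.0, obs=19.4,
                      background_var=1.5, obs_var=0.5)
print(f"analysis temperature: {analysis:.2f} degC")  # lands between the two, closer to the observation
```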
Now the calculations can start! At each grid point and time step, the model has to solve a set of complex equations, which is very costly in terms of computing resources. Calculations generally take a few tens of minutes, thanks to supercomputers able to perform an impressive number of operations per second. For example, the latest supercomputer used by Météo-France, commissioned in 2021, has no fewer than 300,000 cores capable of performing over 20 million billion operations per second. The final output is a 4D forecast (latitude, longitude, altitude and time) describing the evolution of the atmosphere over the coming hours and days.
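As a quick back-of-envelope check of those figures, spreading the quoted total throughput over the quoted number of cores gives the average rate per core:

```python
# Back-of-envelope arithmetic on the figures quoted above: 20 million
# billion operations per second spread over 300,000 cores.

total_ops_per_s = 20e15   # 20 million billion operations per second
cores = 300_000

per_core = total_ops_per_s / cores
print(f"~{per_core:.1e} operations per second per core")  # roughly 6.7e10, i.e. ~67 billion
```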
Did you know?
Seen from the outside, supercomputers look like rows of cabinets containing compute processors, housed in a room measuring several dozen square meters. The heat generated by all these operations is such that water-cooling circuits are required.
Over the last three decades, available computing capacity has been multiplied by more than ten million. In addition, observation networks are becoming ever denser, feeding the models with high-quality data. Our understanding of the physical processes at work in the atmosphere has also advanced considerably thanks to scientific research. So why is the reliability of forecasts still sometimes questioned?
In the early 1960s, Edward Lorenz developed an extremely simplified meteorological model to study the atmosphere's behavior. His experiment consisted of feeding this model with two extremely close initial states, running the calculations and comparing the meteorological trajectories derived from these starting points. The results showed that, after just a few iterations of the model, the two trajectories began to diverge significantly, before finally depicting two completely different states of the atmosphere. Lorenz had just demonstrated the chaotic nature of the atmosphere, and he presented his results at a conference in 1972 under the now famous title: "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?". The famous butterfly effect was born.
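Lorenz's experiment can be reproduced in a few lines using his classic three-variable system (the "Lorenz 63" equations), used here as a stand-in for the simplified model described above: two runs started from almost identical states quickly drift apart.

```python
# A sketch of Lorenz's experiment with the classic Lorenz 63 system,
# integrated from two almost identical initial states using a simple
# Euler scheme (good enough for illustration).

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz 63 system by one time step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

a = (1.0, 1.0, 1.0)          # first initial state
b = (1.0, 1.0, 1.0 + 1e-8)   # second state, perturbed by one part in 100 million

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        gap = abs(a[0] - b[0])
        print(f"step {step}: difference in x = {gap:.2e}")

# The tiny initial difference grows by many orders of magnitude: after a
# while the two "forecasts" describe completely different states.
```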
This study illustrates the extremely complex behavior of the atmosphere. Even the smallest inaccuracies in the initial state provided to the model, or in the modeling of very small-scale processes such as turbulence, inevitably lead to errors that spread through the forecast and grow as the simulation runs. Fortunately, meteorologists have more than one string to their bow and have learned to account for these sources of uncertainty. For example, they use what is known as ensemble forecasting, which takes Lorenz's experiment to a larger scale: since it is impossible to describe the state of the atmosphere perfectly at every point, the meteorological model is run not with a single initial state but with a set of initial states, reflecting the uncertainty at points where no measurements are available. This produces a large number of meteorological scenarios, which are then used to determine the most likely forecast for the coming days and how much confidence to place in it. These techniques, which are extremely costly in computing time, are made possible by supercomputers.
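Here is a minimal sketch of the ensemble idea, again with the toy Lorenz system standing in for a real weather model: many runs are launched from slightly perturbed initial states, and the spread of the results gives a measure of how much the scenarios agree.

```python
# A minimal sketch of ensemble forecasting: launch many runs from slightly
# perturbed initial states and look at the spread of outcomes. The "model"
# is the toy Lorenz 63 system, standing in for a real NWP model.

import random
import statistics

def run_model(x, y, z, steps=2000, dt=0.01):
    """Integrate the toy Lorenz 63 system and return the final x value."""
    for _ in range(steps):
        dx = 10.0 * (y - x)
        dy = x * (28.0 - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return x

random.seed(0)
members = []
for _ in range(20):  # 20 ensemble members
    # Perturb the analysis slightly to reflect uncertainty in the initial state.
    x0 = 1.0 + random.gauss(0, 0.01)
    y0 = 1.0 + random.gauss(0, 0.01)
    z0 = 1.0 + random.gauss(0, 0.01)
    members.append(run_model(x0, y0, z0))

print(f"ensemble mean  : {statistics.mean(members):.2f}")
print(f"ensemble spread: {statistics.stdev(members):.2f}")
# A tight cluster of members means the forecast can be trusted further
# ahead; a large spread signals a situation with low predictability.
```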
Did you know?
With the continuous improvement of forecasting methods, a 5-day forecast today is as accurate as a 24-hour forecast was in the 1980s. Today, weather forecasts are considered to provide relevant information up to 8 to 10 days ahead.
Through a simple and efficient API, Frogcast promises to make it easy for you to integrate reliable weather forecasts! Join Frogcast now by connecting your application directly to the API!
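As a purely hypothetical illustration of how little code such an integration can take: the endpoint, parameters and response format below are assumptions made for this sketch, not the documented Frogcast interface, so refer to the official API documentation for the real details.

```python
# Hypothetical sketch of a forecast request. The URL, parameters and
# response fields are placeholders, not the actual Frogcast API.

import requests

API_URL = "https://api.frogcast.example/forecast"  # placeholder URL
params = {
    "lat": 48.85,            # Paris, as an example
    "lon": 2.35,
    "horizon_hours": 48,     # assumed parameter name
    "apikey": "YOUR_API_KEY",
}

response = requests.get(API_URL, params=params, timeout=10)
response.raise_for_status()
forecast = response.json()
print(forecast)
```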