A neural network improves forecasts for severe storm hazards

10-06-2021

The National Center for Atmospheric Research (NCAR) is using artificial intelligence to run experimental forecasts for hail, tornadoes, and intense winds—storm hazards that can cause serious damage but that are notoriously difficult for weather models to accurately predict.

The Neural Network Convective Hazard Forecasts, which provide the probability of hail, tornadoes, or winds for a particular location at a particular time, are updated twice daily and freely available online.

NCAR began running these research forecasts in the spring of 2020 as part of the Hazardous Weather Testbed Spring Experiment at the National Oceanic and Atmospheric Administration (NOAA). When the spring severe weather season wrapped up at the end of June, the research team analyzed the forecasts, comparing them to more traditional techniques for forecasting storm hazards. The scientists found that the neural network significantly improved the accuracy of forecasts based on traditional model output, especially in situations where those traditional forecasts tended to perform the worst, including in the eastern and western parts of the United States and for nighttime storms.

“We were able to show a significant improvement in most situations,” said NCAR scientist Ryan Sobash, who is leading the project. “Not only did the neural network more skillfully predict when and where severe storm hazards were likely, it was also able to better predict whether the hazardous event would be dominated by hail or by winds.”

The system is up and running again this spring with some tweaks. The project is being funded by NOAA and the National Science Foundation, which is NCAR’s sponsor. The forecasts are run on the Cheyenne supercomputer at the NCAR-Wyoming Supercomputing Center.

From one predictor to 40

In order for a weather model to spin up a thunderstorm, it has to be run at a high enough resolution to capture the fine-scale atmospheric phenomena—including updrafts and downdrafts—that drive the storm’s creation. This typically requires spacing of 4 kilometers (2.5 miles) or less between grid points inside the model. At that resolution, the model can begin to simulate the storm itself, but it is unable to produce many of the hazards associated with the storm, which happen at even smaller scales, including hailstones and tornadoes. Because of that, forecasters have relied on particular outputs in the model data, or proxies, to determine the likelihood that a severe storm will produce such hazards.

One of the most frequently used proxies is updraft helicity, a measure of a storm’s rotation. Storms with stronger rotation tend to be more severe and more capable of producing hail and tornadoes. This proxy has worked relatively well for supercells and other rotating storms, but it doesn’t capture some of the severe weather that can be produced in straight-line storms, such as derechos.
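The article does not give the exact rules forecasters use, but the proxy approach it describes can be illustrated with a toy sketch: threshold a model's updraft-helicity field and flag grid points exceeding it as likely severe. The field values and the threshold below are made up for illustration, not operational settings:

```python
import numpy as np

# Hypothetical hourly-maximum updraft helicity (m^2/s^2) on a tiny model grid.
uh = np.array([[10.0, 40.0, 90.0],
               [20.0, 80.0, 120.0],
               [5.0, 15.0, 30.0]])

THRESHOLD = 75.0  # illustrative exceedance threshold, not an operational value

severe_mask = uh >= THRESHOLD   # grid points flagged as likely severe
coverage = severe_mask.mean()   # fraction of the domain flagged
```

A single-threshold rule like this is what makes the proxy brittle: a derecho with damaging straight-line winds but weak rotation never exceeds the threshold, so it is never flagged.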

By contrast, the neural network used in NCAR’s new forecasts can ingest about 40 different factors, including updraft helicity but also the storm’s location, time, dew point, wind speeds, surface pressure, and much more. The neural network draws on patterns in how those predictors relate to one another, learned from its training data set—in this case nearly 500 past forecasts from NOAA’s High-Resolution Rapid Refresh model along with the accompanying reports of actual storms—to calculate the probability that a storm will produce hail, tornadoes, or strong winds. The forecasts produced by the neural network show the likelihood of a storm hazard forming within either 40 kilometers (25 miles) or 120 kilometers (75 miles) of individual grid points in the model.
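The article does not describe the network’s architecture. As a minimal sketch of the general idea—a vector of storm predictors mapped to independent probabilities for each hazard—here is a toy feed-forward network with randomly initialized weights; the layer sizes, names, and weights are all assumptions, not NCAR’s actual model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class HazardNet:
    """Toy feed-forward network: ~40 storm predictors -> 3 hazard probabilities."""

    def __init__(self, n_inputs=40, n_hidden=16, n_hazards=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_hazards))
        self.b2 = np.zeros(n_hazards)

    def forward(self, x):
        h = np.tanh(x @ self.W1 + self.b1)
        # Sigmoid per output: each hazard (hail, tornado, wind) gets an
        # independent probability rather than a single mutually exclusive class.
        return sigmoid(h @ self.W2 + self.b2)

net = HazardNet()
# Placeholder predictor vector (updraft helicity, dew point, wind speeds, ...).
x = np.random.default_rng(1).normal(size=40)
probs = net.forward(x)  # e.g. [P(hail), P(tornado), P(wind)]
```

In practice the weights would be fit to the training forecasts and storm reports; the point of the sketch is only that many predictors feed one model that outputs a probability per hazard.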

By taking into account factors beyond storm rotation, the neural network is better able to predict hazards associated with straight-line wind storms than forecasts based solely on updraft helicity, and it also improves on forecasts for regions where supercells are not as likely to form, especially areas outside the Midwest.

“The success of our neural network forecast suggests that machine learning could be a useful tool for operational forecasts,” Sobash said. “The forecasts are being run again this spring as part of the Hazardous Weather Testbed and we look forward to getting more feedback from the operational forecasters about how they might incorporate this kind of information into their existing forecasting process.”


Source: https://techxplore.com/news/2021-05-neural-network-severe-storm-hazards.html
