DLWP-eval-extremes

This repository documents the code for "Validating Deep-Learning Weather Forecast Models on Recent High-Impact Extreme Events" by Olivier C. Pasche, Jonathan Wider, Zhongwei Zhang, Jakob Zscheischler, and Sebastian Engelke (DOI: 10.1175/AIES-D-24-0033.1). We focus on the analyses conducted for our case studies. For details on how to run the AI models, we refer to the GitHub pages of the respective modeling groups; see the "Prediction models" section.

Setup

We provide two environment files: eval_env.yml is the file we used to create the environment, and eval_log.yml was generated with conda env export to record the exact package versions we used.

Either file should make it possible to recreate our environment via conda env create -f <environment-file>.yml.

Data

We release preprocessed ground truth and prediction weather data for the three case studies considered in the paper as a Zenodo dataset. The dataset draws on several sources:

  • For the ground truth data sets, we use data released through WeatherBench 2 [paper] [dataset documentation] where possible. In particular, we use their ERA5 climatology in the Pacific Northwest heatwave case study.
  • When the ground truth data is not available through WeatherBench 2, we download it from ECMWF.
  • We ran all ML weather forecasting models ourselves, using ERA5 to initialize them.
  • HRES forecasts were retrieved from the ECMWF operational archive and the TIGGE data retrieval portal.

For details, including the license statements of the ECMWF data sets used, see the documentation of the Zenodo dataset.

For reproducibility, download the data from Zenodo and place it in a folder named data at the root of this repository. The analysis code expects all data files to be available in ./data/.
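As a minimal sketch of how the data is then accessed (assuming the archive contains NetCDF files readable with xarray; the file name below is hypothetical, see the Zenodo documentation for the actual names):

```python
import xarray as xr

# Hypothetical file name for illustration only; substitute the actual
# file names listed in the Zenodo dataset documentation.
ds = xr.open_dataset("data/pacific_northwest_heatwave_era5.nc")
print(ds)  # inspect variables, coordinates, and attributes
```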

Prediction models

We compare AI weather prediction models against ECMWF's operational HRES forecast as a physics-based baseline.

For GraphCast, we use the version 'GraphCast' released on GitHub (not to be confused with 'GraphCast_small' or 'GraphCast_operational'). For PanguWeather, we used a combination of the authors' 6h and 24h models, following the "hierarchical temporal aggregation strategy" they developed; see the sketch below.
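As a rough illustration of that strategy (a minimal sketch, assuming only the 6h and 24h models are combined; the function name plan_steps is ours), the lead time is covered greedily, preferring the largest available step so as to minimize the number of autoregressive model calls:

```python
def plan_steps(lead_time_h, step_sizes=(24, 6)):
    """Greedy hierarchical temporal aggregation: cover the lead time
    with the fewest model calls, preferring the largest step size."""
    steps = []
    remaining = lead_time_h
    for step in step_sizes:
        n, remaining = divmod(remaining, step)
        steps.extend([step] * n)
    if remaining:
        raise ValueError(f"{lead_time_h} h cannot be built from steps {step_sizes}")
    return steps

# e.g. a 30 h forecast runs the 24 h model once, then the 6 h model once:
assert plan_steps(30) == [24, 6]
# a 42 h forecast: one 24 h step followed by three 6 h steps.
assert plan_steps(42) == [24, 6, 6, 6]
```

Fewer autoregressive steps means less accumulation of forecast error, which is the motivation the PanguWeather authors give for this strategy.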

Shapefiles

For the case study on the 2023 South Asia humid heatwave, we used the shapefiles in the Shapefiles/ subdirectory to mask a region comparable to that of the study "Extreme humid heat in South Asia in April 2023, largely driven by climate change, detrimental to vulnerable and disadvantaged communities" (2023) [paper]. We use country boundaries from the "World Administrative Boundaries - Countries and Territories" data set (link, Open Government License 3.0) by the World Food Programme. For the India-Bangladesh region, we additionally use the 1976-2000 map of "World Maps of the Köppen-Geiger Climate Classification" (link, Creative Commons Attribution 4.0).
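A minimal sketch of how such a shapefile mask can be applied to a gridded data set (file, variable, and column names below are hypothetical, and we use geopandas and regionmask here, which may differ from the exact tooling in the analysis scripts):

```python
import geopandas as gpd
import regionmask
import xarray as xr

# Hypothetical paths and names for illustration.
gdf = gpd.read_file("Shapefiles/world_administrative_boundaries.shp")
region = gdf[gdf["name"].isin(["India", "Bangladesh"])]  # column name is an assumption

ds = xr.open_dataset("data/south_asia_heatwave_era5.nc")

# Rasterize the polygons onto the data grid: grid cells outside
# all polygons get NaN, cells inside get the polygon's index.
mask = regionmask.mask_geopandas(region, ds["longitude"], ds["latitude"])
ds_masked = ds.where(mask.notnull())
```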
