2022
Abstract Gridded meteorological estimates are essential for many applications. Most existing meteorological datasets are deterministic and have limitations in representing the inherent uncertainties from both the data and methodology used to create gridded products. We develop the Ensemble Meteorological Dataset for Planet Earth (EM-Earth) for precipitation, mean daily temperature, daily temperature range, and dewpoint temperature at 0.1° spatial resolution over global land areas from 1950 to 2019. EM-Earth provides hourly/daily deterministic estimates, and daily probabilistic estimates (25 ensemble members), to meet the diverse requirements of hydrometeorological applications. To produce EM-Earth, we first developed a station-based Serially Complete Earth (SC-Earth) dataset, which removes the temporal discontinuities in raw station observations. Then, we optimally merged SC-Earth station data and ERA5 estimates to generate EM-Earth deterministic estimates and their uncertainties. The EM-Earth ensemble members are produced by sampling from parametric probability distributions using spatiotemporally correlated random fields. The EM-Earth dataset is evaluated through leave-one-out validation, comparison against independent evaluation stations, and comparison with many widely used datasets. The results show that EM-Earth performs better in Europe, North America, and Oceania than in Africa, Asia, and South America, mainly due to differences in station availability and climate conditions. Probabilistic spatial meteorological datasets are particularly valuable in regions with large meteorological uncertainties, where almost all existing deterministic datasets face great challenges in obtaining accurate estimates.
The Köppen-Geiger (KG) climate classification has been widely used to determine the climate at global and regional scales using precipitation and temperature data. KG maps are typically developed using a single product; however, uncertainties in KG climate types resulting from different precipitation and temperature datasets have not been explored in detail. Here, we assess seven global datasets to show uncertainties in KG classification from 1980 to 2017. Using a pairwise comparison at global and zonal scales, we quantify the similarity among the seven KG maps. Gauge- and reanalysis-based KG maps differ notably. Spatially, the highest and lowest similarity is observed for the North and South Temperate zones, respectively. Notably, 17% of grids among the seven maps show variations even in the major KG climate types, while 35% of grids are described by more than one KG climate subtype. Strong uncertainty is observed in south Asia, central and south Africa, western America, and northeastern Australia. We created two KG master maps (0.5° resolution) by merging the climate maps directly and by combining the precipitation and temperature data from the seven datasets. These master maps are more robust than the individual ones, showing coherent spatial patterns. This study reveals the large uncertainty in climate classification and offers two robust KG maps that may help to better evaluate historical climate and quantify future climate shifts.
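The pairwise map comparison described above amounts to counting grid cells assigned the same class in two maps; a minimal sketch with toy integer-coded maps (not the study's data or code):

```python
import numpy as np

def kg_similarity(map_a, map_b, land_mask=None):
    """Fraction of (land) grid cells assigned the same climate class in two maps."""
    a, b = np.asarray(map_a), np.asarray(map_b)
    if land_mask is None:
        land_mask = np.ones(a.shape, dtype=bool)
    same = (a == b) & land_mask
    return same.sum() / land_mask.sum()

# Two toy 3x3 "climate maps" with integer-coded classes.
m1 = np.array([[1, 1, 2], [2, 3, 3], [4, 4, 5]])
m2 = np.array([[1, 1, 2], [2, 3, 1], [4, 2, 5]])
sim = kg_similarity(m1, m2)  # 7 of 9 cells agree
```

The same function applied over all dataset pairs, with a land mask, gives the global and zonal similarity fractions the abstract refers to.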
DOI
bib
abs
Community Workflows to Advance Reproducibility in Hydrologic Modeling: Separating Model‐Agnostic and Model‐Specific Configuration Steps in Applications of Large‐Domain Hydrologic Models
Wouter Knoben,
Martyn Clark,
Jerad Bales,
Andrew Bennett,
Shervan Gharari,
Christopher B. Marsh,
Bart Nijssen,
Alain Pietroniro,
Raymond J. Spiteri,
Guoqiang Tang,
David G. Tarboton,
Andrew W. Wood
Water Resources Research, Volume 58, Issue 11
Despite the proliferation of computer-based research on hydrology and water resources, such research is typically poorly reproducible. Published studies have low reproducibility due to incomplete availability of data and computer code, and a lack of documentation of workflow processes. This leads to a lack of transparency and efficiency because existing code can neither be quality controlled nor reused. Given the commonalities between existing process-based hydrologic models in terms of their required input data and preprocessing steps, open sharing of code can lead to large efficiency gains for the modeling community. Here, we present a model configuration workflow that provides full reproducibility of the resulting model instantiations in a way that separates the model-agnostic preprocessing of specific data sets from the model-specific requirements that models impose on their input files. We use this workflow to create large-domain (global and continental) and local configurations of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model connected to the mizuRoute routing model. These examples show how a relatively complex model setup over a large domain can be organized in a reproducible and structured way that has the potential to accelerate advances in hydrologic modeling for the community as a whole. We provide a tentative blueprint of how community modeling initiatives can be built on top of workflows such as this. We term our workflow the “Community Workflows to Advance Reproducibility in Hydrologic Modeling” (CWARHM; pronounced “swarm”).
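The model-agnostic vs. model-specific split the paper advocates can be illustrated with a deliberately minimal sketch (variable names and units here are hypothetical illustrations, not CWARHM's actual schema):

```python
def regrid_and_convert(raw):
    """Model-agnostic step: unit conversion and regridding that any
    hydrologic model downstream can reuse unchanged."""
    return {"precip_mm_h": raw["precip_m_s"] * 3.6e6,  # m/s -> mm/h
            "temp_K": raw["temp_K"]}

def to_model_format(forcing):
    """Model-specific step: rename/convert to one model's expected input
    variables (names here are illustrative, not a real model's schema)."""
    return {"pptrate_mm_s": forcing["precip_mm_h"] / 3600.0,
            "airtemp_K": forcing["temp_K"]}

raw = {"precip_m_s": 1e-6, "temp_K": 280.0}
model_input = to_model_format(regrid_and_convert(raw))
```

Keeping the first function free of any model-specific assumptions is what lets the same preprocessing feed multiple models, which is the efficiency gain the abstract describes.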
2021
Abstract Stations are an important source of meteorological data, but often suffer from missing values and short observation periods. Gap filling is widely used to generate serially complete datasets (SCDs), which are subsequently used to produce gridded meteorological estimates. However, the value of SCDs in spatial interpolation is scarcely studied. Based on our recent efforts to develop an SCD over North America (SCDNA), we explore the extent to which gap filling improves gridded precipitation and temperature estimates. We address two specific questions: (1) Can SCDNA improve the statistical accuracy of gridded estimates in North America? (2) Can SCDNA improve estimates of trends in gridded data? In addressing these questions, we also evaluate the extent to which results depend on the spatial density of the station network and the spatial interpolation methods used. Results show that the improvement in statistical interpolation due to gap filling is more obvious for precipitation, followed by minimum temperature and maximum temperature. The improvement is larger when the station network is sparse and when simpler interpolation methods are used. SCDs can also notably reduce the uncertainties in spatial interpolation. Our evaluation across North America from 1979 to 2018 demonstrates that SCDs improve the accuracy of interpolated estimates for most stations and days. SCDNA-based interpolation also obtains better trend estimation than observation-based interpolation. This occurs because stations used for interpolation could change during a specific period, causing changepoints in interpolated temperature estimates and affecting the long-term trends of observation-based interpolation, which can be avoided using SCDNA. Overall, SCDs improve the performance of gridded precipitation and temperature estimates.
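The "simpler interpolation methods" mentioned above include inverse-distance weighting, which can be sketched in a few lines (a toy illustration, not the study's code):

```python
import numpy as np

def idw(xy_stations, values, xy_target, power=2.0):
    """Inverse-distance-weighted estimate at one target point."""
    d = np.hypot(*(xy_stations - xy_target).T)
    if np.any(d == 0):                      # target coincides with a station
        return values[d == 0][0]
    w = d ** -power
    return float(np.sum(w * values) / np.sum(w))

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
obs = np.array([10.0, 20.0, 30.0])
est = idw(stations, obs, np.array([0.0, 0.0]))  # coincides with first station
```

Because IDW leans entirely on whichever stations happen to report on a given day, gap-filled (serially complete) inputs remove the day-to-day changes in the effective station network that otherwise introduce artifacts.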
DOI
bib
abs
The Abuse of Popular Performance Metrics in Hydrologic Modeling
Martyn Clark,
Richard M. Vogel,
Jonathan Lamontagne,
Naoki Mizukami,
Wouter Knoben,
Guoqiang Tang,
Shervan Gharari,
Jim Freer,
Paul H. Whitfield,
Kevin Shook,
Simon Michael Papalexiou
Water Resources Research, Volume 57, Issue 9
The goal of this commentary is to critically evaluate the use of popular performance metrics in hydrologic modeling. We focus on the Nash-Sutcliffe Efficiency (NSE) and the Kling-Gupta Efficiency (KGE) metrics, which are both widely used in hydrologic research and practice around the world. Our specific objectives are: (a) to provide tools that quantify the sampling uncertainty in popular performance metrics; (b) to quantify sampling uncertainty in popular performance metrics across a large sample of catchments; and (c) to prescribe the further research that is needed to improve the estimation, interpretation, and use of popular performance metrics in hydrologic modeling. Our large-sample analysis demonstrates that there is substantial sampling uncertainty in the NSE and KGE estimators. This occurs because the probability distribution of squared errors between model simulations and observations has heavy tails, meaning that performance metrics can be heavily influenced by just a few data points. Our results highlight obvious (yet ignored) abuses of performance metrics that contaminate the conclusions of many hydrologic modeling studies: It is essential to quantify the sampling uncertainty in performance metrics when justifying the use of a model for a specific purpose and when comparing the performance of competing models.
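The sampling-uncertainty argument can be illustrated with a small bootstrap sketch on synthetic data (a toy illustration, not the authors' tooling):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def bootstrap_nse(sim, obs, n_boot=2000, seed=0):
    """Resample paired (sim, obs) days with replacement to approximate the
    sampling distribution of NSE; returns a 90% sampling interval."""
    rng = np.random.default_rng(seed)
    n = len(obs)
    samples = [nse(sim[idx], obs[idx])
               for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(samples, [5, 95])

rng = np.random.default_rng(1)
obs = rng.gamma(0.5, 5.0, 365)          # heavy-tailed, streamflow-like series
sim = obs + rng.normal(0.0, 1.0, 365)   # a "model" with additive error
lo, hi = bootstrap_nse(sim, obs)        # interval width reflects the heavy tail
```

With heavy-tailed squared errors, a handful of resampled extreme days moves the metric substantially, which is exactly why a point value of NSE or KGE without an interval can mislead model comparisons.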
Abstract Meteorological data from ground stations suffer from temporal discontinuities caused by missing values and short measurement periods. Gap-filling and reconstruction techniques have proven to be effective in producing serially complete station datasets (SCDs) that are used for a myriad of meteorological applications (e.g., developing gridded meteorological datasets and validating models). To our knowledge, all SCDs are developed at regional scales. In this study, we developed the serially complete Earth (SC-Earth) dataset, which provides daily precipitation, mean temperature, temperature range, dewpoint temperature, and wind speed data from 1950 to 2019. SC-Earth utilizes raw station data from the Global Historical Climatology Network–Daily (GHCN-D) and the Global Surface Summary of the Day (GSOD). A unified station repository is generated based on GHCN-D and GSOD after station merging and strict quality control. ERA5 is optimally matched with station data considering the time shift issue and then used to assist the global gap filling. SC-Earth is generated by merging estimates from 15 strategies based on quantile mapping, spatial interpolation, machine learning, and multistrategy merging. The final estimates are bias corrected using a combination of quantile mapping and quantile delta mapping. Comprehensive validation demonstrates that SC-Earth has high accuracy around the globe, with degraded quality in the tropics and oceanic islands due to sparse station networks, strong spatial precipitation gradients, and degraded ERA5 estimates. Meanwhile, SC-Earth inherits potential limitations such as inhomogeneity and precipitation undercatch from raw station data, which may affect its application in some cases. Overall, the high-quality and high-density SC-Earth dataset will benefit research in fields of hydrology, ecology, meteorology, and climate. The dataset is available at https://zenodo.org/record/4762586 .
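One building block of the gap filling described above, quantile mapping, transfers a donor (e.g., ERA5) value to the station's climatology by matching non-exceedance probabilities; a simplified empirical-CDF sketch (the dataset itself merges 15 such strategies):

```python
import numpy as np

def quantile_map(value, donor, target):
    """Map one donor value to the target station's climatology by matching
    empirical non-exceedance probabilities over an overlap period."""
    donor, target = np.sort(donor), np.sort(target)
    p = np.searchsorted(donor, value, side="right") / len(donor)
    p = min(max(p, 1.0 / len(target)), 1.0)   # keep within the empirical range
    return float(np.quantile(target, p))

reanalysis = np.array([0., 1., 2., 4., 8., 16.])   # donor values (overlap period)
station    = np.array([0., 2., 4., 8., 16., 32.])  # station values, wetter climate
filled = quantile_map(4.0, reanalysis, station)
```

A donor value at the 67th percentile of its own climatology is replaced by the station's 67th-percentile value, so the filled record inherits the station's distribution rather than the donor's.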
DOI
bib
abs
EMDNA: an Ensemble Meteorological Dataset for North America
Guoqiang Tang,
Martyn Clark,
Simon Michael Papalexiou,
Andrew J. Newman,
Andrew W. Wood,
Dominique Brunet,
Paul H. Whitfield
Earth System Science Data, Volume 13, Issue 7
Abstract. Probabilistic methods are useful to estimate the uncertainty in spatial meteorological fields (e.g., the uncertainty in spatial patterns of precipitation and temperature across large domains). In ensemble probabilistic methods, “equally plausible” ensemble members are used to approximate the probability distribution, hence the uncertainty, of a spatially distributed meteorological variable conditioned on the available information. The ensemble members can be used to evaluate the impact of uncertainties in spatial meteorological fields for a myriad of applications. This study develops the Ensemble Meteorological Dataset for North America (EMDNA). EMDNA has 100 ensemble members with daily precipitation amount, mean daily temperature, and daily temperature range at 0.1° spatial resolution (approx. 10 km grids) from 1979 to 2018, derived from a fusion of station observations and reanalysis model outputs. The station data used in EMDNA are from a serially complete dataset for North America (SCDNA) that fills gaps in precipitation and temperature measurements using multiple strategies. Outputs from three reanalysis products are regridded, corrected, and merged using Bayesian model averaging. Optimal interpolation (OI) is used to merge station- and reanalysis-based estimates. EMDNA estimates are generated using spatiotemporally correlated random fields to sample from the OI estimates. Evaluation results show that (1) the merged reanalysis estimates outperform raw reanalysis estimates, particularly in high latitudes and mountainous regions; (2) the OI estimates are more accurate than the reanalysis and station-based regression estimates, with the most notable improvements for precipitation evident in sparsely gauged regions; and (3) EMDNA estimates exhibit good performance according to the diagrams and metrics used for probabilistic evaluation.
We discuss the limitations of the current framework and highlight that further research is needed to improve ensemble meteorological datasets. Overall, EMDNA is expected to be useful for hydrological and meteorological applications in North America. The entire dataset and a teaser dataset (a small subset of EMDNA for easy download and preview) are available at https://doi.org/10.20383/101.0275 (Tang et al., 2020a).
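The optimal interpolation step combines a gridded background (here, the merged reanalysis) with station observations through the standard OI gain; a toy sketch with invented covariances (not the paper's implementation):

```python
import numpy as np

def oi_update(background, obs, H, B, R):
    """One optimal-interpolation step:
    analysis = background + K (obs - H background),
    with gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return background + K @ (obs - H @ background)

# Toy domain: 3 grid cells, stations observing cells 0 and 2.
background = np.array([1.0, 1.0, 1.0])         # reanalysis first guess
obs = np.array([2.0, 0.0])                     # station observations
H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
B = np.array([[1.0, 0.5, 0.2],                 # background-error covariance
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
R = 0.1 * np.eye(2)                            # observation-error covariance
analysis = oi_update(background, obs, H, B, R)
```

The analysis is pulled toward the observations at gauged cells and adjusted at ungauged cells in proportion to the spatial covariance, which is how sparsely gauged regions still benefit from nearby stations.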
2020
Abstract. Probabilistic methods are very useful to estimate the spatial variability in meteorological conditions (e.g., spatial patterns of precipitation and temperature across large domains). In ensemble probabilistic methods, equally plausible ensemble members are used to approximate the probability distribution, hence uncertainty, of a spatially distributed meteorological variable conditioned on the available information. The ensemble can be used to evaluate the impact of the uncertainties in a myriad of applications. This study develops the Ensemble Meteorological Dataset for North America (EMDNA). EMDNA has 100 members with daily precipitation amount, mean daily temperature, and daily temperature range at 0.1° spatial resolution from 1979 to 2018, derived from a fusion of station observations and reanalysis model outputs. The station data used in EMDNA are from a serially complete dataset for North America (SCDNA) that fills gaps in precipitation and temperature measurements using multiple strategies. Outputs from three reanalysis products are regridded, corrected, and merged using Bayesian model averaging. Optimal interpolation (OI) is used to merge station- and reanalysis-based estimates. EMDNA estimates are generated based on OI estimates and spatiotemporally correlated random fields. Evaluation results show that (1) the merged reanalysis estimates outperform raw reanalysis estimates, particularly in high latitudes and mountainous regions; (2) the OI estimates are more accurate than the reanalysis and station-based regression estimates, with the most notable improvement for precipitation occurring in sparsely gauged regions; and (3) EMDNA estimates exhibit good performance according to the diagrams and metrics used for probabilistic evaluation. We also discuss the limitations of the current framework and highlight that persistent efforts are needed to further develop probabilistic methods and ensemble datasets.
Overall, EMDNA is expected to be useful for hydrological and meteorological applications in North America. The whole dataset and a teaser dataset (a small subset of EMDNA for easy download and preview) are available at https://doi.org/10.20383/101.0275 (Tang et al., 2020a).
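Ensemble members of this kind are typically drawn by transforming spatiotemporally correlated standard-normal fields; a minimal sketch with an exponential spatial covariance and AR(1) temporal correlation (all parameters illustrative, not the dataset's actual settings):

```python
import numpy as np

def correlated_fields(dist, clen, n_times, rho, n_members, seed=0):
    """Sample Gaussian fields that are spatially correlated (exponential
    covariance with length `clen` over distance matrix `dist`) and
    temporally correlated (AR(1) with lag-1 coefficient `rho`)."""
    rng = np.random.default_rng(seed)
    C = np.exp(-dist / clen)                          # spatial covariance
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(dist)))
    fields = np.empty((n_members, n_times, len(dist)))
    for m in range(n_members):
        f = L @ rng.standard_normal(len(dist))        # initial correlated field
        for t in range(n_times):
            fields[m, t] = f
            f = rho * f + np.sqrt(1.0 - rho**2) * (L @ rng.standard_normal(len(dist)))
    return fields

# Toy 1-D domain of 5 grid points.
x = np.arange(5.0)
dist = np.abs(x[:, None] - x[None, :])
flds = correlated_fields(dist, clen=2.0, n_times=10, rho=0.8, n_members=4)
```

Each standard-normal field is then mapped through the local probability distributions (e.g., the OI estimate and its uncertainty) to yield physically plausible precipitation or temperature members.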
DOI
bib
abs
Climate Changes and Their Teleconnections With ENSO Over the Last 55 Years, 1961–2015, in Floods‐Dominated Basin, Jiangxi Province, China
Hongyi Li,
Xiaoyong Zhong,
Ziqiang Ma,
Guoqiang Tang,
Leiding Ding,
Xinxin Sui,
Jintao Xu,
Yu He
Earth and Space Science, Volume 7, Issue 3
Understanding the relative effects of climate change and the El Niño–Southern Oscillation (ENSO) is essential not only for understanding the hydrological mechanisms of Jiangxi province in China but also for local water resources management and flood control. This study quantitatively examined climate change in Jiangxi using up-to-date “ground truth” precipitation and temperature data from the Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE, 1961–2015, 0.25°) dataset; analyzed the connections between ENSO and climate factors (precipitation and temperature); and discussed the relationships between ENSO and climate change. The main findings of this study were that (1) during 1961–2015, annual precipitation and temperature generally increased at rates of 2.68 mm/year and 0.16 °C per decade, respectively; (2) the precipitation trends show significant spatial differences: for example, large increasing rates occurred in northern Jiangxi province in summer, while large decreasing rates occurred in most regions of Jiangxi province in spring; (3) an abrupt temperature change was detected around 1984, with generally decreasing trends during 1961–1984 and increasing trends during 1984–2015; (4) ENSO had significant impacts on precipitation changes over Jiangxi province: for example, El Niño events beginning in April and May were likely to increase precipitation in the following summer, and El Niño events beginning in October were likely to increase precipitation in the following spring and summer; and (5) El Niño events starting in the second half of the year were likely to raise temperatures in the winter and the following spring. These findings provide valuable information for better understanding climate change over Jiangxi province.
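Trend rates like the 2.68 mm/year reported above come from simple linear trend fits to annual series, which can be sketched as follows (synthetic data with the trend imposed, not the study's data):

```python
import numpy as np

def linear_trend(years, values):
    """Least-squares linear trend, in units of `values` per year."""
    slope, _ = np.polyfit(years, values, 1)
    return slope

years = np.arange(1961, 2016)
# Synthetic annual precipitation with an imposed 2.68 mm/yr trend plus noise.
rng = np.random.default_rng(0)
precip = 1600.0 + 2.68 * (years - 1961) + rng.normal(0.0, 10.0, len(years))
trend = linear_trend(years, precip)
```

In practice such trends are usually paired with a significance test (e.g., Mann-Kendall) before being interpreted, and abrupt changes like the 1984 shift are detected with changepoint tests rather than a single linear fit.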
DOI
bib
abs
Cross-Examination of Similarity, Difference and Deficiency of Gauge, Radar and Satellite Precipitation Measuring Uncertainties for Extreme Events Using Conventional Metrics and Multiplicative Triple Collocation
Zhi Li,
Mengye Chen,
Shang Gao,
Zhen Hong,
Guoqiang Tang,
Yixin Wen,
Jonathan J. Gourley,
Yang Hong
Remote Sensing, Volume 12, Issue 8
Quantifying uncertainties of precipitation estimation, especially in extreme events, could benefit early warning of water-related hazards like flash floods and landslides. Rain gauges, weather radars, and satellites are three mainstream data sources used in measuring precipitation, each with its own inherent advantages and deficiencies. With a focus on extremes, the overarching goal of this study is to cross-examine the similarities and differences of three state-of-the-art independent products (Multi-Radar Multi-Sensor Quantitative Precipitation Estimates, MRMS; National Centers for Environmental Prediction gridded gauge-only hourly precipitation product, NCEP; Integrated Multi-satellitE Retrievals for GPM, IMERG), with both traditional metrics and the Multiplicative Triple Collocation (MTC) method during Hurricane Harvey and multiple tropical cyclones. The results reveal that: (a) the consistency of cross-examination results with traditional metrics confirms the applicability of MTC in extreme events; (b) the consistency of MTC evaluation results across events also suggests its robustness across individual storms; (c) all products demonstrate their capacity to capture the spatial and temporal variability of the storm structures while also magnifying their respective inherent deficiencies; (d) NCEP and IMERG likely underestimate while MRMS overestimates the storm total accumulation, especially for the 500-year-return-period Hurricane Harvey; (e) both NCEP and IMERG underestimate extreme rain rates (≥90 mm/h), likely due to device insensitivity or saturation, while MRMS remains robust across the rain rate range; (f) all three show inherent deficiencies in capturing the storm core of Harvey, possibly due to device malfunctions of the NCEP gauges, the relatively low spatiotemporal resolution of IMERG, and unusual “hot” MRMS radar signals. Given the unknown ground reference assumption of MTC, this study suggests that MRMS has the best overall performance.
The similarities, differences, advantages, and deficiencies revealed in this study could guide the users for emergency response and motivate the community not only to improve the respective sensor/algorithm but also innovate multidata merging methods for one best possible product, specifically suitable for extreme storm events.
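Triple collocation estimates each product's error variance from the pairwise covariances of three independent products observing the same truth; the multiplicative variant applies the same algebra to log-transformed precipitation. A minimal additive-model sketch with synthetic data (not the paper's implementation):

```python
import numpy as np

def triple_collocation_rmse(x, y, z):
    """Classic triple-collocation error standard deviations from the
    covariances of three products with mutually independent errors.
    For the multiplicative variant, pass log-transformed precipitation."""
    c = np.cov(np.vstack([x, y, z]))
    ex = np.sqrt(max(c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2], 0.0))
    ey = np.sqrt(max(c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2], 0.0))
    ez = np.sqrt(max(c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1], 0.0))
    return ex, ey, ez

rng = np.random.default_rng(0)
truth = rng.gamma(2.0, 2.0, 20000)           # unobserved "true" precipitation
x = truth + rng.normal(0.0, 0.5, truth.size)  # low-error product
y = truth + rng.normal(0.0, 1.0, truth.size)
z = truth + rng.normal(0.0, 2.0, truth.size)  # high-error product
ex, ey, ez = triple_collocation_rmse(x, y, z)
```

The appeal in extreme events is that no product needs to be designated as ground truth, which is also why the "unknown ground reference" caveat in the abstract matters when ranking the products.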
Abstract. Station-based serially complete datasets (SCDs) of precipitation and temperature observations are important for hydrometeorological studies. Motivated by the lack of serially complete station observations for North America, this study seeks to develop an SCD from 1979 to 2018 from station data. The new SCD for North America (SCDNA) includes daily precipitation, minimum temperature (Tmin), and maximum temperature (Tmax) data for 27 276 stations. Raw meteorological station data were obtained from the Global Historical Climatology Network Daily (GHCN-D), the Global Surface Summary of the Day (GSOD), Environment and Climate Change Canada (ECCC), and a compiled station database in Mexico. Stations with at least 8-year-long records were selected, which underwent location correction and were subjected to strict quality control. Outputs from three reanalysis products (ERA5, JRA-55, and MERRA-2) provided auxiliary information to estimate station records. Infilling during the observation period and reconstruction beyond the observation period were accomplished by combining estimates from 16 strategies (variants of quantile mapping, spatial interpolation, and machine learning). A sensitivity experiment was conducted by assuming that 30 % of observations from stations were missing – this enabled independent validation and provided a reference for reconstruction. Quantile mapping and mean value corrections were applied to the final estimates. The median Kling–Gupta efficiency (KGE′) values of the final SCDNA for all stations are 0.90, 0.98, and 0.99 for precipitation, Tmin, and Tmax, respectively. The SCDNA is closer to station observations than the four benchmark gridded products and can be used in applications that require either quality-controlled meteorological station observations or reconstructed long-term estimates for analysis and modeling. The dataset is available at https://doi.org/10.5281/zenodo.3735533 (Tang et al., 2020).
Abstract The Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (IMERG) produces the latest generation of satellite precipitation estimates and has been widely used since its release in 2014. IMERG V06 provides global rainfall and snowfall data beginning from 2000. This study comprehensively analyzes the quality of the IMERG product at daily and hourly scales in China from 2000 to 2018 with special attention paid to snowfall estimates. The performance of IMERG is compared with nine satellite and reanalysis products (TRMM 3B42, CMORPH, PERSIANN-CDR, GSMaP, CHIRPS, SM2RAIN, ERA5, ERA-Interim, and MERRA2). Results show that the IMERG product outperforms other datasets, except the Global Satellite Mapping of Precipitation (GSMaP), which uses daily-scale station data to adjust satellite precipitation estimates. The monthly-scale station data adjustment used by IMERG naturally has a limited impact on estimates of precipitation occurrence and intensity at the daily and hourly time scales. The quality of IMERG has improved over time, attributed to the increasing number of passive microwave samples. SM2RAIN, ERA5, and MERRA2 also exhibit increasing accuracy with time that may cause variable performance in climatological studies. Even though it relies only on monthly station data adjustments, IMERG shows good performance in both accuracy metrics at hourly time scales and the representation of diurnal cycles. In contrast, although ERA5 is acceptable at the daily scale, it degrades at the hourly scale due to limitations in reproducing the peak time, magnitude, and variation of diurnal cycles. IMERG underestimates snowfall compared with gauge and reanalysis data. The triple collocation analysis suggests that IMERG snowfall is worse than reanalysis and gauge data, which partly results in the degraded quality of IMERG in cold climates.
This study demonstrates new findings on the uncertainties of various precipitation products and identifies potential directions for algorithm improvement. The results of this study will be useful for both developers and users of satellite rainfall products.
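The diurnal-cycle comparison above reduces to an hour-of-day composite of the hourly series; a sketch on synthetic data (all values invented):

```python
import numpy as np

def diurnal_cycle(hourly):
    """Hour-of-day composite mean of an hourly series whose length is a
    multiple of 24 and which starts at hour 0."""
    return np.asarray(hourly).reshape(-1, 24).mean(axis=0)

# 30 days of synthetic hourly precipitation with an afternoon convective peak.
hours = np.tile(np.arange(24), 30)
rate = 1.0 + 4.0 * np.exp(-0.5 * ((hours - 15.0) / 2.0) ** 2)
rng = np.random.default_rng(0)
hourly = rng.poisson(rate).astype(float)
cycle = diurnal_cycle(hourly)
peak_hour = int(np.argmax(cycle))      # should land in mid-afternoon
```

Comparing the peak hour, amplitude, and shape of such composites across products is how the abstract's claims about ERA5's degraded diurnal cycle are quantified.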
Abstract Global gridded precipitation products have proven essential for many applications ranging from hydrological modeling and climate model validation to natural hazard risk assessment. They provide a global picture of how precipitation varies across time and space, specifically in regions where ground-based observations are scarce. While the application of global precipitation products has become widespread, there is limited knowledge on how well these products represent the magnitude and frequency of extreme precipitation—the key features in triggering flood hazards. Here, five global precipitation datasets (MSWEP, CFSR, CPC, PERSIANN-CDR, and WFDEI) are compared to each other and to surface observations. The spatial variability of relatively high precipitation events (tail heaviness) and the resulting discrepancy among datasets in the predicted precipitation return levels were evaluated for the time period 1979–2017. The analysis shows that 1) these products do not provide a consistent representation of the behavior of extremes as quantified by the tail heaviness, 2) there is strong spatial variability in the tail index, 3) the spatial patterns of the tail heaviness generally match the Köppen–Geiger climate classification, and 4) the predicted return levels for 100 and 1000 years differ significantly among the gridded products. More generally, our findings reveal shortcomings of global precipitation products in representing extremes and highlight that there is no single global product that performs best for all regions and climates.
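The return-level disagreement can be made concrete with a minimal extreme-value fit to annual maxima; a method-of-moments Gumbel sketch on synthetic data (the study uses heavier-tailed models, for which levels would be larger still):

```python
import numpy as np

def gumbel_return_level(annual_max, T):
    """T-year return level from a method-of-moments Gumbel fit to annual maxima."""
    am = np.asarray(annual_max, dtype=float)
    beta = np.sqrt(6.0) * am.std(ddof=1) / np.pi   # scale parameter
    mu = am.mean() - 0.5772 * beta                 # location parameter
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

# Synthetic 39-year record (1979-2017) of annual daily-precipitation maxima (mm),
# drawn from a Gumbel(mu=50, beta=10) by inverse-CDF sampling.
rng = np.random.default_rng(0)
annual_max = 50.0 - 10.0 * np.log(-np.log(rng.uniform(size=39)))
rl100 = gumbel_return_level(annual_max, 100)
rl1000 = gumbel_return_level(annual_max, 1000)
```

Because the 100- and 1000-year levels extrapolate far beyond a 39-year record, even modest differences in a gridded product's tail heaviness translate into the large return-level discrepancies the abstract reports.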