Andrew J. Newman


2022

New projections of 21st century climate and hydrology for Alaska and Hawaiʻi
Naoki Mizukami, Andrew J. Newman, Jeremy S. Littell, Thomas W. Giambelluca, Andrew W. Wood, E. D. Gutmann, Joseph Hamman, Diana R. Gergel, Bart Nijssen, Martyn P. Clark, Jeffrey R. Arnold
Climate Services, Volume 27

In the United States, high-resolution, century-long, hydroclimate projection datasets have been developed for water resources planning, focusing on the contiguous United States (CONUS) domain. However, there are few statewide hydroclimate projection datasets available for Alaska and Hawaiʻi. The limited information on hydroclimatic change motivates developing hydrologic scenarios from 1950 to 2099 using climate-hydrology impact modeling chains consisting of multiple statistically downscaled climate projections as input to hydrologic model simulations for both states. We adopt an approach similar to the previous CONUS hydrologic assessments, where: 1) we select the outputs from ten global climate models (GCMs) from the Coupled Model Intercomparison Project Phase 5 with Representative Concentration Pathways 4.5 and 8.5; 2) we perform statistical downscaling to generate climate input data for hydrologic models (12-km grid spacing for Alaska and 1-km for Hawaiʻi); and 3) we perform process-based hydrologic model simulations. For Alaska, we have advanced the hydrologic model configuration beyond the CONUS setup by using the full water-energy balance computation, frozen soils, and a simple glacier model. The simulations show that robust warming and increases in precipitation produce runoff increases for most of Alaska, with runoff reductions in the currently glacierized areas of Southeast Alaska. For Hawaiʻi, we produce the projections at high resolution (1 km), which highlights the high spatial variability of climate variables across the state; the large spread in runoff across the GCMs is driven by a correspondingly large spread in precipitation. Our new ensemble datasets assist with statewide climate adaptation and other water planning.
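As a rough illustration of the three-step impact modeling chain described in this abstract (GCM selection, statistical downscaling, hydrologic simulation), the sketch below loops over hypothetical GCM/RCP combinations, applies a placeholder delta-change scaling in place of real statistical downscaling, and runs a toy single-bucket water-balance model in place of the process-based hydrologic models used in the study. All names, numbers, and model forms here are illustrative assumptions, not the paper's methods.

```python
# Illustrative climate-hydrology impact modeling chain with stand-in components.
import numpy as np

rng = np.random.default_rng(0)
gcms = [f"GCM{i:02d}" for i in range(1, 11)]   # ten hypothetical GCMs
rcps = ["RCP4.5", "RCP8.5"]

def toy_bucket_runoff(precip, pet, capacity=150.0):
    """Very simple daily water-balance model used only as a stand-in."""
    storage, runoff = 0.5 * capacity, np.zeros_like(precip)
    for t in range(precip.size):
        storage += precip[t]
        et = min(pet[t], storage)
        storage -= et
        if storage > capacity:            # saturation excess generates runoff
            runoff[t] = storage - capacity
            storage = capacity
    return runoff

days = 365
hist_precip = rng.gamma(0.6, 5.0, days)       # synthetic historical forcing (mm/day)
hist_pet = np.full(days, 2.0)
hist_q = toy_bucket_runoff(hist_precip, hist_pet).sum()

changes = {}
for gcm in gcms:
    for rcp in rcps:
        # placeholder "downscaled" future forcing: scale precipitation by a
        # GCM/RCP-specific delta instead of real statistical downscaling
        delta_p = 1.0 + rng.normal(0.10 if rcp == "RCP8.5" else 0.05, 0.05)
        fut_q = toy_bucket_runoff(hist_precip * delta_p, hist_pet).sum()
        changes[(gcm, rcp)] = 100.0 * (fut_q - hist_q) / hist_q

for rcp in rcps:
    vals = [v for (g, r), v in changes.items() if r == rcp]
    print(f"{rcp}: runoff change {np.mean(vals):+.1f}% "
          f"(spread {np.min(vals):+.1f}% to {np.max(vals):+.1f}%)")
```

The point of the structure is that the ensemble spread of the final runoff metric reflects the spread introduced at each upstream link of the chain, which is how a multi-GCM, multi-scenario dataset supports planning applications.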

Improving station-based ensemble surface meteorological analyses using numerical weather prediction: A case study of the Oroville Dam crisis precipitation event
Patrick Bunn, Andrew W. Wood, Andrew J. Newman, Hsin I. Chang, Christopher L. Castro, Martyn P. Clark, Jeffrey R. Arnold
Journal of Hydrometeorology

Abstract Surface meteorological analyses serve a wide range of research and applications, including forcing inputs for hydrological and ecological models, climate analysis, and resource and emergency management. Quantifying uncertainty in such analyses would extend their utility for probabilistic hydrologic prediction and climate risk applications. With this motivation, we enhance and evaluate an approach for generating ensemble analyses of precipitation and temperature through the fusion of station observations, terrain information, and numerical weather prediction simulations of surface climate fields. In particular, we expand a spatial regression in which static terrain attributes serve as predictors for spatially distributed 1/16th degree daily surface precipitation and temperature by including forecast outputs from the High-Resolution Rapid Refresh (HRRR) numerical weather prediction model as additional predictors. We demonstrate the approach for a case study domain of California, focusing on the meteorological conditions leading to the 2017 flood and spillway failure event at Lake Oroville. The approach extends the spatial regression capability of the Gridded Meteorological Ensemble Tool (GMET) and also adds cross-validation to the uncertainty estimation component, enabling the use of predictive rather than calibration uncertainty. In evaluation against out-of-sample station observations, the HRRR-based predictors alone are found to be skillful for the study setting, leading to overall improvements in the enhanced GMET meteorological analyses. The methodology and associated tool represent a promising method for generating meteorological surface analyses for both research-oriented and operational applications, as well as a general strategy for merging in situ and gridded observations.
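A minimal sketch of the central idea, using synthetic data: a station-based spatial regression on a terrain attribute is extended with an NWP forecast (HRRR-like) as an additional predictor, and leave-one-out cross-validation is used to characterize predictive rather than calibration error. The ordinary-least-squares form, variable names, and synthetic values are assumptions for illustration, not the GMET implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sta = 60
elev = rng.uniform(0.0, 3000.0, n_sta)             # terrain predictor (m)
hrrr = rng.gamma(0.7, 8.0, n_sta)                   # HRRR-like forecast predictor (mm)
obs = np.clip(0.004 * elev + 0.8 * hrrr + rng.normal(0, 2.0, n_sta), 0, None)

def loo_rmse(X, y):
    """Leave-one-out RMSE of an ordinary-least-squares spatial regression."""
    errs = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        coef, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        errs.append(y[i] - X[i] @ coef)
    return float(np.sqrt(np.mean(np.square(errs))))

X_terrain = np.column_stack([np.ones(n_sta), elev])
X_both = np.column_stack([np.ones(n_sta), elev, hrrr])
print(f"terrain only : LOO RMSE = {loo_rmse(X_terrain, obs):.2f} mm")
print(f"terrain + NWP: LOO RMSE = {loo_rmse(X_both, obs):.2f} mm")
```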

2021

Leveraging ensemble meteorological forcing data to improve parameter estimation of hydrologic models
Hongli Liu, Bryan A. Tolson, Andrew J. Newman, Andrew W. Wood
Hydrological Processes, Volume 35, Issue 11

As continental- to global-scale high-resolution meteorological datasets continue to be developed, sufficient meteorological datasets are now available for modellers to construct a historical forcing ensemble. The forcing ensemble can be a collection of multiple deterministic meteorological datasets or come from an ensemble meteorological dataset. In hydrological model calibration, the forcing ensemble can be used to represent forcing data uncertainty. This study examines the potential of using the forcing ensemble to identify more robust parameters through model calibration. Specifically, we compare an ensemble forcing-based calibration with two deterministic forcing-based calibrations and investigate their flow simulation and parameter estimation properties and their ability to resist poor-quality forcings. The comparison experiment is conducted with a six-parameter hydrological model for 30 synthetic studies and 20 real-data studies to provide a better assessment of the average performance of the deterministic and ensemble forcing-based calibrations. Results show that the ensemble forcing-based calibration generates parameter estimates that are less biased and cover the true parameter values more frequently than those from the deterministic forcing-based calibrations. Using a forcing ensemble in model calibration reduces the risk of inaccurate flow simulation caused by poor-quality meteorological inputs, and improves the reliability and overall simulation skill of ensemble simulation results. Poor-quality meteorological inputs can be effectively filtered out via our ensemble forcing-based calibration methodology and thus discarded in any post-calibration model applications. The proposed ensemble forcing-based calibration method can be considered a more generalized framework to include parameter and forcing uncertainties in model calibration.
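A minimal sketch of the idea of calibrating against a forcing ensemble: the objective function is evaluated for every forcing member and aggregated, rather than for a single deterministic forcing. The toy one-parameter runoff model, the synthetic forcings, and the grid search are illustrative assumptions, not the study's six-parameter model or its calibration algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n_days, n_members = 200, 10
true_k = 0.35
precip_true = rng.gamma(0.7, 6.0, n_days)
flow_obs = true_k * precip_true                     # synthetic "observed" flow

# a forcing ensemble: perturbed versions of the (unknown) true precipitation
forcing_ens = precip_true * rng.lognormal(0.0, 0.25, (n_members, n_days))

def rmse(k, precip):
    return np.sqrt(np.mean((k * precip - flow_obs) ** 2))

k_grid = np.linspace(0.1, 0.6, 201)

# deterministic calibration against a single (possibly biased) forcing member
k_det = k_grid[np.argmin([rmse(k, forcing_ens[0]) for k in k_grid])]

# ensemble calibration: aggregate (here, average) the objective over members
ens_obj = [np.mean([rmse(k, m) for m in forcing_ens]) for k in k_grid]
k_ens = k_grid[np.argmin(ens_obj)]

print(f"true k = {true_k:.2f}, deterministic = {k_det:.2f}, ensemble = {k_ens:.2f}")
```

Averaging the objective over members is only one possible aggregation choice; the key design point is that a parameter set is rewarded for performing well across plausible forcings rather than for compensating for the errors of one particular dataset.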

Identifying sensitivities in flood frequency analyses using a stochastic hydrologic modeling system
Andrew J. Newman, Amanda G. Stone, Manabendra Saharia, K. D. Holman, Nans Addor, Martyn P. Clark
Hydrology and Earth System Sciences, Volume 25, Issue 10

Abstract. This study employs a stochastic hydrologic modeling framework to evaluate the sensitivity of flood frequency analyses to different components of the hydrologic modeling chain. The major components of the stochastic hydrologic modeling chain, including model structure, model parameter estimation, initial conditions, and precipitation inputs, were examined across return periods from 2 to 100 000 years at two watersheds representing different hydroclimates across the western USA. A total of 10 hydrologic model structures were configured, calibrated, and run within the Framework for Understanding Structural Errors (FUSE) modular modeling framework for each of the two watersheds. Model parameters and initial conditions were derived from long-term calibrated simulations using a 100-member historical meteorology ensemble. A stochastic event-based hydrologic modeling workflow was developed using the calibrated models, in which millions of flood event simulations were performed for each basin. The analysis of variance method was then used to quantify the relative contributions of model structure, model parameters, initial conditions, and precipitation inputs to flood magnitudes for different return periods. Results demonstrate that different components of the modeling chain have different sensitivities for different return periods. Precipitation inputs contribute most to the variance of rare floods, while initial conditions are most influential for more frequent events. However, the hydrological model structure and structure–parameter interactions together play an equally important role in specific cases, depending on the basin characteristics and the type of flood metric of interest. This study highlights the importance of critically assessing model underpinnings, understanding flood generation processes, and selecting appropriate hydrological models that are consistent with our understanding of flood generation processes.
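An illustrative variance decomposition in the spirit of the analysis-of-variance attribution described above: synthetic flood peaks are generated for combinations of two factors (precipitation input and model structure), and each factor's main-effect share of the total variance is computed from group means. The synthetic effect sizes and the two-factor setup are assumptions for illustration only, not the study's experiment.

```python
import numpy as np

rng = np.random.default_rng(3)
n_precip, n_struct, n_rep = 20, 10, 5

precip_effect = rng.normal(0.0, 30.0, n_precip)         # strong factor
struct_effect = rng.normal(0.0, 10.0, n_struct)          # weaker factor
peaks = (100.0
         + precip_effect[:, None, None]
         + struct_effect[None, :, None]
         + rng.normal(0.0, 5.0, (n_precip, n_struct, n_rep)))

total_var = peaks.var()
var_precip = peaks.mean(axis=(1, 2)).var()               # main effect of precipitation
var_struct = peaks.mean(axis=(0, 2)).var()               # main effect of model structure

print(f"precipitation input explains {100 * var_precip / total_var:.0f}% of variance")
print(f"model structure explains     {100 * var_struct / total_var:.0f}% of variance")
```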

EMDNA: an Ensemble Meteorological Dataset for North America
Guoqiang Tang, Martyn P. Clark, Simon Michael Papalexiou, Andrew J. Newman, Andy Wood, Dominique Brunet, Paul H. Whitfield
Earth System Science Data, Volume 13, Issue 7

Abstract. Probabilistic methods are useful to estimate the uncertainty in spatial meteorological fields (e.g., the uncertainty in spatial patterns of precipitation and temperature across large domains). In ensemble probabilistic methods, “equally plausible” ensemble members are used to approximate the probability distribution, hence the uncertainty, of a spatially distributed meteorological variable conditioned on the available information. The ensemble members can be used to evaluate the impact of uncertainties in spatial meteorological fields for a myriad of applications. This study develops the Ensemble Meteorological Dataset for North America (EMDNA). EMDNA has 100 ensemble members with daily precipitation amount, mean daily temperature, and daily temperature range at 0.1° spatial resolution (approximately 10 km grid spacing) from 1979 to 2018, derived from a fusion of station observations and reanalysis model outputs. The station data used in EMDNA are from a serially complete dataset for North America (SCDNA) that fills gaps in precipitation and temperature measurements using multiple strategies. Outputs from three reanalysis products are regridded, corrected, and merged using Bayesian model averaging. Optimal interpolation (OI) is used to merge station- and reanalysis-based estimates. EMDNA estimates are generated using spatiotemporally correlated random fields to sample from the OI estimates. Evaluation results show that (1) the merged reanalysis estimates outperform raw reanalysis estimates, particularly in high latitudes and mountainous regions; (2) the OI estimates are more accurate than the reanalysis and station-based regression estimates, with the most notable improvements for precipitation evident in sparsely gauged regions; and (3) EMDNA estimates exhibit good performance according to the diagrams and metrics used for probabilistic evaluation. We discuss the limitations of the current framework and highlight that further research is needed to improve ensemble meteorological datasets. Overall, EMDNA is expected to be useful for hydrological and meteorological applications in North America. The entire dataset and a teaser dataset (a small subset of EMDNA for easy download and preview) are available at https://doi.org/10.20383/101.0275 (Tang et al., 2020a).
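A minimal sketch of an optimal interpolation (OI) update of the general kind used to merge a gridded background (here standing in for the reanalysis-based estimate) with station observations. The 1-D grid, Gaussian covariance model, and error variances are illustrative assumptions, not the EMDNA configuration.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 100.0, 101)                     # 1-D "grid" (km)
truth = 10.0 + 5.0 * np.sin(x / 15.0)
background = truth + rng.normal(0.0, 2.0, x.size)    # reanalysis-like first guess

sta_idx = np.array([10, 35, 60, 85])                 # station locations on the grid
obs = truth[sta_idx] + rng.normal(0.0, 0.5, sta_idx.size)

def gauss_cov(d, sigma2=4.0, length=20.0):
    """Assumed Gaussian background-error covariance as a function of distance."""
    return sigma2 * np.exp(-(d / length) ** 2)

# background-error covariances: grid-to-station and station-to-station
B_gs = gauss_cov(np.abs(x[:, None] - x[sta_idx][None, :]))
B_ss = gauss_cov(np.abs(x[sta_idx][:, None] - x[sta_idx][None, :]))
R = 0.25 * np.eye(sta_idx.size)                      # observation-error covariance

K = B_gs @ np.linalg.inv(B_ss + R)                   # OI gain
analysis = background + K @ (obs - background[sta_idx])

print(f"RMSE background: {np.sqrt(np.mean((background - truth) ** 2)):.2f}")
print(f"RMSE analysis  : {np.sqrt(np.mean((analysis - truth) ** 2)):.2f}")
```

In the dataset described above, ensemble members would then be drawn around such an analysis using spatiotemporally correlated random fields, which this sketch omits.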

Hydroclimatic changes in Alaska portrayed by a high-resolution regional climate simulation
Andrew J. Newman, Andrew Monaghan, Martyn P. Clark, Kyoko Ikeda, Lulin Xue, E. D. Gutmann, Jeffrey R. Arnold
Climatic Change, Volume 164, Issue 1-2

The Arctic has been warming faster than the global average during recent decades, and trends are projected to continue through the twenty-first century. Analysis of climate change impacts across the Arctic using dynamical models has almost exclusively been limited to outputs from global climate models or coarser regional climate models. Coarse resolution simulations limit the representation of physical processes, particularly in areas of complex topography and high land-surface heterogeneity. Here, current climate reference and future regional climate model simulations based on the RCP8.5 scenario over Alaska at 4 km grid spacing are compared to identify changes in snowfall and snowpack. In general, results show increases in total precipitation, large decreases in the snowfall fractional contribution (more than 30% in some areas), decreases in snowpack season length of 50–100 days at lower elevations and along the southern Alaskan coastline, and decreases in snow water equivalent. However, increases in snowfall and snowpack of sometimes greater than 20% are evident for some colder northern areas and at the highest elevations in southern Alaska. The most significant changes in snow cover and snowfall fractional contributions occur during the spring and fall seasons. Finally, the spatial pattern of winter temperatures above freezing has small-scale spatial features tied to the topography. Such areas would not be resolved with coarser resolution regional or global climate model simulations.
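A small worked example of the "snowfall fractional contribution" referred to above: precipitation is partitioned into snow and rain with a simple temperature threshold, and the fraction is compared between a current and a uniformly warmed synthetic climate. The 0 degC threshold and the +3 degC shift are illustrative assumptions, not the paper's 4 km simulations.

```python
import numpy as np

rng = np.random.default_rng(5)
n_days = 365
temp_c = 15.0 * np.sin(2 * np.pi * (np.arange(n_days) - 30) / 365) - 2.0
precip = rng.gamma(0.5, 6.0, n_days)                 # daily precipitation (mm)

def snow_fraction(temp, precip, threshold_c=0.0):
    """Fraction of total precipitation falling as snow under a threshold rule."""
    snow = np.where(temp <= threshold_c, precip, 0.0)
    return snow.sum() / precip.sum()

f_now = snow_fraction(temp_c, precip)
f_warm = snow_fraction(temp_c + 3.0, precip)         # uniform warming scenario
print(f"snowfall fraction: {f_now:.2f} -> {f_warm:.2f} "
      f"({100 * (f_warm - f_now):+.0f} percentage points)")
```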

2020

EMDNA: Ensemble Meteorological Dataset for North America
Guoqiang Tang, Martyn P. Clark, Simon Michael Papalexiou, Andrew J. Newman, Andy Wood, V. Vionnet, Paul H. Whitfield

Abstract. Probabilistic methods are very useful to estimate the spatial variability in meteorological conditions (e.g., spatial patterns of precipitation and temperature across large domains). In ensemble probabilistic methods, equally plausible ensemble members are used to approximate the probability distribution, hence uncertainty, of a spatially distributed meteorological variable conditioned on the available information. The ensemble can be used to evaluate the impact of the uncertainties in a myriad of applications. This study develops the Ensemble Meteorological Dataset for North America (EMDNA). EMDNA has 100 members with daily precipitation amount, mean daily temperature, and daily temperature range at 0.1° spatial resolution from 1979 to 2018, derived from a fusion of station observations and reanalysis model outputs. The station data used in EMDNA are from a serially complete dataset for North America (SCDNA) that fills gaps in precipitation and temperature measurements using multiple strategies. Outputs from three reanalysis products are regridded, corrected, and merged using Bayesian model averaging. Optimal interpolation (OI) is used to merge station- and reanalysis-based estimates. EMDNA estimates are generated based on OI estimates and spatiotemporally correlated random fields. Evaluation results show that (1) the merged reanalysis estimates outperform raw reanalysis estimates, particularly in high latitudes and mountainous regions; (2) the OI estimates are more accurate than the reanalysis and station-based regression estimates, with the most notable improvement for precipitation occurring in sparsely gauged regions; and (3) EMDNA estimates exhibit good performance according to the diagrams and metrics used for probabilistic evaluation. We also discuss the limitations of the current framework and highlight that persistent efforts are needed to further develop probabilistic methods and ensemble datasets. Overall, EMDNA is expected to be useful for hydrological and meteorological applications in North America. The whole dataset and a teaser dataset (a small subset of EMDNA for easy download and preview) are available at https://doi.org/10.20383/101.0275 (Tang et al., 2020a).

SCDNA: a serially complete precipitation and temperature dataset for North America from 1979 to 2018
Guoqiang Tang, Martyn P. Clark, Andrew J. Newman, Andy Wood, Simon Michael Papalexiou, Vincent Vionnet, Paul H. Whitfield
Earth System Science Data, Volume 12, Issue 4

Abstract. Station-based serially complete datasets (SCDs) of precipitation and temperature observations are important for hydrometeorological studies. Motivated by the lack of serially complete station observations for North America, this study seeks to develop an SCD from 1979 to 2018 from station data. The new SCD for North America (SCDNA) includes daily precipitation, minimum temperature (Tmin), and maximum temperature (Tmax) data for 27 276 stations. Raw meteorological station data were obtained from the Global Historical Climate Network Daily (GHCN-D), the Global Surface Summary of the Day (GSOD), Environment and Climate Change Canada (ECCC), and a compiled station database in Mexico. Stations with at least 8-year-long records were selected, which underwent location correction and were subjected to strict quality control. Outputs from three reanalysis products (ERA5, JRA-55, and MERRA-2) provided auxiliary information to estimate station records. Infilling during the observation period and reconstruction beyond the observation period were accomplished by combining estimates from 16 strategies (variants of quantile mapping, spatial interpolation, and machine learning). A sensitivity experiment was conducted by assuming that 30 % of observations from stations were missing – this enabled independent validation and provided a reference for reconstruction. Quantile mapping and mean value corrections were applied to the final estimates. The median Kling–Gupta efficiency (KGE′) values of the final SCDNA for all stations are 0.90, 0.98, and 0.99 for precipitation, Tmin, and Tmax, respectively. The SCDNA is closer to station observations than the four benchmark gridded products and can be used in applications that require either quality-controlled meteorological station observations or reconstructed long-term estimates for analysis and modeling. The dataset is available at https://doi.org/10.5281/zenodo.3735533 (Tang et al., 2020).
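A minimal sketch of quantile-mapping-based gap filling of the general kind described above: reanalysis values on days with missing station data are mapped through the empirical quantile relationship between the overlapping reanalysis and station records. The synthetic records, the 30% withholding, and the simple empirical mapping are illustrative assumptions, not the SCDNA procedure.

```python
import numpy as np

rng = np.random.default_rng(6)
n_days = 2000
station = rng.gamma(0.6, 7.0, n_days)                       # "true" station record (mm)
reanalysis = 0.7 * station + rng.gamma(0.3, 2.0, n_days)    # biased auxiliary estimate

missing = rng.random(n_days) < 0.3                          # withhold 30% of days
obs = np.where(missing, np.nan, station)

# build the empirical quantile mapping from days where both records are available
train = ~missing
q = np.linspace(0.01, 0.99, 99)
rea_q = np.quantile(reanalysis[train], q)
sta_q = np.quantile(obs[train], q)

filled = obs.copy()
filled[missing] = np.interp(reanalysis[missing], rea_q, sta_q)

err = filled[missing] - station[missing]
print(f"infill bias: {err.mean():+.2f} mm, infill RMSE: {np.sqrt(np.mean(err**2)):.2f} mm")
```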

Probabilistic Spatial Meteorological Estimates for Alaska and the Yukon
Andrew J. Newman, Martyn P. Clark, Andy Wood, Jeffrey R. Arnold
Journal of Geophysical Research: Atmospheres, Volume 125, Issue 22

It is challenging to develop observationally based spatial estimates of meteorology in Alaska and the Yukon. Complex topography, frozen precipitation undercatch, and extremely sparse in situ observations all limit our capability to produce accurate spatial estimates of meteorological conditions. In this Arctic environment, it is necessary to develop probabilistic estimates of precipitation and temperature that explicitly incorporate spatiotemporally varying uncertainty and bias corrections. In this paper we exploit the recently developed ensemble Climatologically Aided Interpolation (eCAI) system to produce daily historical estimates of precipitation and temperature across Alaska and the Yukon Territory at a 2 km grid spacing for the time period 1980–2013. We extend the previous eCAI method to address precipitation gauge undercatch and wetting loss, which is of high importance for this high-latitude region where much of the precipitation falls as snow. Leave-one-out cross-validation shows our ensemble has little bias in daily precipitation and mean temperature at the station locations, with an overestimate in the daily standard deviation of precipitation. The ensemble is statistically reliable compared to climatology and can discriminate precipitation events across different precipitation thresholds. Long-term mean loss-adjusted precipitation is up to 36% greater than the unadjusted estimate in windy areas that receive a large fraction of frozen precipitation, primarily due to wind-induced undercatch. Comparing the ensemble mean climatology of precipitation and temperature to PRISM and Daymet v3 shows large interproduct differences, particularly in precipitation across the complex terrain of southeast and northern Alaska.
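An illustrative gauge-undercatch adjustment of the general form used for frozen precipitation: measured amounts are divided by a catch efficiency that decreases with wind speed. The exponential catch-efficiency curve and its coefficients below are placeholders, not the transfer functions applied in the paper.

```python
import numpy as np

def catch_efficiency(wind_ms, a=0.2, floor=0.4):
    """Hypothetical catch efficiency: 1 in calm conditions, decaying with wind speed."""
    return np.maximum(np.exp(-a * np.asarray(wind_ms)), floor)

measured_mm = np.array([5.0, 5.0, 5.0])
wind_ms = np.array([0.0, 4.0, 8.0])

adjusted_mm = measured_mm / catch_efficiency(wind_ms)
for w, m, adj in zip(wind_ms, measured_mm, adjusted_mm):
    print(f"wind {w:3.0f} m/s: measured {m:.1f} mm -> adjusted {adj:.1f} mm")
```

This is why the adjusted long-term means quoted above diverge most from the unadjusted estimates in windy areas with a high frozen-precipitation fraction: the divisor is smallest exactly there.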

TIER version 1.0: an open-source Topographically InformEd Regression (TIER) model to estimate spatial meteorological fields
Andrew J. Newman, Martyn P. Clark
Geoscientific Model Development, Volume 13, Issue 4

Abstract. This paper introduces the Topographically InformEd Regression (TIER) model, which uses terrain attributes in a regression framework to distribute in situ observations of precipitation and temperature to a grid. The framework enables our understanding of complex atmospheric processes (e.g., orographic precipitation) to be encoded into a statistical model in an easy-to-understand manner. TIER is developed in a modular fashion with key model parameters exposed to the user. This enables the user community to systematically explore the impacts of the methodological choices made to distribute sparse, irregularly spaced observations to a grid. The modular design also allows new capabilities to be incorporated into TIER. Intermediate processing variables are output to provide a more complete understanding of the algorithm and any algorithmic changes. The framework also provides uncertainty estimates. This paper presents a brief model evaluation and demonstrates that the TIER algorithm is functioning as expected. Several variations in model parameters and changes in the distributed variables are described. A key conclusion is that seemingly small changes in a model parameter can result in large changes to the final distributed fields and their associated uncertainty estimates.
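A minimal sketch of a topographically informed regression of the kind TIER implements: precipitation at a target grid cell is estimated from nearby stations with a distance-weighted regression on elevation, with the distance length scale exposed as a user-visible parameter. The synthetic stations, the Gaussian weighting form, and the residual-based uncertainty proxy are illustrative assumptions, not the TIER code.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sta = 40
xy = rng.uniform(0, 50, (n_sta, 2))                    # station coordinates (km)
elev = rng.uniform(200, 2500, n_sta)                   # station elevations (m)
precip = 300 + 0.4 * elev + rng.normal(0, 80, n_sta)   # annual precipitation (mm)

def tier_like_estimate(target_xy, target_elev, length_km):
    """Distance-weighted regression on elevation; length_km is the exposed parameter."""
    d = np.hypot(*(xy - target_xy).T)
    w = np.exp(-(d / length_km) ** 2)
    X = np.column_stack([np.ones(n_sta), elev])
    W = np.diag(w)
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ precip)
    resid = precip - X @ coef
    uncert = np.sqrt(np.sum(w * resid**2) / np.sum(w))  # weighted residual spread
    return coef[0] + coef[1] * target_elev, uncert

for length in (5.0, 15.0, 40.0):
    est, unc = tier_like_estimate(np.array([25.0, 25.0]), 1800.0, length)
    print(f"length scale {length:4.0f} km: estimate {est:6.0f} mm +/- {unc:.0f} mm")
```

Varying the exposed length-scale parameter in this toy setup echoes the paper's key conclusion: small parameter changes can shift both the distributed estimate and its uncertainty appreciably.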

2019

On the choice of calibration metrics for “high-flow” estimation using hydrologic models
Naoki Mizukami, Oldřich Rakovec, Andrew J. Newman, Martyn P. Clark, Andy Wood, Hoshin Gupta, Rohini Kumar
Hydrology and Earth System Sciences, Volume 23, Issue 6

Abstract. Calibration is an essential step for improving the accuracy of simulations generated using hydrologic models. A key modeling decision is selecting the performance metric to be optimized. It has been common to use squared-error performance metrics, or normalized variants such as Nash–Sutcliffe efficiency (NSE), based on the idea that their squared-error nature will emphasize the estimates of high flows. However, we conclude that NSE-based model calibrations actually result in poor reproduction of high-flow events, such as the annual peak flows that are used for flood frequency estimation. Using three different types of performance metrics, we calibrate two hydrological models at a daily time step, the Variable Infiltration Capacity (VIC) model and the mesoscale Hydrologic Model (mHM), and evaluate their ability to simulate high-flow events for 492 basins throughout the contiguous United States. The metrics investigated are (1) NSE, (2) Kling–Gupta efficiency (KGE) and its variants, and (3) annual peak flow bias (APFB), where the latter is an application-specific metric that focuses on annual peak flows. As expected, the APFB metric produces the best annual peak flow estimates; however, performance on other high-flow-related metrics is poor. In contrast, the use of NSE results in annual peak flow estimates that are more than 20 % worse, primarily due to the tendency of NSE to underestimate observed flow variability. On the other hand, the use of KGE results in annual peak flow estimates that are better than from NSE, owing to improved flow time series metrics (mean and variance), with only a slight degradation in performance with respect to other related metrics, particularly when a non-standard weighting of the components of KGE is used. Stochastically generated ensemble simulations based on model residuals show the ability to improve the high-flow metrics, regardless of the deterministic performance. However, we emphasize that improving the fidelity of streamflow dynamics from deterministically calibrated models is still important, as it may improve high-flow metrics (for the right reasons). Overall, this work highlights the need for a deeper understanding of performance metric behavior and design in relation to the desired goals of model calibration.
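The three kinds of metrics discussed above, written out for a daily flow series: Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE), and a simple annual-peak-flow bias measure. NSE and KGE follow their standard definitions; the peak-flow measure here (root-mean-square relative error of annual maxima) is an illustrative formulation and not necessarily the exact APFB definition used in the paper.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta efficiency (standard, equally weighted components)."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()          # variability ratio
    beta = sim.mean() / obs.mean()         # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def annual_peak_bias(sim, obs, days_per_year=365):
    """RMS relative error of annual maximum flows (illustrative peak-flow metric)."""
    n_years = len(obs) // days_per_year
    s = sim[:n_years * days_per_year].reshape(n_years, days_per_year).max(axis=1)
    o = obs[:n_years * days_per_year].reshape(n_years, days_per_year).max(axis=1)
    return np.sqrt(np.mean((s / o - 1.0) ** 2))

rng = np.random.default_rng(8)
obs = rng.gamma(2.0, 10.0, 10 * 365)
sim = 0.8 * obs + rng.normal(0.0, 5.0, obs.size)   # simulation with damped variability
print(f"NSE = {nse(sim, obs):.2f}, KGE = {kge(sim, obs):.2f}, "
      f"peak bias = {annual_peak_bias(sim, obs):.2f}")
```

The damped-variability simulation in the example illustrates the mechanism the abstract points to: a model can score acceptably on NSE while its alpha (variability) component, and hence its annual peaks, are systematically low.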

Methodological Intercomparisons of Station-Based Gridded Meteorological Products: Utility, Limitations, and Paths Forward
Andrew J. Newman, Martyn P. Clark, Ryan J. Longman, Thomas W. Giambelluca
Journal of Hydrometeorology, Volume 20, Issue 3

Abstract This study presents a gridded meteorology intercomparison using the State of Hawaii as a testbed. This is motivated by the goal of providing the broad user community with knowledge of interproduct differences and the reasons those differences exist. More generally, the challenge of generating station-based gridded meteorological surfaces and the difficulties in attributing interproduct differences to specific methodological decisions are demonstrated. Hawaii is a useful testbed because it is traditionally underserved, yet meteorologically interesting and complex. In addition, several climatological and daily gridded meteorology datasets are now available, which are used extensively by the applications modeling community; thus, an intercomparison enhances Hawaiʻi-specific capabilities. We compare PRISM climatology and three daily datasets: new datasets from the University of Hawai‘i and the National Center for Atmospheric Research, and Daymet version 3 for precipitation and temperature variables only. General conclusions that have emerged are 1) differences in input station data significantly influence the product differences, 2) explicit prediction of precipitation occurrence is crucial across multiple metrics, and 3) attribution of differences to specific methodological choices is difficult and limits the usefulness of intercomparisons. Because generating gridded meteorological fields is an elaborate process with many methodological choices interacting in complex ways, future work should 1) develop modular frameworks that allow users to easily examine the breadth of methodological choices, 2) collate available nontraditional high-quality observational datasets for true out-of-sample validation and make them publicly available, and 3) define benchmarks of acceptable performance for methodological components and products.

Diagnostic Evaluation of Large‐Domain Hydrologic Models Calibrated Across the Contiguous United States
Oldřich Rakovec, Naoki Mizukami, Rohini Kumar, Andrew J. Newman, Stephan Thober, Andrew W. Wood, Martyn P. Clark, Luis Samaniego
Journal of Geophysical Research: Atmospheres, Volume 124, Issue 24

This study presents a diagnostic evaluation of two large‐domain hydrologic models: the mesoscale Hydrologic Model (mHM) and the Variable Infiltration Capacity (VIC) model over the contiguous United States (CONUS). These models have been calibrated using the Multiscale Parameter Regionalization scheme in a joint, multibasin approach across 492 medium‐sized basins spanning the CONUS, yielding spatially distributed model parameter sets. The mHM simulations are used as a performance benchmark to examine performance deficiencies in the VIC model. We find that after calibration to streamflow, VIC generally overestimates the magnitude and temporal variability of evapotranspiration (ET) as compared to mHM as well as the FLUXNET observation‐based ET product, resulting in underestimation of the mean and variability of runoff. We perform a controlled calibration experiment to investigate the effect of varying the number of transfer function parameters in mHM and to enable a fair comparison between both models (14 and 48 for mHM vs. 14 for VIC). Results of this experiment show similar behavior of mHM with 14 and 48 parameters. Furthermore, we diagnose the internal functioning of the VIC model by examining the relationship between the evaporative fraction and the degree of soil saturation and compare it with that of the mHM model, which has a different model structure and a prescribed nonlinear relationship between these variables, and exhibits better model skill than VIC. Despite these limitations, the VIC‐based CONUS‐wide calibration constrained against streamflow exhibits better ET skill as compared to two preexisting independent VIC studies.
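An illustration of a prescribed nonlinear relationship between evaporative fraction and degree of soil saturation, the kind of internal model relationship compared between the two models above. The power-law form and exponent are illustrative assumptions, not the functions actually used in mHM or VIC.

```python
import numpy as np

def evaporative_fraction(saturation, exponent=0.5):
    """Illustrative fraction of potential ET realized as actual ET vs. soil saturation."""
    return np.clip(np.asarray(saturation), 0.0, 1.0) ** exponent

for s in (0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"soil saturation {s:4.2f} -> evaporative fraction {evaporative_fraction(s):.2f}")
```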