Rohini Kumar


2021

Great Lakes Runoff Intercomparison Project Phase 3: Lake Erie (GRIP-E)
Juliane Mai, Bryan A. Tolson, Hongren Shen, Étienne Gaborit, Vincent Fortin, Nicolas Gasset, Hervé Awoye, Tricia A. Stadnyk, Lauren M. Fry, Emily A. Bradley, Frank Seglenieks, André Guy Tranquille Temgoua, Daniel Princz, Shervan Gharari, Amin Haghnegahdar, Mohamed Elshamy, Saman Razavi, Martin Gauch, Jimmy Lin, Xiaojing Ni, Yongping Yuan, Meghan McLeod, N. B. Basu, Rohini Kumar, Oldřich Rakovec, Luis Samaniego, Sabine Attinger, Narayan Kumar Shrestha, Prasad Daggupati, Tirthankar Roy, Sungwook Wi, Timothy Hunter, James R. Craig, Alain Pietroniro
Journal of Hydrologic Engineering, Volume 26, Issue 9

Abstract: Hydrologic model intercomparison studies help to evaluate the agility of models to simulate variables such as streamflow, evaporation, and soil moisture. This study is the third in a sequen...


2019

On the choice of calibration metrics for “high-flow” estimation using hydrologic models
Naoki Mizukami, Oldřich Rakovec, Andrew J. Newman, Martyn Clark, Andrew W. Wood, Hoshin V. Gupta, Rohini Kumar
Hydrology and Earth System Sciences, Volume 23, Issue 6

Abstract. Calibration is an essential step for improving the accuracy of simulations generated using hydrologic models. A key modeling decision is selecting the performance metric to be optimized. It has been common to use squared-error performance metrics, or normalized variants such as Nash–Sutcliffe efficiency (NSE), based on the idea that their squared-error nature will emphasize the estimates of high flows. However, we conclude that NSE-based model calibrations actually result in poor reproduction of high-flow events, such as the annual peak flows that are used for flood frequency estimation. Using three different types of performance metrics, we calibrate two hydrological models at a daily time step, the Variable Infiltration Capacity (VIC) model and the mesoscale Hydrologic Model (mHM), and evaluate their ability to simulate high-flow events for 492 basins throughout the contiguous United States. The metrics investigated are (1) NSE, (2) Kling–Gupta efficiency (KGE) and its variants, and (3) annual peak flow bias (APFB), where the latter is an application-specific metric that focuses on annual peak flows. As expected, the APFB metric produces the best annual peak flow estimates; however, performance on other high-flow-related metrics is poor. In contrast, the use of NSE results in annual peak flow estimates that are more than 20 % worse, primarily due to the tendency of NSE to underestimate observed flow variability. On the other hand, the use of KGE results in annual peak flow estimates that are better than those from NSE, owing to improved flow time series metrics (mean and variance), with only a slight degradation in performance with respect to other related metrics, particularly when a non-standard weighting of the components of KGE is used. Stochastically generated ensemble simulations based on model residuals can improve the high-flow metrics, regardless of the deterministic model's performance.
However, we emphasize that improving the fidelity of streamflow dynamics from deterministically calibrated models is still important, as it may improve high-flow metrics (for the right reasons). Overall, this work highlights the need for a deeper understanding of performance metric behavior and design in relation to the desired goals of model calibration.
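The metrics compared in this abstract can be sketched in a few lines of NumPy. NSE and KGE follow their standard definitions (KGE with the optional component weighting the abstract alludes to); the APFB function below is one plausible form of an annual-peak-flow bias, not necessarily the exact definition used in the paper.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the obs mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim, weights=(1.0, 1.0, 1.0)):
    """Kling-Gupta efficiency; `weights` scale the (r, alpha, beta) components,
    mirroring the non-standard component weighting mentioned in the abstract."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]    # linear correlation
    alpha = sim.std() / obs.std()      # variability ratio
    beta = sim.mean() / obs.mean()     # bias ratio
    wr, wa, wb = weights
    return 1.0 - np.sqrt((wr * (r - 1)) ** 2
                         + (wa * (alpha - 1)) ** 2
                         + (wb * (beta - 1)) ** 2)

def apfb(obs, sim, years):
    """Annual peak flow bias (illustrative form only): squared relative error
    of the mean annual peak; 0 is perfect."""
    obs, sim, years = (np.asarray(a) for a in (obs, sim, years))
    peaks_o = [obs[years == y].max() for y in np.unique(years)]
    peaks_s = [sim[years == y].max() for y in np.unique(years)]
    return (np.mean(peaks_s) / np.mean(peaks_o) - 1.0) ** 2
```

Optimizing APFB only constrains the annual maxima, which is consistent with the abstract's finding that it degrades other high-flow metrics, while KGE's explicit variability term (alpha) counteracts NSE's tendency to underestimate flow variability.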

Diagnostic Evaluation of Large‐Domain Hydrologic Models Calibrated Across the Contiguous United States
Oldřich Rakovec, Naoki Mizukami, Rohini Kumar, Andrew J. Newman, Stephan Thober, Andrew W. Wood, Martyn Clark, Luis Samaniego
Journal of Geophysical Research: Atmospheres, Volume 124, Issue 24

This study presents a diagnostic evaluation of two large‐domain hydrologic models, the mesoscale Hydrologic Model (mHM) and the Variable Infiltration Capacity (VIC) model, over the contiguous United States (CONUS). These models have been calibrated using the Multiscale Parameter Regionalization scheme in a joint, multibasin approach using 492 medium‐sized basins across the CONUS, yielding spatially distributed model parameter sets. The mHM simulations are used as a performance benchmark to examine performance deficiencies in the VIC model. We find that after calibration to streamflow, VIC generally overestimates the magnitude and temporal variability of evapotranspiration (ET) as compared to mHM as well as the FLUXNET observation‐based ET product, resulting in underestimation of the mean and variability of runoff. We perform a controlled calibration experiment to investigate the effect of a varying number of transfer function parameters in mHM and to enable a fair comparison between both models (14 and 48 for mHM vs. 14 for VIC). Results of this experiment show similar behavior of mHM with 14 and 48 parameters. Furthermore, we diagnose the internal functioning of the VIC model by looking at the relationship of the evaporative fraction versus the degree of soil saturation and compare it with that of the mHM model, which has a different model structure with a prescribed nonlinear relationship between these variables, and which exhibits better model skill than VIC. Despite these limitations, the VIC‐based CONUS‐wide calibration constrained against streamflow exhibits better ET skill as compared to two preexisting independent VIC studies.
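The evaporative-fraction-versus-soil-saturation diagnostic described in this abstract is typically built by binning model output: group time steps by degree of soil saturation and average the evaporative fraction within each bin. The sketch below assumes both quantities are already available as series normalized to [0, 1]; the paper's exact definition of evaporative fraction may differ from whatever upstream computation produces these inputs.

```python
import numpy as np

def binned_ef_curve(soil_sat, evap_frac, n_bins=10):
    """Bin the degree of soil saturation in [0, 1] and average the
    evaporative fraction within each bin, producing the kind of
    EF-vs-saturation curve used to diagnose model structure."""
    soil_sat = np.asarray(soil_sat, float)
    evap_frac = np.asarray(evap_frac, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # assign each time step to a saturation bin (clip keeps sat == 1.0 in range)
    idx = np.clip(np.digitize(soil_sat, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([evap_frac[idx == b].mean() if np.any(idx == b)
                      else np.nan for b in range(n_bins)])
    return centers, means
```

Plotting the resulting curve for each model against the same curve from observations (or from a benchmark model such as mHM, whose EF-saturation relationship is prescribed) exposes structural differences that aggregate streamflow metrics can hide.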