Tirthankar Roy


2023

Differentiable modelling to unify machine learning and physical models for geosciences
Chaopeng Shen, Alison P. Appling, Pierre Gentine, Toshiyuki Bandai, Hoshin Gupta, Alexandre M. Tartakovsky, Marco Baity‐Jesi, Fabrizio Fenicia, Daniel Kifer, Li Li, Xiaofeng Liu, Wei Ren, Yi Zheng, C. J. Harman, Martyn P. Clark, Matthew W. Farthing, Dapeng Feng, Kumar Prabhash, Doaa Aboelyazeed, Farshid Rahmani, Yalan Song, Hylke E. Beck, Tadd Bindas, Dipankar Dwivedi, Kuai Fang, Marvin Höge, Chris Rackauckas, Binayak P. Mohanty, Tirthankar Roy, Chonggang Xu, Kathryn Lawson
Nature Reviews Earth & Environment, Volume 4, Issue 8

Process-based modelling offers interpretability and physical consistency in many domains of geosciences but struggles to leverage large datasets efficiently. Machine-learning methods, especially deep networks, have strong predictive skills yet are unable to answer specific scientific questions. In this Perspective, we explore differentiable modelling as a pathway to dissolve the perceived barrier between process-based modelling and machine learning in the geosciences and demonstrate its potential with examples from hydrological modelling. ‘Differentiable’ refers to accurately and efficiently calculating gradients with respect to model variables or parameters, enabling the discovery of high-dimensional unknown relationships. Differentiable modelling involves connecting (flexible amounts of) prior physical knowledge to neural networks, pushing the boundary of physics-informed machine learning. It offers better interpretability, generalizability, and extrapolation capabilities than purely data-driven machine learning, achieving a similar level of accuracy while requiring less training data. Additionally, the performance and efficiency of differentiable models scale well with increasing data volumes. Under data-scarce scenarios, differentiable models have outperformed machine-learning models in producing short-term dynamics and decadal-scale trends owing to the imposed physical constraints. Differentiable modelling approaches are primed to enable geoscientists to ask questions, test hypotheses, and discover unrecognized physical relationships. Future work should address computational challenges, reduce uncertainty, and verify the physical significance of outputs. Differentiable modelling is an approach that flexibly integrates the learning capability of machine learning with the interpretability of process-based models. 
This Perspective highlights the potential of differentiable modelling to improve the representation of processes, parameter estimation, and predictive accuracy in the geosciences.
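The core idea in the abstract — exact, efficient gradients with respect to model parameters, propagated through a process-based model — can be sketched with a toy example. The following is my own minimal illustration, not the authors' code: a one-parameter linear-reservoir model whose storage sensitivity dS/dk is carried exactly through the time loop, so the recession constant k can be calibrated by gradient descent.

```python
# Toy sketch (not the paper's code): a "differentiable" linear reservoir.
# The sensitivity dS/dk is propagated exactly alongside the state, giving
# exact gradients dQ/dk for gradient-based calibration of k.

def simulate(k, precip, s0=10.0):
    """Return simulated flows Q_t and exact gradients dQ_t/dk."""
    s, ds = s0, 0.0                 # storage S and its sensitivity dS/dk
    flows, grads = [], []
    for p in precip:
        flows.append(k * s)
        grads.append(s + k * ds)    # chain rule: dQ/dk = S + k * dS/dk
        ds = (1.0 - k) * ds - s     # sensitivity update (uses current S)
        s = (1.0 - k) * s + p       # water balance: S <- S - k*S + P
    return flows, grads

precip = [3.0, 0.0, 5.0, 2.0, 0.0, 4.0, 1.0, 0.0, 0.0, 6.0]
k_true = 0.3
q_obs, _ = simulate(k_true, precip)  # synthetic "observations"

k = 0.05                             # deliberately wrong initial guess
for _ in range(500):                 # gradient descent on mean squared error
    q_sim, dq = simulate(k, precip)
    grad = sum(2.0 * (qs - qo) * g
               for qs, qo, g in zip(q_sim, q_obs, dq)) / len(q_obs)
    k -= 0.001 * grad

print(f"recovered k = {k:.3f}")      # approaches the true value 0.3
```

In practice this hand-derived sensitivity is what automatic differentiation frameworks compute for arbitrarily complex models; the learned component (here a single scalar k) can equally be a neural network embedded in the process model.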

2022

The Great Lakes Runoff Intercomparison Project Phase 4: The Great Lakes (GRIP-GL)
Juliane Mai, Helen C. Shen, Bryan A. Tolson, Étienne Gaborit, Richard Arsenault, James R. Craig, Vincent Fortin, Lauren M. Fry, Martin Gauch, Daniel Klotz, Frederik Kratzert, Nicole O'Brien, Daniel Princz, Sinan Rasiya Koya, Tirthankar Roy, Frank Seglenieks, Narayan Kumar Shrestha, André Guy Tranquille Temgoua, Vincent Vionnet, Jonathan W. Waddell
Hydrology and Earth System Sciences

Abstract. Model intercomparison studies are carried out to test and compare the simulated outputs of various model setups over the same study domain. The Great Lakes region is such a domain of high public interest as it not only resembles a challenging region to model with its trans-boundary location, strong lake effects, and regions of strong human impact but is also one of the most densely populated areas in the United States and Canada. This study brought together a wide range of researchers setting up their models of choice in a highly standardized experimental setup using the same geophysical datasets, forcings, common routing product, and locations of performance evaluation across the 1 million square kilometer study domain. The study comprises 13 models covering a wide range of model types from Machine Learning based, basin-wise, subbasin-based, and gridded models that are either locally or globally calibrated or calibrated for one of each of six predefined regions of the watershed. Unlike most hydrologically focused model intercomparisons, this study not only compares models regarding their capability to simulate streamflow (Q) but also evaluates the quality of simulated actual evapotranspiration (AET), surface soil moisture (SSM), and snow water equivalent (SWE). The latter three outputs are compared against gridded reference datasets. The comparisons are performed in two ways: either by aggregating model outputs and the reference to basin-level or by regridding all model outputs to the reference grid and comparing the model simulations at each grid-cell. The main results of this study are: (1) The comparison of models regarding streamflow reveals the superior quality of the Machine Learning based model in the performance of all experiments; even for the most challenging spatio-temporal validation the ML model outperforms any other physically based model.
(2) While the locally calibrated models lead to good performance in calibration and temporal validation (even outperforming several regionally calibrated models), they lose performance when they are transferred to locations the model has not been calibrated on. This is likely to be improved with more advanced strategies to transfer these models in space. (3) The regionally calibrated models – while losing less performance in spatial and spatio-temporal validation than locally calibrated models – exhibit low performances in highly regulated and urban areas as well as agricultural regions in the US. (4) Comparisons of additional model outputs (AET, SSM, SWE) against gridded reference datasets show that aggregating model outputs and the reference dataset to basin scale can lead to different conclusions than a comparison at the native grid scale. This is especially true for variables with large spatial variability such as SWE. (5) A multi-objective-based analysis of the model performances across all variables (Q, AET, SSM, SWE) reveals locally calibrated models with overall excellent performance (i.e., HYMOD2-lumped) as well as regionally calibrated models (i.e., MESH-SVS-Raven and GEM-Hydro-Watroute) due to varying reasons. The Machine Learning based model was not included here as it is not set up to simulate AET, SSM, and SWE. (6) All basin-aggregated model outputs and observations for the model variables evaluated in this study are available on an interactive website that enables users to visualize results and download data and model outputs.

The Great Lakes Runoff Intercomparison Project Phase 4: the Great Lakes (GRIP-GL)
Juliane Mai, Helen C. Shen, Bryan A. Tolson, Étienne Gaborit, Richard Arsenault, James R. Craig, Vincent Fortin, Lauren M. Fry, Martin Gauch, Daniel Klotz, Frederik Kratzert, Nicole O'Brien, Daniel Princz, Sinan Rasiya Koya, Tirthankar Roy, Frank Seglenieks, Narayan Kumar Shrestha, André Guy Tranquille Temgoua, Vincent Vionnet, Jonathan W. Waddell
Hydrology and Earth System Sciences, Volume 26, Issue 13

Abstract. Model intercomparison studies are carried out to test and compare the simulated outputs of various model setups over the same study domain. The Great Lakes region is such a domain of high public interest as it not only resembles a challenging region to model with its transboundary location, strong lake effects, and regions of strong human impact but is also one of the most densely populated areas in the USA and Canada. This study brought together a wide range of researchers setting up their models of choice in a highly standardized experimental setup using the same geophysical datasets, forcings, common routing product, and locations of performance evaluation across the 1×106 km2 study domain. The study comprises 13 models covering a wide range of model types from machine-learning-based, basin-wise, subbasin-based, and gridded models that are either locally or globally calibrated or calibrated for one of each of the six predefined regions of the watershed. Unlike most hydrologically focused model intercomparisons, this study not only compares models regarding their capability to simulate streamflow (Q) but also evaluates the quality of simulated actual evapotranspiration (AET), surface soil moisture (SSM), and snow water equivalent (SWE). The latter three outputs are compared against gridded reference datasets. The comparisons are performed in two ways – either by aggregating model outputs and the reference to basin level or by regridding all model outputs to the reference grid and comparing the model simulations at each grid-cell. The main results of this study are as follows: The comparison of models regarding streamflow reveals the superior quality of the machine-learning-based model in the performance of all experiments; even for the most challenging spatiotemporal validation, the machine learning (ML) model outperforms any other physically based model. 
While the locally calibrated models lead to good performance in calibration and temporal validation (even outperforming several regionally calibrated models), they lose performance when they are transferred to locations that the model has not been calibrated on. This is likely to be improved with more advanced strategies to transfer these models in space. The regionally calibrated models – while losing less performance in spatial and spatiotemporal validation than locally calibrated models – exhibit low performances in highly regulated and urban areas and agricultural regions in the USA. Comparisons of additional model outputs (AET, SSM, and SWE) against gridded reference datasets show that aggregating model outputs and the reference dataset to the basin scale can lead to different conclusions than a comparison at the native grid scale. The latter is deemed preferable, especially for variables with large spatial variability such as SWE. A multi-objective-based analysis of the model performances across all variables (Q, AET, SSM, and SWE) reveals overall well-performing locally calibrated models (i.e., HYMOD2-lumped) and regionally calibrated models (i.e., MESH-SVS-Raven and GEM-Hydro-Watroute) due to varying reasons. The machine-learning-based model was not included here as it is not set up to simulate AET, SSM, and SWE. All basin-aggregated model outputs and observations for the model variables evaluated in this study are available on an interactive website that enables users to visualize results and download the data and model outputs.
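The streamflow comparisons summarized above rely on scalar skill scores; the abstracts do not pin down a specific metric, so the following is a generic illustration using the Nash–Sutcliffe efficiency, a standard choice in such intercomparisons (1 is a perfect match, 0 means no better than predicting the observed mean).

```python
# Generic illustration (metric choice is mine, not fixed by the abstracts):
# Nash-Sutcliffe efficiency for a simulated vs. observed streamflow series.
# NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2)

def nse(sim, obs):
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [2.0, 3.5, 5.0, 4.0, 2.5]
print(nse(obs, obs))        # perfect match -> 1.0
print(nse([3.4] * 5, obs))  # constant at the observed mean -> 0.0
```

The same score can be computed per gauge and then summarized across the domain, or applied to basin-aggregated AET, SSM, or SWE series, which is where the basin-scale versus native-grid aggregation choice discussed above changes the conclusions.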

2021

Great Lakes Runoff Intercomparison Project Phase 3: Lake Erie (GRIP-E)
Juliane Mai, Bryan A. Tolson, Helen C. Shen, Étienne Gaborit, Vincent Fortin, Nicolas Gasset, Hervé Awoye, Tricia A. Stadnyk, Lauren M. Fry, Emily A. Bradley, Frank Seglenieks, André Guy Tranquille Temgoua, Daniel Princz, Shervan Gharari, Amin Haghnegahdar, Mohamed Elshamy, Saman Razavi, Martin Gauch, Jimmy Lin, Xiaojing Ni, Yongping Yuan, Meghan McLeod, N. B. Basu, Rohini Kumar, Oldřich Rakovec, Luis Samaniego, Sabine Attinger, Narayan Kumar Shrestha, Prasad Daggupati, Tirthankar Roy, Sungwook Wi, Timothy Hunter, James R. Craig, Alain Pietroniro
Journal of Hydrologic Engineering, Volume 26, Issue 9

AbstractHydrologic model intercomparison studies help to evaluate the agility of models to simulate variables such as streamflow, evaporation, and soil moisture. This study is the third in a sequen...

2019

Twenty-three unsolved problems in hydrology (UPH) – a community perspective
Günter Blöschl, M. F. Bierkens, António Chambel, Christophe Cudennec, Georgia Destouni, Aldo Fiori, J. W. Kirchner, Jeffrey J. McDonnell, H. H. G. Savenije, Murugesu Sivapalan, Christine Stumpp, Elena Toth, Elena Volpi, Gemma Carr, Claire Lupton, José Luis Salinas, Borbála Széles, Alberto Viglione, Hafzullah Aksoy, Scott T. Allen, Anam Amin, Vazken Andréassian, Berit Arheimer, Santosh Aryal, Victor R. Baker, Earl Bardsley, Marlies Barendrecht, Alena Bartošová, Okke Batelaan, Wouter Berghuijs, Keith Beven, Theresa Blume, Thom Bogaard, Pablo Borges de Amorim, Michael E. Böttcher, Gilles Boulet, Korbinian Breinl, Mitja Brilly, Luca Brocca, Wouter Buytaert, Attilio Castellarin, Andrea Castelletti, Xiaohong Chen, Yangbo Chen, Yuanfang Chen, Peter Chifflard, Pierluigi Claps, Martyn P. Clark, Adrian L. Collins, Barry Croke, Annette Dathe, Paula Cunha David, Felipe P. J. de Barros, Gerrit de Rooij, Giuliano Di Baldassarre, Jessica M. Driscoll, Doris Duethmann, Ravindra Dwivedi, Ebru Eriş, William Farmer, James Feiccabrino, Grant Ferguson, Ennio Ferrari, Stefano Ferraris, Benjamin Fersch, David C. Finger, Laura Foglia, Keirnan Fowler, Б. И. Гарцман, Simon Gascoin, Éric Gaumé, Alexander Gelfan, Josie Geris, Shervan Gharari, Tom Gleeson, Miriam Glendell, Alena Gonzalez Bevacqua, M. P. González‐Dugo, Salvatore Grimaldi, A.B. Gupta, Björn Guse, Dawei Han, David M. Hannah, A. A. Harpold, Stefan Haun, Kate Heal, Kay Helfricht, Mathew Herrnegger, Matthew R. Hipsey, Hana Hlaváčiková, Clara Hohmann, Ladislav Holko, C. Hopkinson, Markus Hrachowitz, Tissa H. Illangasekare, Azhar Inam, Camyla Innocente, Erkan Istanbulluoglu, Ben Jarihani, Zahra Kalantari, Andis Kalvāns, Sonu Khanal, Sina Khatami, Jens Kiesel, M. J. Kirkby, Wouter Knoben, Krzysztof Kochanek, Silvia Kohnová, Alla Kolechkina, Stefan Krause, David K. Kreamer, Heidi Kreibich, Harald Kunstmann, Holger Lange, Margarida L. R. Liberato, Eric Lindquist, Timothy E. Link, Junguo Liu, Daniel P. Loucks, Charles H. Luce, Gil Mahé, Olga Makarieva, Julien Malard, Shamshagul Mashtayeva, Shreedhar Maskey, Josep Mas‐Pla, Maria Mavrova-Guirguinova, Maurizio Mazzoleni, Sebastian H. Mernild, Bruce Misstear, Alberto Montanari, Hannes Müller-Thomy, Alireza Nabizadeh, Fernando Nardi, Christopher M. U. Neale, Nataliia Nesterova, Bakhram Nurtaev, V.O. Odongo, Subhabrata Panda, Saket Pande, Zhonghe Pang, Georgia Papacharalampous, Charles Perrin, Laurent Pfister, Rafael Pimentel, María José Polo, David Post, Cristina Prieto, Maria‐Helena Ramos, Maik Renner, José Eduardo Reynolds, Elena Ridolfi, Riccardo Rigon, Mònica Riva, David Robertson, Renzo Rosso, Tirthankar Roy, João Henrique Macedo Sá, Gianfausto Salvadori, Melody Sandells, Bettina Schaefli, Andreas Schumann, Anna Scolobig, Jan Seibert, Éric Servat, Mojtaba Shafiei, Ashish Sharma, Moussa Sidibé, Roy C. Sidle, Thomas Skaugen, Hugh G. Smith, Sabine M. Spiessl, Lina Stein, Ingelin Steinsland, Ulrich Strasser, Bob Su, Ján Szolgay, David G. Tarboton, Flavia Tauro, Guillaume Thirel, Fuqiang Tian, Rui Tong, Kamshat Tussupova, Hristos Tyralis, R. Uijlenhoet, Rens van Beek, Ruud van der Ent, Martine van der Ploeg, Anne F. Van Loon, Ilja van Meerveld, Ronald van Nooijen, Pieter van Oel, Jean‐Philippe Vidal, Jana von Freyberg, Sergiy Vorogushyn, Przemysław Wachniew, Andrew J. Wade, Philip J. Ward, Ida Westerberg, Christopher White, Eric F. Wood, Ross Woods, Zongxue Xu, Koray K. Yılmaz, Yongqiang Zhang
Hydrological Sciences Journal, Volume 64, Issue 10

This paper is the outcome of a community initiative to identify major unsolved scientific problems in hydrology motivated by a need for stronger harmonisation of research efforts. The procedure involved a public consultation through online media, followed by two workshops through which a large number of potential science questions were collated, prioritised, and synthesised. In spite of the diversity of the participants (230 scientists in total), the process revealed much about community priorities and the state of our science: a preference for continuity in research questions rather than radical departures or redirections from past and current work. Questions remain focused on the process-based understanding of hydrological variability and causality at all space and time scales. Increased attention to environmental change drives a new emphasis on understanding how change propagates across interfaces within the hydrological system and across disciplinary boundaries. In particular, the expansion of the human footprint raises a new set of questions related to human interactions with nature and water cycle feedbacks in the context of complex water management problems. We hope that this reflection and synthesis of the 23 unsolved problems in hydrology will help guide research efforts for some years to come.