Abstract Excess nutrient inputs from agricultural and urban sources have accelerated eutrophication and increased the incidence of algal blooms in the Great Lakes Basin (GLB). Lake basin management to address these threats relies on understanding the key drivers of pollution. Here, we use a random forest machine learning model to leverage information from 202 monitored streams in the GLB to predict seasonal and annual flow‐weighted concentrations of nitrogen and phosphorus, as well as nutrient ratios across the GLB. Land use (agricultural and urban land) and land management (tile drainage and wetland density) emerge as the two most important predictors for dissolved inorganic nitrogen (DIN; NO₃⁻ + NO₂⁻) and soluble reactive phosphorus (SRP; PO₄³⁻), while soil type and wetland density are more important for particulate P (PP). Partial dependence plots demonstrate increasing nutrient concentrations with increasing tile density and decreasing wetland density. In addition, increasing tile and livestock densities and decreasing forest cover correspond to higher SRP:Total Phosphorus (TP) ratios. Seasonally, the highest proportions of SRP occur in summer and fall. Higher livestock densities are also correlated with increasing N:P (DIN:TP) ratios. Livestock operations can contribute to the buildup of soil nutrients from excess manure application, while increasing subsurface drainage can provide transport pathways for dissolved nutrients. Given that both SRP:TP and the N:P ratios are strong predictors of harmful algal blooms, our study highlights the importance of livestock management, drainage management, and wetland restoration in efforts to address eutrophication in intensively managed landscapes.
Abstract Excess nitrogen from intensive agricultural production, atmospheric N deposition, and urban point sources elevates stream nitrate concentrations, leading to problems of eutrophication and ecosystem degradation in coastal waters. A major emphasis of current US‐scale analysis of water quality is to improve our understanding of the relationship between changes in anthropogenic N inputs within watersheds and subsequent changes in riverine N loads. While most water quality modeling assumes a positive linear correlation between watershed N inputs and riverine N, many efforts to reduce riverine N through improved nutrient management practices result in little or no short‐term improvements in water quality. Here, we use nitrate concentration and load data from 478 US watersheds, along with developed N input trajectories for these watersheds, to quantify time‐varying relationships between N inputs and riverine N export. Our results show substantial variations in watershed N import‐export relationships over time, with quantifiable hysteresis effects. We find that densely populated urban watersheds in the northeastern U.S. more frequently show clockwise hysteresis relationships between N imports and riverine N export, with accelerated improvements in water quality being achieved through the implementation of point‐source controls. In contrast, counterclockwise hysteresis dynamics are more common in agricultural watersheds, where time lags occur between the implementation of nutrient management practices and water‐quality improvements. Finally, we find higher tile‐drainage densities to be associated with more linear relationships between N inputs and riverine N. The empirical analysis in this study is bolstered by modeled simulations to reproduce and further explain drivers behind the hysteretic relationships commonly observed in the monitored watersheds.
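One simple way the hysteresis direction described above can be quantified is as the signed area of the closed input-export loop (shoelace formula): positive area indicates a counterclockwise loop (export lags inputs, as in the agricultural watersheds), negative a clockwise loop (as with rapid point-source controls). This index is an illustrative assumption, not necessarily the paper's exact metric:

```python
# Sketch: signed area of the (N input, riverine N export) trajectory as a
# hysteresis index. Positive = counterclockwise (lagged response);
# negative = clockwise (export responds faster than inputs).
import numpy as np

def loop_area(inputs, exports):
    """Signed (shoelace) area enclosed by the closed input-export loop."""
    x, y = np.asarray(inputs, float), np.asarray(exports, float)
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

# Toy trajectories: inputs rise then fall; exports lag behind (agricultural
# case) or lead (point-source-control case).
lagged = loop_area([0, 1, 2, 1], [0, 0, 1, 1])    # counterclockwise, > 0
leading = loop_area([0, 1, 2, 1], [0, 1, 1, 0])   # clockwise, < 0
```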
Managing nitrogen legacies to accelerate water quality improvement
Nandita B. Basu,
K. J. Van Meter,
Philippe Van Cappellen,
Brian H. Jacobsen,
David L. Rudolph,
Maria da Conceição Cunha,
Søren Bøye Olsen
Nature Geoscience, Volume 15, Issue 2
Increasing incidences of eutrophication and groundwater quality impairment from agricultural nitrogen pollution are threatening human and ecosystem health. Minimal improvements in water quality have been achieved despite billions of dollars invested in conservation measures worldwide. Such apparent failures can be attributed in part to legacy nitrogen that has accumulated over decades of agricultural intensification and that can lead to time lags in water quality improvement. Here, we identify the key knowledge gaps related to landscape nitrogen legacies and propose approaches to manage and improve water quality, given the presence of these legacies.
Abstract Historic land alterations and agricultural intensification have resulted in legacy phosphorus (P) accumulations within lakes and reservoirs. Internal loading from such legacy stores can be a major driver of future water quality degradation. Yet, little is known about the magnitude and spatial patterns of legacy P accumulation in lentic systems, and how watershed disturbance trajectories drive these patterns. Here, we used a meta-analysis of 113 paleolimnological studies across 124 lakes and four reservoirs (hereafter referred to as lakes) in 20 countries to quantify the linkages between the 100-year trajectories of P concentrations in lake sediments, watershed inputs, and lake morphology. We find five distinct clusters for lake sediment P trajectories, with lakes in the developing and developed world showing distinctly different patterns. Lakes in the developed world (Europe and North America) with early agricultural intensification had the highest sediment P concentrations (1176–1628 mg kg⁻¹), with a peak in the 1970s–1980s and a decline since then, while lakes in the developing world, specifically China, documented monotonically increasing sediment P concentrations (857–1603 mg kg⁻¹). Sediment P trajectories reflected watershed disturbance patterns and were driven by a combination of anthropogenic drivers (fertilizer input and population density) and lake morphology (watershed to lake area ratio). Specifically, we found the largest legacy accumulation rates to occur in shallow lakes experiencing long-term land-use disturbances. These links between land-use change and P accumulation in lentic systems can provide insights about inland water quality response and help to develop robust predictive models useful for resource managers and decision-makers.
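The trajectory-clustering step described above can be sketched by grouping century-long sediment P profiles with k-means. The synthetic profiles below mimic the two reported patterns (a 1970s peak-and-decline versus a monotonic increase); the real study clustered 100-year records from 128 lakes and reservoirs into five groups:

```python
# Sketch: cluster synthetic sediment P trajectories by shape with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)  # ~100 annual sediment layers, oldest to newest
peak_decline = np.exp(-((t - 0.7) ** 2) / 0.02)  # peak ~1970s, then decline
monotonic = t ** 2                               # steadily increasing

# Ten noisy examples of each trajectory type (noise level is an assumption).
profiles = np.vstack(
    [peak_decline + rng.normal(0, 0.05, 100) for _ in range(10)]
    + [monotonic + rng.normal(0, 0.05, 100) for _ in range(10)]
)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
```

With clearly distinct shapes and modest noise, k-means recovers the two trajectory families; the study's five clusters arise the same way from a richer set of profiles.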
Synthesis of science: findings on Canadian Prairie wetland drainage
Helen M. Baulch,
Colin J. Whitfield,
Jared D. Wolfe,
Nandita B. Basu,
Robert G. Clark,
A. M. Ireson,
John W. Pomeroy,
Canadian Water Resources Journal / Revue canadienne des ressources hydriques, Volume 46, Issue 4
Extensive wetland drainage has occurred across the Canadian Prairies, and drainage activities are ongoing in many areas (Dahl 1990; Watmough and Schmoll 2007; Bartzen et al. 2010; Dahl 2014; Prairi...
Quantifying the degradation of micropollutants in streams is important for river‐water quality management. While biodegradation is believed to be enhanced in transient‐storage zones of rivers, it can also occur in the main channel. Photodegradation is restricted to the main channel and surface transient‐storage zones. In this study, we propose a transient‐storage model framework to address the transport and fate of micropollutants in different domains of a river. We fitted the model to nighttime and daytime measurements of a tracer and four pharmaceuticals in River Steinlach, Germany. We could separate the surface and subsurface fractions of the total transient‐storage zone by fitting fluorescein photodegradation at daytime versus conservative nighttime transport. In reactive transport, we tested two model variants, allowing biodegradation in the main channel or restricting it to the transient‐storage zones, obtaining similar model performances but different degradation rate coefficients. Carbamazepine is relatively conservative; photodegradation of metoprolol and venlafaxine can be quantitatively attributed to the main channel and surface transient‐storage zone; metoprolol, venlafaxine, and sulfamethoxazole undergo biodegradation. We projected a decrease of overall pollutant removal under higher flow conditions, regardless of attributing biodegradation to specific river compartments. Our study indicates that model‐based analysis of daytime and nighttime field experiments allows (1) distinguishing photodegradation and biodegradation, (2) reducing equifinality of surface and subsurface transient‐storage, and (3) estimating biodegradation in different domains under different assumptions. However, entirely reducing the equifinality of attributing biodegradation to different compartments is hardly possible in lowland rivers with only limited transient storage.
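The day/night logic above rests on a simple rate separation: if nighttime loss over a reach reflects biodegradation alone (k_bio), while daytime loss reflects biodegradation plus photolysis (k_bio + k_photo), the two first-order rate constants can be separated from paired recoveries. The travel time and recovery fractions below are illustrative numbers, not the River Steinlach data:

```python
# Sketch: separate photodegradation from biodegradation using first-order
# rate constants fit to nighttime vs daytime reach-scale recoveries.
import math

def first_order_k(recovery, travel_time_h):
    """First-order rate constant (1/h) from the fraction remaining over a reach."""
    return -math.log(recovery) / travel_time_h

# Assumed example: 90% recovery at night, 75% by day, 5 h travel time.
k_night = first_order_k(0.90, travel_time_h=5.0)  # biodegradation only
k_day = first_order_k(0.75, travel_time_h=5.0)    # biodegradation + photolysis
k_photo = k_day - k_night                          # photodegradation alone
```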
As urban droughts make headlines across the globe, it is increasingly relevant to critically evaluate the long‐term sustainability of both water supply and demand in the world's cities. This is the case even in water‐rich regions, where upward swings in water demands during periods of hot, dry weather can aggravate already strained water supplies and increase cities' vulnerability to water shortage. Summer spikes in water demand have motivated several cities to impose permanent restrictions on outdoor water uses; however, little is yet known about their effectiveness. This paper examines daily water production data from 15 Canadian cities to (1) quantify how overall and seasonal demands are evolving over time across humid and semiarid settings and (2) determine whether permanent water use restrictions have been effective in curbing summer water demands both seasonally and during specific hot and dry periods. Results show that while per‐capita water demand is declining in all cities studied, the seasonal distribution of that demand has remained largely stable in all but a few cases. While average demands in the summer months remain largely unaffected by the imposition of permanent restrictions, cities that enforce stringent limits on outdoor water use have seen a reduction in the variability of daily demands and a decline in peak demands following their implementation. During short‐term periods of exceptionally hot and dry weather when vulnerability to water shortage is most acute, cities with strict restrictions also see smaller surges in demand than those with weaker or no restrictions in place.
The Upper Mississippi River Basin is the largest source of reactive nitrogen (N) to the Gulf of Mexico. Concentration‐discharge (C‐Q) relationships offer a means to understand both the terrestrial sources that generate this reactive N and the in‐stream processes that transform it. Progress has been made on identifying land use controls on C‐Q dynamics. However, the impact of basin size and river network structure on C‐Q relationships is not well characterized. Here, we show, using high‐resolution nitrate concentration data, that tile drainage is a dominant control on C‐Q dynamics, with increasing drainage density contributing to more chemostatic C‐Q behavior. We further find that concentration variability increases, relative to discharge variability, with increasing basin size across six orders of magnitude, and this pattern is attributed to different spatial correlation structures for C and Q. Our results show how land use and river network structure jointly control riverine N export.
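Two standard C‐Q diagnostics implied by the abstract are the slope b of log(C) versus log(Q) and the ratio CV_C/CV_Q: values of b near zero and CV ratios well below one indicate chemostatic export, as reported for densely tile-drained basins. The data below are synthetic:

```python
# Sketch: compute power-law C-Q slope and the CV_C / CV_Q ratio for a
# synthetic chemostatic record (concentration nearly independent of flow).
import numpy as np

def cq_diagnostics(c, q):
    """Return (log-log slope b, CV_C / CV_Q) for concentration and discharge."""
    c, q = np.asarray(c, float), np.asarray(q, float)
    b = np.polyfit(np.log(q), np.log(c), 1)[0]
    cv_ratio = (c.std() / c.mean()) / (q.std() / q.mean())
    return b, cv_ratio

rng = np.random.default_rng(2)
q = np.exp(rng.normal(0, 1, 500))                    # lognormal discharge
chemostatic_c = 10 * np.exp(rng.normal(0, 0.1, 500)) # C varies little with Q
b, cv = cq_diagnostics(chemostatic_c, q)
```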
Reactive nitrogen (N) fluxes have increased tenfold over the last century, driven by increases in population, shifting diets, and increased use of commercial N fertilizers. Runoff of excess N from intensively managed landscapes threatens drinking water quality and disrupts aquatic ecosystems. Excess N is also a major source of greenhouse gas emissions from agricultural soils. While N emissions from agricultural landscapes are known to originate from not only current‐year N input but also legacy N accumulation in soils and groundwater, there has been limited access to fine‐scale, long‐term data regarding N inputs and outputs over decades of intensive agricultural land use. In the present work, we synthesize population, agricultural, and atmospheric deposition data to develop a comprehensive, 88‐year (1930–2017) data set of county‐scale components of the N mass balance across the contiguous United States (Trajectories Nutrient Dataset for nitrogen [TREND‐nitrogen]). Using a machine‐learning algorithm, we also develop spatially explicit typologies for components of the N mass balance. Our results indicate a large range of N trajectory behaviors across the United States due to differences in land use and management and particularly due to the very different drivers of N dynamics in densely populated urban areas compared with intensively managed agricultural zones. Our analysis of N trajectories also demonstrates a widespread functional homogenization of agricultural landscapes. This newly developed typology of N trajectories improves our understanding of long‐term N dynamics, and the underlying data set provides a powerful tool for modeling the impacts of legacy N on past, present, and future water quality.
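The county-scale bookkeeping behind a dataset like TREND-nitrogen reduces, in each year, to a surplus of N inputs over crop removal. A standard form of that balance is sketched below; the numbers are illustrative, not values from the dataset:

```python
# Sketch of an annual N mass balance (all terms in kg N per ha per year;
# the standard surplus definition, with illustrative magnitudes).
def n_surplus(deposition, fertilizer, manure, fixation, crop_uptake):
    """Annual N surplus = (deposition + fertilizer + manure + fixation) - crop uptake."""
    return deposition + fertilizer + manure + fixation - crop_uptake

surplus = n_surplus(deposition=8.0, fertilizer=90.0, manure=25.0,
                    fixation=30.0, crop_uptake=120.0)  # -> 33.0
```

Tracking this surplus county by county over 88 years is what produces the input trajectories that the machine-learning typologies then classify.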
Growing populations and agricultural intensification have led to elevated riverine nitrogen (N) loads, widespread oxygen depletion in coastal zones (coastal hypoxia) [1] and increases in the incidence of algal blooms. Although recent work has suggested that individual wetlands have the potential to improve water quality [2–9], little is known about the current magnitude of wetland N removal at the landscape scale. Here we use National Wetland Inventory data and 5-kilometre grid-scale estimates of N inputs and outputs to demonstrate that current N removal by US wetlands (about 860 ± 160 kilotonnes of nitrogen per year) is limited by a spatial disconnect between high-density wetland areas and N hotspots. Our model simulations suggest that a spatially targeted increase in US wetland area by 10 per cent (5.1 million hectares) would double wetland N removal. This increase would provide an estimated 54 per cent decrease in N loading in nitrate-affected watersheds such as the Mississippi River Basin. The costs of this increase in area would be approximately 3.3 billion US dollars annually across the USA—nearly twice the cost of wetland restoration on non-agricultural, undeveloped land—but would provide approximately 40 times more N removal. These results suggest that water quality improvements, as well as other types of ecosystem services such as flood control and fish and wildlife habitat, should be considered when creating policy regarding wetland restoration and protection.
Reported groundwater recovery in South India has been attributed to both increasing rainfall and political interventions. Findings of increasing groundwater levels, however, are at odds with reports of well failure and decreases in the land area irrigated from shallow wells. We argue that recently reported results are skewed by the problem of survivor bias, with dry or defunct wells being systematically excluded from trend analyses due to missing data. We hypothesize that these dry wells carry critical information about groundwater stress that is missed when data are filtered. Indeed, we find strong correlations between missing well data and metrics related to climate stress and groundwater development, indicative of a systemic bias. Using two alternative metrics, which take into account information from dry and defunct wells, our results demonstrate increasing groundwater stress in South India. Our refined approach for identifying groundwater depletion hot spots is critical for policy interventions and resource allocation.
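The survivor-bias argument above can be illustrated with a toy simulation: if wells stop reporting once the water table falls below their bottoms, averaging only the surviving wells understates the regional decline. All numbers below are synthetic:

```python
# Toy survivor-bias demo: dry wells (level below well bottom) drop out of the
# record, so the naive mean over reporting wells looks wetter than reality.
import numpy as np

rng = np.random.default_rng(3)
n_wells, n_years = 200, 20
regional = 20 - 0.8 * np.arange(n_years)                 # declining level (m)
levels = regional + rng.normal(0, 2, (n_wells, n_years)) # per-well levels
bottoms = rng.uniform(5, 15, n_wells)[:, None]           # well depths vary

# A well reports only while its water level sits above its bottom.
observed = np.where(levels > bottoms, levels, np.nan)

naive_trend = np.nanmean(observed, axis=0)  # survivors only (filtered data)
true_trend = levels.mean(axis=0)            # all wells, dry ones included
bias = naive_trend[-1] - true_trend[-1]     # survivors look wetter: bias > 0
```

The missing-data pattern itself (which wells went dry, and when) is exactly the information the abstract argues should be folded back into depletion metrics.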
Land use change and agricultural intensification have increased food production but at the cost of polluting surface and groundwater. Best management practices implemented to improve water quality have met with limited success. Such lack of success is increasingly attributed to legacy nutrient stores in the subsurface that may act as sources after reduction of external inputs. However, current water‐quality models lack a framework to capture these legacy effects. Here we have modified the SWAT (Soil Water Assessment Tool) model to capture the effects of nitrogen (N) legacies on water quality under multiple land‐management scenarios. Our new SWAT‐LAG model includes (1) a modified carbon‐nitrogen cycling module to capture the dynamics of soil N accumulation, and (2) a groundwater travel time distribution module to capture a range of subsurface travel times. Using a 502 km² Iowa watershed as a case study, we found that between 1950 and 2016, 25% of the total watershed N surplus (N Deposition + Fertilizer + Manure + N Fixation − Crop N uptake) had accumulated within the root zone, 14% had accumulated in groundwater, while 27% was lost as riverine output, and 34% was denitrified. In future scenarios, a 100% reduction in fertilizer application led to a 79% reduction in stream N load, but the SWAT‐LAG results suggest that it would take 84 years to achieve this reduction, in contrast to the 2 years predicted in the original SWAT model. The framework proposed here constitutes a first step toward modifying a widely used modeling approach to assess the effects of legacy N on the time required to achieve water‐quality goals.
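The travel-time idea behind a module like SWAT-LAG's can be sketched as a convolution: riverine N at time t integrates past N surplus weighted by a groundwater travel-time distribution, so load responds to input cuts only gradually. The exponential distribution and 30-year mean travel time below are illustrative assumptions:

```python
# Sketch: riverine N load as a convolution of N surplus history with an
# (assumed) exponential groundwater travel-time distribution.
import numpy as np

years = np.arange(200)
tau = 30.0                                  # assumed mean travel time (years)
ttd = np.exp(-years / tau) / tau            # exponential travel-time density
ttd /= ttd.sum()                            # normalize on the discrete grid

surplus = np.ones(200)
surplus[100:] = 0.0                         # 100% input cut in year 100

# load(t) = sum over age a of surplus(t - a) * ttd(a)
load = np.convolve(surplus, ttd)[:200]
# load declines only slowly after year 100: the legacy time lag.
```

This is why the abstract's 84-year response dwarfs the 2-year response of a model without subsurface travel times: the tail of the travel-time distribution keeps delivering old surplus long after inputs stop.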
Hydrologic models partition flows into surface and subsurface pathways, but their calibration is typically conducted only against streamflow. Here we argue that unless model outcomes are constrained using flow pathway data, multiple partitioning schemes can lead to the same streamflow. This point becomes critical for biogeochemical modeling as individual flow paths may yield unique chemical signatures. We show how information on flow pathways can be used to constrain hydrologic flow partitioning and how improved partitioning can lead to better water quality predictions. As a case study, an agricultural basin in Ontario is used to demonstrate that using tile discharge data could increase the performance of both the hydrology and the nitrogen transport models. Watershed‐scale tile discharge was estimated based on sparse tile data collected at some tiles using a novel regression‐based approach. Through a series of calibration experiments, we show that utilizing tile flow signatures as calibration criteria improves model performance in the prediction of nitrate loads in both the calibration and validation periods. Nitrate load predictability improves even without tile flow data, when the model is calibrated only against an approximate estimate of the annual tile flow percentage. However, despite high values of goodness‐of‐fit metrics in this case, temporal dynamics of predictions are inconsistent with reality. For instance, the model predicts significant tile discharge in summer with no tile flow occurrence in the field. Hence, the proposed tile flow upscaling approach and the partitioning‐constrained model calibration are vital steps toward improving the predictability of biogeochemical models in tiled landscapes.
Ballard et al. argue that our prediction of a 30-year or longer recovery time for Gulf of Mexico water quality is highly uncertain, and that much shorter time lags are equally likely. We demonstrate that their argument, based on the use of a two-component regression model, does not sufficiently consider fundamental watershed processes or multiple lines of evidence suggesting the existence of decadal-scale lags.
Haunted by the past Reducing the extent of hypoxia in the Gulf of Mexico will not be as easy as reducing agricultural nitrogen use. Van Meter et al. report that so much nitrogen from runoff has accumulated in the Mississippi River basin that, even if future agricultural nitrogen inputs are eliminated, it will still take 30 years to realize the 60% decrease in load needed to reduce eutrophication in the Gulf. This legacy effect means that a dramatic shift in land-use practices, which may not be compatible with current levels of agricultural production, will be needed to control hypoxia in the Gulf of Mexico. Science, this issue p. 427
© American Geophysical Union: Shafii, M., Basu, N., Craig, J. R., Schiff, S. L., & Van Cappellen, P. (2017). A diagnostic approach to constraining flow partitioning in hydrologic models using a multiobjective optimization framework. Water Resources Research, 53(4), 3279–3301. https://doi.org/10.1002/2016WR019736