Ensemble Prediction System
An Ensemble Prediction System is a collection of two or more forecasts over the same future horizon. By spanning a range of possible outcomes, ensemble prediction systems represent the probabilities, and therefore the uncertainties, associated with a forecast. The main purpose of developing ensemble prediction systems, in contrast to deterministic forecasts, was to assess and communicate the inherent uncertainty of a forecast in the form of an envelope of possible outcomes (Zappa et al. 2018). The first probabilistic studies were in the domain of weather forecasting and date back to the 1960s, when Lorenz's research demonstrated the chaotic nature of the fluid dynamics equations used in weather forecasting (Lorenz 1965). In several papers, Lorenz (1963, 1965, 1969) investigated the predictability of the atmosphere and weather patterns and showed that the fundamental limit of atmospheric predictability is related to the initial conditions. Following the work of Lorenz, the 1970s was a decade of initiation and development of probabilistic approaches in weather forecasting: Epstein (1969) and Gleeson (1970) both proposed probabilistic approaches to forecasting.
However, due to the complexity of these techniques and the lack of sufficient computing power, operational ensemble weather forecasts could not yet be generated. The first official ensemble prediction systems date back to the 1980s. Hoffman and Kalnay (1983) computed the first simple ensemble weather prediction system with a technique called the Lagged Average Forecast method. In 1992, the US National Centers for Environmental Prediction (NCEP) computed its first operational ensemble weather forecast, consisting of 14 members (Toth and Kalnay 1993). In the same year, the European Centre for Medium-Range Weather Forecasts (ECMWF) started generating and using ensemble weather forecasts (Molteni et al. 1996). As computational power increased, the number and length of forecasts as well as the complexity and resolution of the generating models increased, and by 1997 both NCEP and ECMWF were able to compute global ensemble weather forecasts in which each member was generated from different initial conditions.
These two centers demonstrated the usefulness of ensemble weather forecasts, and following their success, the use of ensembles in weather forecasting became prevalent around the world. By the late 1990s, the US Navy and the meteorological services of Canada, Japan, South Africa, Australia and India were all using ensemble forecasts (Sivillo et al. 1997). Because ensemble weather forecasts contain important information about forecast uncertainty, hydrologists followed the advances made by the meteorological community and started to apply ensemble prediction in hydrologic studies, especially in streamflow forecasting (Cloke and Pappenberger 2009). Generating probabilistic forecasts in hydrology, as well as assessing the skill of those forecasts, is a complex task, especially in the context of streamflow forecasting. In practice, deterministic forecasts provide a single streamflow value per time step without giving any information about the uncertainty associated with the forecast. In many cases, assessing and communicating this uncertainty is essential in order to make the best decision. Hence, probabilistic approaches were proposed to overcome this important drawback of deterministic approaches (Krzysztofowicz 2001).
The first studies on ensemble prediction systems for streamflow forecasting began in the early 1970s and followed the success obtained in ensemble weather forecasting. The National Weather Service (NWS) applied the concept in its 1975 Extended Streamflow Prediction program (Curtis and Schaake 1979; Twedt et al. 1977). Given the usefulness of the program, the NWS redesigned it in 1979 to eliminate deficiencies and officially introduced ensemble streamflow prediction (ESP) in 1984 (Day 1985). The potential of ESP for water supply management was examined by Day (1985), who generated ensemble forecasts by forcing a hydrologic model with past observed weather data. This study demonstrated that ESP can be used for water supply management, both in the form of inflow hydrograph forecasts for reservoir operation and to forecast maximum and minimum streamflow. Likewise, Georgakakos (1989) demonstrated that ESP can significantly improve the planning of reservoir operation, although the improvement is system specific. Following these demonstrations of the added value of ESP, the Natural Resources Conservation Service (NRCS) implemented ESP in its forecasting system and, since 2000, the use of ESP in hydrological forecasting has become more prevalent, promoted by international bodies such as the European Commission Joint Research Centre and the World Meteorological Organization (WMO) (Cloke and Pappenberger 2009). Nowadays, ESP is used for various purposes such as flash flood forecasting (Alfieri and Thielen 2015), flood forecasting (Mueller et al. 2016; Schumann et al. 2013) and hydropower generation (Fan et al. 2016; Schwanenberg et al. 2015).
Resampling Methods
Resampling is a non-parametric method which consists of using past meteorological time series as possible representations of the climate over the forecasting period. Resampling exists in several variants, which differ in how samples are drawn from the pool of past time series; the main ones are the persistence, trend, climatology and analog methods. The persistence method is the simplest weather forecasting approach: it assumes that the conditions prevailing at the forecast time will not change and that weather patterns evolve very slowly. The trend method is based on determining the speed and direction of motion of fronts, pressure centers, and areas of cloud and precipitation; it is suitable for weather systems that move at a constant speed and in a constant direction over a long period. The climatology method represents forecast uncertainty using weather statistics accumulated over many years of records, assuming that the probability of any weather pattern is the same as it was in the past record.
The analog method involves estimating the forthcoming weather from similar weather conditions observed in the past. All of the above methods have limitations: weather systems are very dynamic, and it is sometimes difficult to find a perfect analog. Moreover, all resampling methods rely on historical weather data and therefore suffer from the same drawbacks. First, since resampling methods are based on the historical record, any deficiency in past data will be reflected in the quality of the ensuing weather forecast, and the forecast horizon is limited by the length of the existing records. Second, since resampling methods are based on past data, they cannot take into account non-stationarity in the climatic record, such as that induced by anthropogenic climate change. Forecasting the future using only past data is therefore a very difficult task under non-stationary conditions, and raw forecasts based on resampling methods can consequently be expected to be biased (Lall and Sharma 1996; Moniz et al. 2017). A minimal sketch of the climatology and analog variants is given below.
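To make the contrast between these variants concrete, the short sketch below illustrates how a climatology ensemble and an analog ensemble could be assembled from a historical daily precipitation record. The synthetic record, variable names and parameter values are illustrative assumptions only and are not taken from any of the systems cited above.

```python
# Minimal sketch of two resampling-based ensemble forecasts (illustrative only):
#  - climatology: every historical year's trace over the forecast window is a member;
#  - analog: only the k past years most similar to today's conditions provide members.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 30-year daily precipitation record, shape (n_years, 365 days).
n_years, horizon = 30, 14
record = rng.gamma(shape=0.4, scale=5.0, size=(n_years, 365))

forecast_start_doy = 120  # day of year at which the forecast is issued


def climatology_ensemble(record, start_doy, horizon):
    """Each past year's trace over the forecast window becomes one ensemble member."""
    return record[:, start_doy:start_doy + horizon]


def analog_ensemble(record, start_doy, horizon, current_state, k=5):
    """Keep only the k past years whose same-day conditions best match today's."""
    past_states = record[:, start_doy]               # same-day value in each past year
    distances = np.abs(past_states - current_state)  # simple 1-D nearest-neighbour distance
    best = np.argsort(distances)[:k]
    return record[best, start_doy:start_doy + horizon]


clim = climatology_ensemble(record, forecast_start_doy, horizon)            # 30 members
analog = analog_ensemble(record, forecast_start_doy, horizon,
                         current_state=2.5, k=5)                            # 5 members
print(clim.shape, analog.shape)
print("climatology ensemble mean, day 1:", round(float(clim[:, 0].mean()), 2))
```

In the climatology variant every past year contributes one member, whereas the analog variant retains only the years whose conditions at the issue date resemble the current state; both inherit the limitations discussed above, since all members are drawn from the historical record.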
Stochastic Weather Generators
The concept of weather generators dates back to the 1800s. The earliest work, by Quetelet in 1852, concerned the probabilistic modeling of precipitation occurrence and was based on the concept of wet- and dry-day persistence. Studies published between 1916 and 1938 demonstrated that the probability of a rainy day is greater when it is preceded by a wet day. Longley (1953) modeled dry and wet spells using geometric series, and Gabriel and Neumann (1962) used a Markov chain to reproduce the distribution of wet- and dry-spell lengths, thereby presenting the first statistical model of daily rainfall occurrence. Todorovic and Woolhiser (1975) combined the Markov chain occurrence model with an exponential distribution to generate rainfall amounts. Finally, Richardson (1981) combined the above components and built a stochastic weather generator able to generate precipitation, maximum and minimum temperature as well as solar radiation conditioned on the previous day's wet/dry state.
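The occurrence/amount structure at the core of these generators can be illustrated with a minimal sketch: a two-state first-order Markov chain decides whether each day is wet or dry, and wet-day amounts are drawn from an exponential distribution, as in Todorovic and Woolhiser (1975). The transition probabilities and mean wet-day amount below are illustrative values rather than fitted parameters.

```python
# Minimal sketch of a Markov-chain/exponential daily precipitation generator.
# Parameter values are illustrative assumptions, not calibrated to any station.
import numpy as np

rng = np.random.default_rng(0)

p_wet_given_dry = 0.25   # P(wet today | dry yesterday)
p_wet_given_wet = 0.65   # P(wet today | wet yesterday) > p_wet_given_dry: persistence
mean_wet_amount = 6.0    # mean precipitation on wet days (mm)


def generate_precipitation(n_days, start_wet=False):
    """Generate a daily precipitation series from the occurrence/amount model."""
    wet = start_wet
    series = np.zeros(n_days)
    for day in range(n_days):
        # Occurrence: first-order Markov chain conditioned on yesterday's state.
        p_wet = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p_wet
        # Amount: exponential distribution on wet days, zero on dry days.
        if wet:
            series[day] = rng.exponential(mean_wet_amount)
    return series


sample = generate_precipitation(365)
print("wet-day frequency:", round(float((sample > 0).mean()), 2))
print("mean wet-day amount (mm):", round(float(sample[sample > 0].mean()), 1))
```

A full Richardson-type generator would additionally simulate maximum and minimum temperature and solar radiation conditioned on the simulated wet/dry state; only the precipitation component is sketched here.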
Wilks (1992) adapted a stochastic weather generator as a downscaling method for climate change studies and showed that weather generators are a suitable downscaling tool for investigating the impacts of climate change. Wilks (1999a) used a simple stochastic weather generator to downscale and disaggregate precipitation, and showed that it could readily be used to simulate climate change scenarios at the local scale. In the same year, Corte-Real (1999) applied a stochastic weather generator as a downscaling tool conditioned on daily circulation patterns in southern Portugal. He showed that the weather generator could reproduce observed weather statistics very well and could therefore produce reliable climate change scenarios, provided that the present-time relationship between local precipitation and large-scale atmospheric circulation remains valid in the future. Wilks and Wilby (1999) produced a complete review of the development of stochastic weather generators, describing common applications, discussing the main deficiencies and suggesting solutions to overcome them.