Projected climate change effects on groundwater and stream discharges were investigated through simulations with a distributed, physically based, surface water–groundwater model. Input to the hydrological model includes precipitation, reference evapotranspiration, and temperature data of the HIRHAM4 regional climate model (RCM). The aim of this study was to determine whether the choice of bias-correction method, applied to the RCM data, affected the projected hydrological changes. One method consisted of perturbation of observed data (POD) using climate change signals derived from the RCM output, while the other consisted of distribution-based scaling (DBS) of the RCM output. Distribution-based scaling resulted in RCM control period data closely approaching the observed climate data and thereby considerably improved the simulation of recharge and stream discharges. When comparing the simulations using both methods, only small differences between the projected changes in hydrological variables for the scenario period were found. Mean annual recharge increased by 15% for the DBS method and 12% for POD, and drain flow increased by 24 and 19%, respectively, while the increases in base flow were similar (7%). For both methods, daily stream discharges up to and including the median showed little change, while increases occurred for the higher quantile values. This study showed that the choice of bias-correction method did not have a significant influence on the projected changes of mean hydrological responses in this catchment, although further analysis is necessary to determine whether extremes are affected. Furthermore, the characteristics of the hydrological system likely reduced the sensitivity of the projected changes to the choice of method.

Two methods of transferring daily precipitation, temperature, and evapotranspiration data from a regional climate model to hydrological simulations of climate change are compared. Simulated groundwater recharge and stream discharge differed only slightly between the methods, showing that bias-corrected climate model data can be used directly in hydrological models.

Regional climate models downscale climate scenario experiments generated by global climate models (GCMs) through an improved representation of sub-grid scale characteristics such as land–sea contrast, land cover, and topography and the simulation of local-scale atmospheric processes. Currently, most RCMs have a horizontal resolution of 50 km, but some experiments at 25 and 12 km are also available (Christensen and Christensen, 2007). Due to increasing computing power, it can be expected that more RCM experiments at high resolution will become available, making it attractive to use RCM data directly in hydrological impact studies. Unfortunately, RCMs are still subject to considerable biases when the simulated control climate is compared with observations. Fully direct use of RCM data in hydrological models is therefore not possible: the RCM output must be bias corrected first.

A commonly applied method to cope with these biases is the delta change method (Hay et al., 2000), which in short consists of perturbing a present-day climate series with absolute or relative change factors, derived from the comparison of RCM data for the current climate and a projected scenario climate. In most cases the present-day climate series consists of observations (e.g., Graham et al., 2007; van Roosmalen et al., 2007, 2009), although in some studies the RCM control period output is used after applying bias correction (e.g., Lenderink et al., 2007). In this study, the delta change method is applied to an observed data set and is therefore referred to as perturbation of observed data (POD). The delta change method only takes simple changes in the probability density function (PDF) into account, while changes in the number of days of precipitation and potential changes in the correlation between the different variables are not considered (Lenderink et al., 2007).

However, it is appealing to use a bias-correction method that preserves the dynamic characteristics of the RCM scenario data. This is achieved if RCM scenario results are used as a baseline for estimating the future climate. The bias correction or scaling is based on statistical relationships between observations and control period RCM data, often at the grid-cell scale, and the same relationships are used to correct RCM scenario data. In the standard application of the scaling method, bias-correction values are calculated using the absolute or relative difference between the mean value of the meteorological variable for observations and the RCM control period data (e.g., Ekström et al., 2007; Fowler and Kilsby, 2007; Graham et al., 2007; Lenderink et al., 2007; Thodsen, 2007). The bias-correction values are often calculated on a monthly scale, but also annual and 10-d corrections have been used.

Lenderink et al. (2007) compared discharges in the river Rhine using a hydrological model forced by precipitation (P) and temperature (T) data from (i) the RCM scenario simulation and (ii) delta-change perturbed RCM control period output, after applying the same bias correction to both the control and scenario data. A similar response in the mean summer and winter discharge was found for both methods, but the predictions of extreme flows in winter differed significantly. Thodsen (2007) investigated the effect of climate change on river discharges in Denmark using a rainfall–runoff model. The RCM data were used directly after bias correction with monthly correction factors. Changes in the mean seasonal discharges were similar to results found by van Roosmalen et al. (2007) using output from the same RCM, but applying the POD method. Thodsen (2007) found increases in daily extreme P amounts in the scenario simulations, but the resulting increases in extreme discharges varied, depending on whether clayey or sandy soils dominated in the catchments.

Graham et al. (2007) used POD and bias-corrected GCM and RCM data directly in a hydrological model for the Lule River basin in Sweden. When using GCM data, monthly rather than annual bias correction improved the simulation of river discharges considerably, but resulted in no significant improvement when using RCM data, because the seasonal distribution of P and T in the RCM data was already much improved relative to the GCM output. The largest difference between the simulations using POD and bias correction was found for extreme runoff, while the two methods gave comparable results for mean annual runoff volumes.

Schmidli et al. (2006) presented a P correction method, where the bias correction was applied by comparing the long-term mean P intensity of observations and as simulated by a GCM, instead of the mean P amount. First, a threshold correction was applied to correct the rainfall frequency, and second, the GCM P intensity on wet days was adjusted by the mean intensity bias. Using intensity-based bias correction was superior to using mean P for both of the GCM data sets. However, a second modification of the method, which entailed the estimation of the scaling factor on the basis of the large-scale atmospheric circulation, only improved the bias-correction method for one of the GCM data sets. This indicated that the effectiveness of the second adaptation depended on whether the model biases were flow-dependent or not. Whether the bias correction was improved when applied on a monthly or seasonal basis instead of annually also differed for the two data sets, with one data set showing considerable seasonal biases. One limitation of this technique was that the bias-corrected GCM data did not reproduce regional processes, which affect higher-order statistics. On the other hand, a comparison with RCMs showed that RCMs did improve the representation of higher quantile P events.

In this study, a refined bias-correction method originally proposed by Yang et al. (2010) is applied, where the bias-correction values depend on the intensity of the daily P. Consequently, this application differs from the method proposed by Schmidli et al. (2006), where a fixed bias-correction value is used for all intensities; intensity-dependent scaling makes it possible to bias-correct extreme values differently from the mean. In this paper, the bias-correction method is referred to as distribution-based scaling (DBS), in correspondence with the terminology used by Yang et al. (2010).

A detailed description of the DBS methodology is given by Piani et al. (2010). In their study, the DBS method was validated using observed and raw and corrected RCM-simulated daily precipitation fields for Europe. Two decades furthest apart were selected, namely 1961–1970 and 1991–2000, to maximize the exposure of possible weaknesses and to test the method on data from a pre- and post-climate shift period. Piani et al. (2010) showed that the method not only performed well for correction of the mean and variance of precipitation intensity, but also for a number of indices that are sensitive to the autocorrelation of the time series, such as a meteorological drought and heavy precipitation index. The bias-correction transfer function was fitted to annual data, but tested on seasonal data to minimize the number of parameters derived from data. However, for application in study catchments the authors suggest that a seasonal fit of the bias-correction function could result in an improvement of the methodology for the specific catchment. Ines and Hansen (2006) applied a procedure similar to that of Yang et al. (2010) to correct daily GCM rainfall in Kenya relative to a target station. The proposed intensity-dependent correction procedure was superior to a simple multiplicative shift because the shift method left substantial frequency and intensity bias, even though it did a better job at correcting monthly and seasonal P totals. However, application of the bias-corrected P in a maize yield simulation showed the sensitivity of the yield model to the stronger autocorrelation of the GCM P compared to observations.

The advantages of the approach suggested by Yang et al. (2010) are that the method (i) better represents the change in variability as simulated by the RCMs and preserves this signal when transferred to key hydrometeorological variables, (ii) can be applied in a transient (continuous) mode, and (iii) takes the covariance between P and T into account. The main disadvantages are that it is more laborious than POD and that it is very sensitive to the combined performance of the GCM and the downscaling RCM. Yang et al. (2010) found that when using DBS the influence from the future climate on hydrological conditions was more noticeable than when using POD.

Our bias-correction method differs from the method proposed by Yang et al. (2010) in a few important aspects. First of all, Yang et al. (2010) found that applying different scaling curves for low and high intensity P greatly improved the reproduction of the full range of the precipitation distribution. In this study, a single distribution fitted satisfactorily to all rainfall events, and it was deemed preferable to avoid the use of an arbitrary cut-off value for a second distribution. Second, Yang et al. (2010) used a more advanced bias-correction method for T because this was important for the determination of snow melt and the peak discharge in their catchment. Different distribution parameters were fitted to T depending on whether it was a dry or wet day. In this study catchment, the contribution of snow is of less importance; therefore, the standard bias-correction method using a monthly bias correction is used. Furthermore, in this study DBS is also applied to reference evapotranspiration (ETref), whereas in Yang et al. (2010) evapotranspiration is determined using a temperature-based method.

The aim of this study is to compare the application of the POD and DBS methods on RCM data at 12-km resolution to determine whether and to what extent the choice of method affects the simulated changes in recharge, groundwater levels, and stream discharges. First, the biases for uncorrected RCM P, T, and ETref for the control period are determined by comparison with observations, and then the performance of DBS is evaluated by comparing the bias-corrected meteorological variables for the control period to observations. Second, the projected changes in P, T, and ETref are presented for the scenario period using both methods. Following the analysis of the meteorological input, the hydrological simulations for the control period using observations and uncorrected and DBS-corrected RCM data are presented, and the projected changes in the scenario simulations using both methods are compared. Currently, we know of no studies where DBS is applied to both P and ETref. Also, the use of an integrated surface water–groundwater model makes it possible to study the effect of the transfer method on the propagation of the climate change signal throughout the entire hydrological system.


Study Area

The study area is a catchment located on the west coast of Jutland, Denmark (Fig. 1) with an area of 5459 km2. This area was selected because the results of this study can then be compared with those of previous studies (van Roosmalen et al., 2007, 2009), which focused on the effects of climate change on hydrology in the same catchment, but only used POD to transfer the regional climate model data to the hydrological model. Western Jutland has a maritime climate, dominated by westerly winds and frequent passages of extratropical cyclones. The dominant westerly wind results in mild winters and relatively cold summers with highly variable weather conditions characterized by frequent rain and showers. Autumn is the wettest and spring is the driest season. The area is relatively flat, with maximum elevations around 125 m above sea level. The eastern boundary of the study area is formed by the Jutland Ridge and the western boundary by the North Sea, which results in the topography sloping gently from east to west. The northern and southern boundaries are delineated on the basis of local water divides.

Due to the highly permeable soils, all P outside the wetlands infiltrates, and the discharge to the streams is dominated by groundwater flow. The central and northern part of the catchment is drained by the Skjern River system, while the southern part is drained by two smaller stream systems, Varde Stream and Sneum Stream. The subsurface of the area is dominated by glacial outwash sand and gravel of Quaternary age with isolated islands of Saalian sandy till. Alternating layers of mica clay, silt, and sand, together with quartz sand and silt of Miocene Age, underlie the Quaternary deposits. The Quaternary and Miocene sand formations often form large interconnected aquifers. A more elaborate description of the geology and hydrogeology of the area can be found in van Roosmalen et al. (2007, 2009).

Meteorological Data

Regional Climate Model Data

Meteorological data generated by the regional climate model HIRHAM4 (Christensen et al., 1996) is used in this study. In this experiment, HIRHAM4 is nested in the GCM HadAM3H (Hudson and Jones, 2002) developed by the Hadley Centre. One HIRHAM4 simulation year consists of 360 d, which is dictated by the HadAM3H global model. Two 30-yr periods of daily values are available—one represents the current (control) climate for the period 1961–1990 and the other the projected (scenario) climate for the period 2071–2100. The radiative forcing in the scenario period is based on the IPCC's SRES A2 scenario (IPCC, 2000). The computational grid is a rotated regular 0.125° (∼12 km) latitude–longitude grid enclosing Europe and part of the North Atlantic. P and T at 2 m above the ground surface are direct outputs from HIRHAM4, whereas ETref is calculated using the Penman–Monteith formula described by Allen et al. (1998) and output from the HIRHAM4 model.

Observed Data

Observed daily P and ETref are available for each 40 by 40 km grid (Fig. 1) for the period 1971–1990 (Henriksen et al., 1998; Plauborg and Olesen, 1991; Scharling, 1999). ETref is estimated using the empirical Makkink equation (Makkink, 1957), and P is corrected for wetting and aerodynamic effects using the standard correction methods of Allerup et al. (1998). Table 1 shows observed mean annual corrected P and ETref for the period 1971–1990, and Fig. 2 shows mean monthly P and ETref for 1971–1990, averaged for all observational grid cells in the catchment. Ideally, the daily observed values should cover the entire RCM control period to reduce the effect of extreme years. However, van Roosmalen (2009) found only small differences between the mean annual and monthly P and ETref values for the period 1971–1990 and for a grid of equal size (Scharling, 2000) for the period 1961–1990.

Daily T data (1961–1990), observed at two stations in the catchment (see Fig. 1), are used in the hydrological simulations because daily gridded values were not available. T is used in the hydrological model to determine whether P is liquid or solid and how fast the solid P melts. Hence, T mainly influences the timing of infiltration of snow melt to the unsaturated zone and not the infiltration volume. Figure 2 shows mean monthly T data at stations 6080 and 6104, with mean annual T values of 8.1 and 7.4°C, respectively.

The observation and model grid size mismatch can be an issue when applying DBS. First, there is an effect on the observed data when the point data are aggregated to grid data. The calculation of the observational grid data was performed by the Danish Meteorological Institute. For precipitation for the period 1971–1990 approximately 300 stations were used for the whole of Denmark, corresponding to approximately 11 stations per 40 by 40 km grid cell. Daily grid and station values were not compared in this study, but Scharling (1999) compared gridded monthly and annual mean precipitation to station data for 1998 for the whole of Denmark. The monthly difference was <2 mm, and the mean annual value from the grid data was 2 mm less than that from the station data. It is therefore assumed here that the effects of the interpolation method on the station data are negligible for the purpose of this study. However, a detailed, spatially distributed comparison of daily grid and station values was outside the scope of this study.

Second, the available observed data set is not optimal for the application of DBS because there is a difference in the spatial resolution between the observational grid of 40 km and the RCM grid of 12 km. This affects the DBS because it is assumed that the difference between the PDFs is mainly due to errors in the model simulation, but in this case the deviating spatial resolution of the observational and simulated data may also contribute to the difference. Van Roosmalen (2009) investigated the effect of grid resolution on daily P data for the period 1990–2004, where 40- and 10-km gridded P values using the same station data and interpolation method were compared. A comparison of mean annual P for the 40- and 10-km grids showed no significant differences (around 0.4% for most cells), and the mean annual water balance is therefore not likely to be affected by the difference in resolution. Little difference was found between the histograms except for the lowest bin (0.0–1.0 mm d−1), where the 40-km data showed a slightly higher occurrence.

Bias-Correction Methods

One of the advantages of POD is that a bias correction of the RCM data is not necessary: only the change in variables between the scenario and the control period is used, and the bias is assumed equal for both simulations. Another advantage of POD is that an observed database is used as the baseline, resulting in a consistent set of scenario data, whereas using climate model output directly could produce unrealistic dynamics in the input variables due to climate model inaccuracies. On the other hand, the use of an observed database is also a drawback of this method because information on the changes in variability and extremes in the future climate, as simulated by the climate model, is lost. It is therefore of interest to compare the results of the hydrological simulations when using the different methods. The two bias-correction methods are explained in detail below.

POD Method

The POD (delta change) method is a commonly applied method to cope with biases when using climate model output in hydrological impact studies (Xu et al., 2005). The POD method consists of altering an observed database of meteorological variables with delta change values to obtain a database for the future (scenario). Here, monthly delta change values for P, T, and ETref are determined from the HIRHAM4 model simulation output. For T the procedure is as follows:
TPOD(i, j) = Tobs(i, j) + ΔT(j)    [1]

where TPOD is temperature input for the hydrological scenario simulation, Tobs is observed temperature in the historical period, (i, j) stand for day and month, respectively, and ΔT is the change in temperature as simulated by the climate model. This value is calculated as

ΔT(j) = T̄scen(j) − T̄ctrl(j)    [2]

where T̄(j) is the mean daily temperature for month j, which is calculated as the mean of all days in month j for all 30 yr of the reference period. The indices "scen" and "ctrl" stand for the scenario period (2071–2100) and control period (1961–1990), respectively. The delta change values are averaged over the 47 grid cells covering the land surface in the hydrological model area to generate one set of delta change values for the entire catchment. This results in 12 monthly delta change values, which are used to adjust the observed daily T of the individual months to future T input.
POD for P can be described by the following equations:

PPOD(i, j) = Pobs(i, j)ΔP(j)    [3]

where PPOD is precipitation input for the hydrological scenario simulation, Pobs is observed precipitation in the historical period, (i, j) stand for day and month, respectively, and ΔP is the relative change in P calculated as

ΔP(j) = P̄scen(j)/P̄ctrl(j)    [4]

where P̄scen(j) and P̄ctrl(j) are the mean daily precipitation for month j in the scenario and control period, respectively. ETref for the future scenario is calculated in the same way as P.
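The POD procedure above can be sketched in a few lines. This is an illustrative sketch, not the original model implementation; the array names and month indexing are assumptions:

```python
import numpy as np

def pod_temperature(t_obs, months, t_scen_mean, t_ctrl_mean):
    """Perturb observed daily T with additive monthly delta changes.

    t_obs       : observed daily temperature series
    months      : month number (1-12) of each day in t_obs
    t_scen_mean : mean daily T per month in the RCM scenario period (length 12)
    t_ctrl_mean : mean daily T per month in the RCM control period (length 12)
    """
    delta_t = t_scen_mean - t_ctrl_mean   # one additive change per month
    return t_obs + delta_t[months - 1]

def pod_precipitation(p_obs, months, p_scen_mean, p_ctrl_mean):
    """Perturb observed daily P with multiplicative monthly delta changes;
    ETref is treated in the same way as P."""
    delta_p = p_scen_mean / p_ctrl_mean   # one relative change per month
    return p_obs * delta_p[months - 1]
```

Note that only the monthly means enter the perturbation, which is why changes in wet-day frequency and day-to-day variability are not transferred by POD.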

Bias Correction of RCM Output

Precipitation. Most RCMs are subject to a systematic error in P. The result is that too many days with very low P intensity and too few dry days are generated. To correct for the superfluous drizzle, a threshold is determined below which all P is set to zero. This threshold is determined by calculating the percentage of dry days in the observational data set, which is defined as days with P less than 0.1 mm. The control period RCM data values are then ranked, and the P value corresponding to the percentage of dry days in the observational data set is selected as the threshold value. For both the control and scenario data set, all RCM simulated P values below this threshold are set to zero. The threshold values and the total amount of P that is removed using this procedure are presented in the results section.
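The threshold step can be sketched as follows; a quantile call stands in for the ranking described above, and the 0.1-mm wet-day limit follows the definition in the text:

```python
import numpy as np

WET_DAY_MIN = 0.1  # mm/d; days below this count as dry in the observations

def drizzle_threshold(p_obs, p_ctrl):
    """Threshold that makes the RCM control series reproduce the observed
    dry-day frequency: take the control-period P value at the rank
    corresponding to the observed fraction of dry days."""
    dry_frac = np.mean(p_obs < WET_DAY_MIN)
    return np.quantile(p_ctrl, dry_frac)

def remove_drizzle(p_rcm, threshold):
    """Zero out all P below the threshold (applied to control and scenario)."""
    return np.where(p_rcm < threshold, 0.0, p_rcm)
```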

Precipitation intensity determines the division between surface runoff and infiltration. As a consequence, the intensity distribution significantly influences the water balance of the catchment. The bias-correction approach for P should therefore not only correct for mean P amounts, but also scale P values depending on their intensity. The second step of the adopted approach, after threshold correction of the RCM data set, is to fit statistical distributions to the PDF of daily values of the observed data set and the threshold corrected RCM control period data set. The γ distribution is used because it provides a good approximation to the asymmetrical and positively skewed properties of the P data set (Wilks, 2006). The fit is only performed for wet days because the bias correction is not applied to dry days. The PDF of the γ distribution is defined as follows:
f(x; α, β) = (x/β)^(α−1) exp(−x/β) / [βΓ(α)]    [5]

where x is precipitation (mm d−1), α and β are shape and scale parameters of the γ distribution, and Γ(α) is the γ function:

Γ(α) = ∫₀^∞ t^(α−1) e^(−t) dt    [6]
which in general must be evaluated numerically (Wilks, 2006). Two sets of parameters are determined using the maximum likelihood method: one set for the observations (αobs and βobs) and another set for the RCM control simulation (αctrl and βctrl). Through an iterative procedure the parameter set that maximizes the log-likelihood function is found using the multidimensional generalization of the Newton–Raphson method (e.g., Press et al., 1986).
The γ distribution estimated for the RCM control period P is subsequently used to find the non-exceedance probability of a given P value. Feeding this probability into the distribution fitted to the observations yields the corresponding bias-corrected P value:

xcorr = F⁻¹[αobs, βobs, F(αctrl, βctrl, xRCM)]    [7]

where xcorr is the bias-corrected RCM daily precipitation value to be used in the hydrological simulation, F represents the cumulative distribution function (CDF) of the γ distribution, F(αctrl, βctrl, xRCM) is the non-exceedance probability of the precipitation value xRCM under the CDF fitted to the control period RCM values, and F⁻¹(αobs, βobs, ·) is the inverse of the CDF fitted to the observations.
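Assuming SciPy's gamma distribution routines are acceptable stand-ins for the maximum likelihood fitting and CDF evaluation described above, the quantile mapping can be sketched as:

```python
import numpy as np
from scipy import stats

def fit_gamma(wet_day_p):
    """Maximum likelihood fit of the two-parameter gamma distribution
    (location fixed at zero), applied to wet-day P intensities only."""
    alpha, _, beta = stats.gamma.fit(wet_day_p, floc=0.0)
    return alpha, beta

def dbs_correct(p_rcm, obs_params, ctrl_params):
    """Quantile mapping: push each RCM P value through the control-period
    CDF and then through the inverse observed CDF; dry days stay zero."""
    a_obs, b_obs = obs_params
    a_ctrl, b_ctrl = ctrl_params
    prob = stats.gamma.cdf(p_rcm, a_ctrl, scale=b_ctrl)
    corrected = stats.gamma.ppf(prob, a_obs, scale=b_obs)
    return np.where(p_rcm > 0.0, corrected, 0.0)
```

The same control-period parameters are used when correcting scenario values, mirroring the assumption that model biases are identical in both periods.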

The xRCM value can either be RCM precipitation for the control or scenario period, implying that the same bias correction is applied to both the control and scenario values. This is a valid approach, if it is assumed that the model biases are the same for the control and scenario period (as for the POD method). Thus, the RCM control and scenario P values are corrected similarly, but because of the changes in the P distribution, as simulated by the RCM, the resulting PDFs of the corrected values are expected to differ for the two periods. Therefore, the DBS method is very sensitive to the performance of the bias correction in the control period because this is used as verification of the bias correction. If the bias correction is optimal in the control period, then it is assumed that the remaining signal after bias correction of the scenario data is only due to climate change and that the signal is no longer blurred by model bias.

The two-step correction method is applied to all simulated P values of both the control and the scenario periods. The determination of the threshold value and the fitting of the PDFs are performed for each RCM cell and observational cell separately. When an RCM cell falls under two observational grid cells, it is assigned to the observational grid cell containing the largest part of its surface area. As mentioned above, the observed data set is not optimal because it is at 40-km resolution, while the RCM output is at 12-km resolution. However, in this study the RCM data are not resampled to lower resolution because little difference was found for observational grids at 10- and 40-km resolution (van Roosmalen, 2009).

Reference Evapotranspiration. For ETref the same correction method is applied as for P, namely fitting the γ distribution to the observations and the RCM control period values, but no threshold correction is applied to the RCM ETref data before fitting the PDF. The γ distribution is used instead of a normal distribution for evapotranspiration because for large values of α, the γ distribution closely approximates a normal distribution, with the advantage that the γ distribution has density only for positive real numbers (Wilks, 2006).

Temperature. It is not possible to correct T using DBS because daily grid values of observed T are not available for the period 1961–1990. Therefore, a standard monthly correction value is applied to all daily values:
Tscaled(i, j) = TRCM(i, j) + εT(j)    [8]

where Tscaled is temperature input for the hydrological simulation, TRCM is RCM simulated temperature, (i, j) stand for day and month, respectively, and εT is the monthly bias found when comparing RCM simulated temperature to observations. This value is calculated as

εT(j) = T̄obs(j) − T̄ctrl(j)    [9]

where T̄obs(j) is the mean daily observed temperature for month j, calculated as the mean of all days in month j for all 30 yr of the reference period (1961–1990), and T̄ctrl(j) is the mean daily RCM simulated temperature for month j in the control period (1961–1990). εT(j) is calculated for each RCM cell individually, using observed T data from the closest station. RCM T values for both the control and scenario periods are corrected using the same monthly bias-correction values.
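The monthly T correction can be sketched as follows; the per-cell matching to the closest station is omitted for brevity:

```python
import numpy as np

def monthly_t_bias(t_obs, obs_months, t_ctrl, ctrl_months):
    """Monthly bias: mean observed T minus mean RCM control T, per month."""
    return np.array([t_obs[obs_months == m].mean() - t_ctrl[ctrl_months == m].mean()
                     for m in range(1, 13)])

def correct_temperature(t_rcm, months, eps_t):
    """Add the monthly bias to each daily RCM T value (control or scenario)."""
    return t_rcm + eps_t[months - 1]
```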

Hydrological Model

The hydrological model used in this study is part of the National Water Resource Model for Denmark, which is described in Henriksen et al. (2003) and Sonnenborg et al. (2003). The particular submodel used in this study is also described in van Roosmalen et al. (2007, 2009). It is a transient, spatially distributed, groundwater–surface water model based on the MIKE SHE code (Abbott et al., 1986; Refsgaard and Storm, 1995). Three-dimensional groundwater flow is solved based on Darcy's law, using detailed input on the distribution of the geology and associated hydraulic properties. Considerable attention has been paid to the description of the geological settings in the model, so that groundwater flow can be described at high spatial resolution.

Interaction between groundwater and rivers is described as either base flow or drain flow. Base flow is modeled using a Darcy-type relationship between flux and head difference, where a base flow leakage coefficient that depends on the river bottom permeability acts as the controlling parameter. Drain flow is modeled as a linear reservoir, where drain flow is generated if the groundwater table is located above the specified drainage level. The drain system captures flow through small streams, ditches, and drain pipes that cannot explicitly be described in a large-scale model. The drain flow is calculated on the basis of distributed information on drain levels and drainage coefficients and is routed to the nearest river or the sea. River flow is simulated based on river geometry, slope, and the Manning roughness factor using the Muskingum–Cunge routing method (Chow et al., 1988) as implemented in the MIKE 11 model (Havnø et al., 1995). Unsaturated zone processes are computed using a two-layer water balance method (Yan and Smith, 1994) available in the MIKE SHE system (DHI, 2007). A more detailed description of the model setup can be found in van Roosmalen (2009).
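The drain-flow concept above can be illustrated with a minimal linear-reservoir sketch; the function, units, and coefficient are illustrative, and MIKE SHE's actual implementation also routes the flow to the nearest river or the sea:

```python
def drain_flow(head, drain_level, drain_coefficient):
    """Linear-reservoir drain flow: generated only when the groundwater
    table (head, m) lies above the drainage level (m); the drainage
    coefficient (1/s) controls how fast the excess head is drained."""
    if head <= drain_level:
        return 0.0
    return drain_coefficient * (head - drain_level)
```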

Meteorological Data Results

Comparison of Uncorrected and Bias-Corrected RCM Control Period Data with Observations

Observational grid cells 9 and 10 (Fig. 1) better represent the climatology in the northeast of the catchment than grid cells 17 and 18, of which most of the area is located to the east of the Mid-Jutland Ridge and closer to the coast. For this reason, the few RCM cells located in observational grid cells 17 and 18 are bias-corrected as if they were located in observational grid cells 9 and 10, respectively.

Annual Values

Table 2 shows the bias in mean annual values when comparing RCM control period P, T, and ETref to observations (simulated minus observed). For P and ETref the observational annual values are calculated using daily values for 1971–1990 (Table 1). For T climate normals for 1961–1990 (Scharling, 2000) are used to calculate the biases in Table 2 because a data set with daily values for 1971–1990 is not available. The RCM mean annual P and ETref values have been multiplied by 1.015 (365.25/360) to correct for the annual RCM data consisting of only 360 d.
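The calendar correction amounts to a single scaling, shown here for clarity (the function name is illustrative):

```python
# The HIRHAM4/HadAM3H experiment uses a 360-d model year, so annual
# totals must be rescaled before comparison with calendar-year observations.
def calendar_correct_annual(total_360d):
    """Scale an annual total from the 360-d model year to a 365.25-d year."""
    return total_360d * 365.25 / 360.0
```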

When averaged for all RCM cells in the hydrological model area, the RCM mean annual uncorrected control P is 160 mm (16%) lower than observations. Mean annual uncorrected RCM control ETref and T are 15 mm (3%) and 0.75°C higher than the corresponding observation values, respectively. After bias correction, mean annual RCM control P averaged for all cells is equal to mean annual observed P, but this is partly due to the negative bias for observational grid 4. Mean annual bias-corrected ETref is 4 mm (1%) lower than observed mean annual ETref.

Monthly Values

Table 3 shows the mean monthly and annual biases for uncorrected RCM control P, T, and ETref averaged for all RCM cells in the catchment. To calculate the biases, RCM monthly P and ETref are multiplied by 1.03 (31/30) for months with 31 d and 0.94 (28.25/30) for February. The annual values show small differences from the values presented above because the values above were calculated per observational grid cell and then averaged, whereas the values in Table 3 are based on the monthly values for all RCM grid cells. RCM-simulated T is too high for all months except October and November, with the bias being especially large for the winter months. Table 3 shows that monthly rather than seasonal T bias corrections are necessary because the difference between the model biases in December (0.9°C) and February (2.1°C) is relatively large, which can considerably influence the occurrence of snow in the hydrological model. The large negative annual P bias is mainly a result of the RCM simulating too little P in the months September through January. Large positive biases in ETref occur in September and October, while too low ETref is simulated in May and June.
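The day-count scaling above can be sketched as follows. The factor values (365.25/360, 31/30, 28.25/30) come from the text; the function itself is only an illustrative assumption about how the scaling would be applied.

```python
# Scaling RCM totals from the 360-day model calendar to real calendar months.
# Factor values follow the text; the implementation itself is illustrative.

ANNUAL_FACTOR = 365.25 / 360  # ~1.015, applied to mean annual P and ETref

MONTHLY_FACTORS = {
    31: 31.0 / 30.0,    # ~1.03 for 31-day months
    30: 1.0,            # 30-day months need no correction
    28: 28.25 / 30.0,   # ~0.94 for February (mean length incl. leap years)
}

def scale_monthly_total(rcm_total_mm, days_in_month):
    """Scale a monthly RCM total (mm) from the 360-day calendar."""
    return rcm_total_mm * MONTHLY_FACTORS[days_in_month]
```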

The RCM daily T values are corrected using the bias-correction values corresponding to the particular RCM cell (Eq. [9]). Hence, the mean monthly bias-correction values averaged for all RCM cells in the model area (equal to the mean monthly bias) are in principle found in Table 3. However, the calculation of the bias-correction value (Eq. [9]) is performed for each RCM cell separately by comparing to the station data. The result is that the mean monthly control period T for each RCM cell is equal to that of the station, but the distribution of daily T values may differ.
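A minimal sketch of this correction follows. Since Eq. [9] is not reproduced here, the additive form is an assumption based on the description: each daily value is shifted by the cell's mean monthly bias relative to the station.

```python
def bias_correct_temperature(t_daily, months, monthly_bias):
    """Shift each daily T by the RCM cell's mean monthly bias
    (RCM minus station, degrees C), so corrected control-period
    monthly means match the station while the shape of the daily
    distribution is otherwise preserved."""
    return [t - monthly_bias[m] for t, m in zip(t_daily, months)]

# Example with the catchment-mean biases quoted in the text
# (December: 0.9 degrees C, February: 2.1 degrees C)
corrected = bias_correct_temperature([1.5, -0.5], [12, 2], {12: 0.9, 2: 2.1})
```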

Table 4 shows the mean monthly biases for bias-corrected RCM control P and ETref averaged for all RCM cells in the catchment. At first, a fit on an annual basis was performed for P, which resulted in acceptable bias correction for some months (see Table 4). However, the method performed poorly for the wettest months, especially in autumn (September–November). Therefore, the fitting is performed for four seasons—winter, spring, summer, and autumn—because it is likely that the RCM simulation errors vary with the dominant weather type for each season. It is possible that fitting the PDFs on a monthly basis could improve the bias correction for the control period, but the smaller number of daily values when using monthly data could result in a less representative fit to the PDF. For P, the bias correction mainly reduces the magnitude of the negative bias during autumn and winter, while bias correction does not reduce the biases in spring P. For ETref, the too high evapotranspiration amounts in autumn and winter are reduced, whereas in other seasons both increases and decreases in the biases occur.
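The seasonal fitting can be illustrated with a quantile-mapping sketch. Empirical quantiles are used below for simplicity; the study fits parametric PDFs per season, so this is an assumption about the mechanics rather than the paper's exact method.

```python
import numpy as np

def dbs_quantile_map(rcm_p, obs_wet, rcm_wet):
    """Map RCM wet-day precipitation through the RCM control CDF and the
    inverse observed CDF for one season, so the corrected distribution
    approaches the observed one (empirical stand-in for fitted PDFs)."""
    rcm_sorted = np.sort(np.asarray(rcm_wet, dtype=float))
    # non-exceedance probability of each RCM value in the control climate
    q = np.searchsorted(rcm_sorted, rcm_p, side="right") / rcm_sorted.size
    return np.quantile(np.asarray(obs_wet, dtype=float), np.clip(q, 0.0, 1.0))
```

Applied separately per season (winter, spring, summer, autumn), this mirrors the study's choice of fitting four seasonal distributions rather than one annual distribution.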

Daily Temperature

To investigate the daily meteorological values, the observations for observational grid 10 and the uncorrected RCM data for all 12 RCM cells within the observational grid cell are compared. Figure 3 shows the histogram of T for observational grid cell 10 (station 6104) (1971–1990) and the mean of all histograms for uncorrected and bias-corrected RCM control period T for the 12 RCM cells within observational grid 10. Mean observed T for grid 10 is 7.4°C, while for the uncorrected and bias-corrected RCM data the mean T is 8.3 and 7.4°C, respectively. Minimum and maximum daily T equal −17.4 and 26.0°C, −14.1 and 27.2°C, and −15.6 and 25.5°C for observed, uncorrected RCM, and bias-corrected RCM T, respectively. Figure 3 shows that the shape of the histogram of bias-corrected RCM T is more similar to the histogram for observations than the histogram for uncorrected RCM data. Furthermore, the bias-correction method improves the relative frequency of near-zero temperatures. On the other hand, the relative frequency of temperatures between −8 and −1°C in the bias-corrected data set is higher than for observations and the uncorrected data set, which can affect near-zero temperatures in the scenario climate data set.

Daily Precipitation

The first step in the bias-correction method for P is the correction of superfluous drizzle in the RCM data set. The threshold values averaged for all RCM cells are 0.28 (0.17–0.40), 0.62 (0.55–0.67), 0.51 (0.47–0.56), and 0.16 (0.11–0.21) mm d−1 for winter, spring, summer, and autumn, respectively. The removal of all P below the threshold results in a reduction in mean annual uncorrected RCM control period P of −1.8% (−1.61 to −1.98%). The second step is the correction of daily P depending on the intensity. Figure 4 shows the histogram of P for the observations (1971–1990) in grid cell 10 and the mean of all histograms for uncorrected and bias-corrected RCM control period P for the 12 RCM cells within observational grid 10. Only wet days (P ≥ 0.1 mm d−1) are included in the analysis to reduce the effect of the superfluous drizzle simulated by the RCM on the relative frequency in the 0.0–1.0 mm d−1 bin. When using only wet days, 32% of the observations occur in the 0.0–1.0 mm d−1 bin, while for the uncorrected RCM data around 38% of the P events occur in this bin.
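The threshold step can be sketched as follows, assuming (this detail is not stated in the excerpt) that the cutoff is calibrated so the RCM wet-day count matches the observed wet-day count:

```python
def drizzle_threshold(obs_p, rcm_p, wet_day=0.1):
    """Cutoff below which RCM daily P is set to zero, chosen so the RCM
    series has the same number of wet days as the observations
    (an assumed calibration rule for removing superfluous drizzle)."""
    n_wet = sum(1 for p in obs_p if p >= wet_day)
    ranked = sorted(rcm_p, reverse=True)
    return ranked[n_wet - 1] if 0 < n_wet <= len(ranked) else 0.0

def remove_drizzle(rcm_p, threshold):
    """Zero out all RCM precipitation below the seasonal threshold."""
    return [p if p >= threshold else 0.0 for p in rcm_p]
```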

The RCM simulates too much low-intensity P and too little high-intensity P. The bias correction of P does not significantly improve the frequency distribution of P events in the range 2 to 8 mm d−1, but it does improve the representation of higher intensity P events. Mean daily P (only wet days included) in observational grid 10 for corrected RCM data (4.5 mm d−1) is much closer to observations (4.6 mm d−1) than the uncorrected RCM data (2.9 mm d−1). Maximum daily P of corrected RCM data (63 mm d−1) is higher than the observed maximum (54 mm d−1), while the uncorrected value is too low (47 mm d−1). The 5% exceedance value for the corrected RCM data occurs in the bin 17 to 18 mm d−1, which is the same intensity as for observations, while for the uncorrected RCM data this value occurs in the bin 11 to 12 mm d−1.

Daily Reference Evapotranspiration

The mean daily ETref in the bias-corrected RCM data set is 1.46 mm d−1, which is very close to the observed mean of 1.47 mm d−1, but the mean of the uncorrected RCM data set (1.52 mm d−1) is also relatively close to the mean for observations (3% higher). Figure 5 shows the histogram for observed ETref in observational grid cell 10 for the period 1971–1990 and the mean of all histograms of uncorrected and bias-corrected RCM control period ETref for the 12 RCM grid cells located within the 40 km cell (only bins up to maximum observed ETref are shown). It can be seen that the frequency of lower intensity evapotranspiration events is improved, while for a few bins (1.75–2.75 mm d−1) the bias-corrected ETref is worse than for the uncorrected data set. Both maximum uncorrected (11.4 mm d−1) and corrected (11.2 mm d−1) RCM ETref values are much higher than observed ETref (5.6 mm d−1) (not shown in Fig. 5). This is in agreement with findings by Ekström et al. (2007), where values up to 25 mm d−1 were found for northwest England. However, it was decided not to replace the Penman–Monteith equation with a different approach in this study because the number of days with ETref higher than 5.75 mm d−1 in the bias-corrected RCM data set is only 0.8%. The 5% exceedance value occurs in bin 4.25–4.5 mm d−1 for observations and the corrected data set, while it occurs in bin 4.00–4.25 mm d−1 in the uncorrected RCM data set.

Scenario Input

Delta Change Values

The delta change values were calculated using the average of all RCM cells in the hydrological model area (Table 5). An alternative could have been to calculate the delta change values for each observational grid cell separately, but it is more important to detect a strong regional climate signal for the whole catchment, which is less affected by noise than could be the case when using smaller samples of RCM cells. The spatial distribution of the meteorological variables still prevails because the input to the hydrological model is based on the meteorological data for each observational grid cell, but then perturbed by the same delta change values. Comparison of the delta change values in Table 5 with values calculated for the whole of Denmark (van Roosmalen et al., 2007) shows only small differences, namely up to 0.1°C for T, 0.07 for P, and 0.04 for ETref.
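The perturbation itself can be sketched as below: observed daily P is multiplied by the delta change factor for its month, and observed T is shifted by the monthly temperature change. The function and argument names, and the monthly resolution of the factors, are illustrative assumptions.

```python
def perturb_observed(obs_p, obs_t, months, p_factor, t_delta):
    """POD sketch: scale observed daily precipitation by the delta change
    factor for its month and add the monthly temperature change, leaving
    the observed spatial pattern and day-to-day variability intact."""
    p_scen = [p * p_factor[m] for p, m in zip(obs_p, months)]
    t_scen = [t + t_delta[m] for t, m in zip(obs_t, months)]
    return p_scen, t_scen
```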

Annual and Monthly Values

When using POD, mean annual T, P, and ETref increase by 3.1°C, 115 mm (11.5%), and 100 mm (18.6%), respectively, while for DBS the increases are 3.1°C, 134 mm (13.5%), and 96 mm (17.9%). Figure 6 shows monthly changes for T, P, and ETref, when comparing the scenario period to the control period after application of the POD and DBS methods. For POD the changes are determined by comparison with observations, while for DBS the changes are calculated by comparing the RCM bias-corrected data for the scenario period with that of the control period. Comparison of the bias-corrected RCM scenario period data with observations is more difficult to interpret because the absolute change in the variable would include both the change as simulated by the RCM and the monthly bias, which still remains after bias correction (see Table 4).

The largest projected increases in T occur from August to November for both methods. The high delta change factors for P during the winter months, combined with the high P amounts during these months in the observational data set (Fig. 2), result in high absolute increases in P for POD, although DBS also shows large increases in winter (Fig. 6). Significant relative decreases in P occur in August and September, but the absolute decreases are smaller than the absolute increases during winter. Generally, DBS shows larger increases in P than POD in the months when P is projected to increase and larger decreases in the months when P is projected to decrease. The relative increases in ETref during winter are much higher for POD than for DBS, but, due to the very low observed ETref during these months (Fig. 2), this only has a small effect on the absolute increase in ETref. On the other hand, large absolute increases in ETref are projected for August and September by both methods.

Daily Temperature

Comparison of the daily T values for the scenario period using the two methods shows that the distributions of the relative frequency of daily T values are quite similar (Fig. 3). Some differences occur in the bins with high relative frequency, and the POD method shows higher frequencies for very high T values than DBS. Not surprisingly, the shape of the histogram of daily scenario T values for POD is very similar to that of observations.

Daily Reference Evapotranspiration

The relative frequency of evapotranspiration events between 0.0 and 0.25 mm d−1 decreases in the scenario period for both methods (Fig. 7). DBS shows the highest frequency in the bins 0.25 through 1.0 mm d−1, but lower relative frequency from 1.25 to 2.75 mm d−1. As discussed above for the control period, the bias-corrected A2 scenario data show evapotranspiration events in high-intensity bins, where the relative frequency using POD is zero.

Daily Precipitation

Figure 8 shows the histogram of observed daily P (1971–1990) for a 40-km grid cell (number 10) and the A2 scenario daily P using POD and DBS. As for T, the relative frequency of P using POD is very similar to that of observations, although increases in the higher bins occur, due to multiplication of the highest daily P events with a delta factor generally higher than 1.0. A comparison of the absolute changes in relative frequency per bin for the A2 scenario with bias-corrected control P shows that the relative frequency of P events in the lower bins up to 4.0 mm d−1 decreases (range of decreases is −0.5 to −1.6%), while increases occur in the bins with P intensity above 4.0 mm d−1. When comparing the scenario P for POD to DBS, the relative frequency of P in the lower bins is quite similar for both methods, although POD results in 3% more events in the bin with 0.1 to 1.0 mm d−1 P and generally more P between 5.0 and 9.0 mm d−1. DBS results in slightly more P in the bins 1.0 through 5.0 mm d−1 and in the very high P intensity bins (>40 mm d−1).

Hydrological Results

The hydrological simulations using observations (1971–1990) and POD for the scenario period span a period of 20 yr. The hydrological simulations using the RCM data for the control (1961–1990) and scenario (2071–2100) period consist of 30 yr. The results presented in this section are based on the entire 20- or 30-yr period of the transient simulations. To establish appropriate initial conditions, the model is run three times, of which the second and third runs use the output of the previous run as initial conditions. The results are extracted for the third run only. When comparing the scenario period with the control period, the changes described for DBS are based on a comparison with the control period simulation using bias-corrected RCM data, while POD results are based on a comparison with the simulation using observations.
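The spin-up procedure described above amounts to the following loop; `run_model` stands in for a full MIKE SHE simulation and is purely hypothetical.

```python
def run_with_spinup(run_model, forcing, initial_state, n_runs=3):
    """Run the transient simulation n_runs times, feeding each run's final
    state back in as the next run's initial condition, and keep only the
    results of the last run (the study extracts the third run only)."""
    state = initial_state
    results = None
    for _ in range(n_runs):
        results, state = run_model(forcing, state)
    return results
```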

Total Water Balance

The most important components of the mean annual total water balance, spatially averaged for the whole catchment, are presented in Table 6 for the simulations using observations and the uncorrected and corrected RCM control data for the control and scenario period. The net recharge is defined as the outflow from the root zone minus the sum of evapotranspiration and net flow from the groundwater zone to the overland compartment for the grids where the soil profile is completely saturated. The net horizontal boundary outflow is the net outflow across the catchment boundary and accounts primarily for groundwater flow to the sea. Drain flow includes drainage from groundwater to rivers and drainage to the sea in coastal regions. Base flow is the net flow of groundwater to rivers.

Control Period

Bias correction of the RCM data improves the simulation of the main components of the water balance considerably. After correction the mean annual recharge is 1.6% higher than observations, whereas it is 35% lower than the observations before correction. After correction drain flow and base flow are 3 and 1% higher than observations, respectively, which is a significant improvement compared to the 46% error in drain flow and 25% error in base flow for the uncorrected RCM data. The higher recharge and discharge values for the bias-corrected RCM control period compared to observations are most likely due to the bias-correction method resulting in too low ETref values compared to observations (Table 3) because mean annual RCM P shows little to no positive bias when compared with observations. Also, the monthly distribution of the P bias cannot explain the positive bias in recharge because relatively large negative biases in RCM corrected control period P occur in the months November, December, and March (Table 4). In these months nearly all P contributes directly to groundwater recharge (Fig. 9), due to the saturated soil conditions. However, Fig. 9 shows that for most months the temporal distribution of the recharge for the simulation with corrected RCM data is very similar to that of observations and is greatly improved compared to the uncorrected RCM data.

Scenario Period

Mean annual recharge increases for both methods, but DBS results in a slightly higher increase than POD, namely 76 mm (15%) and 62 mm (12%), respectively. DBS also results in greater increases in drain flow, 62 mm (24%), than POD, 49 mm (19%). The increases in base flow are very similar, with increases of 15 mm (6%) and 16 mm (7%) for POD and DBS, respectively. Figure 9 shows that the mean monthly recharge in the scenario period increases in winter and decreases in summer, with very similar values for both methods. The main difference between the monthly recharge for the two methods occurs in the months with the highest recharge.

Mean Annual Groundwater Head

Two numerical layers in the model are analyzed here: Layer 1, which is the upper, unconfined aquifer, and Layer 5, the deeper, main aquifer in the area. For the simulations using observations, uncorrected RCM data, and corrected RCM data, mean annual groundwater head in Layer 1 equals 33.44, 32.42, and 33.53 m, while for Layer 5 values of 31.33, 29.94, and 31.45 m are found, respectively. In the scenario period, groundwater heads show slightly higher increases for POD than DBS. Layer 1 shows increases of 0.27 and 0.20 m and Layer 5 shows 0.38- and 0.30-m increases when using POD and DBS, respectively. This is a surprising result because the changes in mean annual recharge were slightly higher for DBS compared to POD. Figure 10 shows the absolute changes in mean annual groundwater head when using POD and DBS.

Stream Discharge

Table 7 shows mean annual discharge at six discharge stations (Fig. 1) for the simulations using observations and uncorrected and corrected RCM control period data and the results for the scenario period using DBS and POD. For the control period it can be seen that the bias correction improves the simulation of the mean annual discharges significantly. Mean annual discharges are slightly overestimated by the RCM simulation, which is mostly the result of the higher drain flow and to a lesser extent the higher base flow (Table 6) compared to the simulation using observations. The largest relative differences between simulated discharge using bias-corrected data and observations occur at discharge stations 25.11 (5%) and 25.08 (17%), but for station 25.08 the high relative value is mainly due to the low absolute value of mean annual discharge.

Table 8 shows the quantiles at three discharge stations for the simulations using observations, bias-corrected RCM control period data, and A2 scenario data using POD and DBS. The RCM control period quantile values are very close to the simulated values using observed data, although there is a slight tendency to overestimate discharges up to the median value, while the highest quantile value is underestimated. The overestimation of discharges below the median is likely the result of the RCM control period simulation generally showing higher annual discharges than when using observations (Table 7) and because bias-corrected control period P has a higher frequency than observations in low-intensity bins (Fig. 4). However, to explain the underestimation of the 95% quantile, further analysis of extreme P intensity and duration is needed, but this is not performed here because the differences in the values are relatively small.
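The quantile comparison in Table 8 can be reproduced from a daily discharge series as below; the specific quantile levels are assumptions, since Table 8 itself is not shown here.

```python
import numpy as np

def discharge_quantiles(q_daily, probs=(0.05, 0.25, 0.50, 0.75, 0.95)):
    """Quantiles of a daily discharge series (same units as the input),
    for comparing observation-driven, control, and scenario runs."""
    q_daily = np.asarray(q_daily, dtype=float)
    return {p: float(np.quantile(q_daily, p)) for p in probs}
```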

Similar to the control period simulations, the scenario period simulation using RCM bias-corrected data results in higher discharge values than for the scenario simulation using POD, except for the 95% quantile. When comparing the POD and DBS scenario simulations with observations and control period data, respectively, daily discharges up to and including the median show no change or relatively small increases, while larger increases occur for the higher quantile values. POD shows slightly higher increases for the lower quantiles, but for the higher quantiles no distinct difference between the two methods can be seen. However, the histograms of daily discharges for station 25.14 for the control and scenario period simulations using POD and DBS (Fig. 11) show that for higher discharges, DBS shows larger increases than POD. Further analysis is needed to determine whether this larger increase in high discharges for the DBS method is caused by the larger increase in high-intensity precipitation events, when comparing DBS and POD (Fig. 8).

Figure 12 shows mean monthly discharge for the simulations using observations, bias-corrected RCM control data, and A2 scenario data using DBS and POD. For the scenario period, the peak discharge occurs in January when using POD, whereas the peak occurs in February for DBS. For most summer months, DBS results in larger relative decreases than POD. This could possibly be due to an increase in the occurrence or duration of droughts, as projected for scenario climates, but further analysis is needed to determine this.


Conclusions

Projected changes in groundwater and stream discharges for a large-scale, agricultural catchment were investigated through simulations with a distributed, physically based, surface water–groundwater model. The hydrological model was driven by P, ETref, and T data of the HIRHAM4 regional climate model (RCM) at 12-km resolution for the control period 1961–1990 and the scenario period 2071–2100 using POD and DBS. The aim of this study was to determine whether and to what extent the choice of bias correction or transfer method affected the projected changes in the hydrological system.

Bias correction significantly improved the representation of the historical, observed climate by the RCM data for the control period. After bias correction, only very small biases were found for the mean annual RCM control period data, but some months still showed more considerable biases. Also, the distribution of the relative frequency of daily, bias-corrected RCM data differed slightly from observations. Hence, the bias-correction method resulted in RCM control period data closely approaching the observed climate data, but small biases remained, which could possibly affect the simulated hydrological variables. Comparison of the water balance for a simulation using bias-corrected RCM control period data with a simulation using observations showed slightly higher mean annual groundwater recharge (2%), drain flow (3%), and base flow (1%), resulting in slightly higher mean annual groundwater levels and stream discharges.

When the projected changes in hydrological variables for the scenario period simulations using POD and DBS were compared, only small differences between the two methods were found. Mean annual recharge increased for both methods, but DBS resulted in a slightly higher increase (15%) than POD (12%). DBS also resulted in larger increases in mean annual drain flow (24%) than POD (19%), while the increases in base flow were similar (7%). Contrary to the results for groundwater recharge, mean annual groundwater heads showed slightly higher increases for POD than DBS, namely 0.27 m compared to 0.20 and 0.38 m compared to 0.30 m, for two layers in the model. The larger increase in drain flow and the smaller increase in groundwater head for DBS compared to POD are possibly due to the groundwater levels for the current climate being higher for DBS than for POD. This results in groundwater reaching the drains sooner for the DBS method than for POD, thus resulting in more drain flow and a smaller increase in groundwater levels. For both methods, daily stream discharges up to and including the median showed no change or relatively small increases, while larger increases occurred for the higher quantile values. POD showed slightly higher increases for the lower quantiles than DBS, but for the higher quantiles, no consistent difference between the two methods occurred.

From this study it can be concluded that the choice of bias-correction method did not have a significant influence on the projected changes of mean hydrological responses in the catchment. The methods resulted in differences in projected changes in meteorological variables, but the variations in the projected distributions and the absolute changes were not large enough to impact the processes in the hydrological system in a significantly different manner. However, this conclusion is only valid for projected changes in the mean because changes in extremes were not analyzed here. Further analysis is needed to determine differences in the methods with respect to, for example, flooding (extreme discharges), irrigation (droughts), and leaching of nitrate or pesticides (extreme P). Furthermore, the insensitivity of the hydrological results to the choice of method could be caused by the characteristics of the hydrological system, making it less susceptible to changes in the probability density function of the meteorological variables. For example, stream discharge in this catchment is dominated by groundwater processes, which are not directly coupled to P intensity. Also, increases in high-intensity P events may not significantly affect surface runoff or drain discharge because the vertical hydraulic conductivity of the unsaturated zone and the drainage capacity are sufficiently large to cope with increased P intensities. Comparative studies in catchments with other dominant hydrological characteristics are necessary to further investigate how these characteristics influence the choice of bias-correction or transfer method.

Data were provided through the PRUDENCE data archive, funded by the EU through contract EVK2-CT2001-00132.
