The catchment data sets used in GREAT-ER are large and relatively complex structures. All catchments included in GREAT-ER Version 1.0 were processed largely automatically, and several routines have been developed to aid the addition of a new catchment. In addition to some smaller software tools, a full Geographical Information System (ARC/INFO) is required; the data processing routines are therefore not included in the GREAT-ER 1.0 package. Please contact ECETOC for further information.
As a first introduction, a general overview of the processing steps is given here:
For data as complex and comprehensive as those required to build a GREAT-ER catchment data set, it is only to be expected that the available data exist in several different formats and/or resolutions. This problem is solved by the definition of an intermediate file format: for each data group, GREAT-ER specifies a pre-defined format. The required files have to be produced from the original raw data, either manually or with special routines which most likely have to be developed from scratch.
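The following is a minimal sketch of what such a conversion routine for step 1 might look like. The field names, delimiters and target column layout are assumptions chosen for illustration only; they do not reflect the actual GREAT-ER intermediate format, which has to be taken from the format definitions supplied with the data processing routines.

```python
# Hypothetical example: convert a raw river-network attribute table into a
# GREAT-ER-style intermediate file. Raw field names (REACH, DOWNSTREAM, LEN_M,
# QMEAN) and the target layout are assumptions, not the actual GREAT-ER format.
import csv

def convert_raw_reaches(raw_path: str, out_path: str) -> None:
    """Read a raw, semicolon-delimited reach table and write a
    tab-separated intermediate file with a fixed column order."""
    with open(raw_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src, delimiter=";")
        writer = csv.writer(dst, delimiter="\t")
        writer.writerow(["reach_id", "downstream_id", "length_km", "mean_flow_m3s"])
        for row in reader:
            writer.writerow([
                row["REACH"],                    # assumed raw field names
                row["DOWNSTREAM"],
                float(row["LEN_M"]) / 1000.0,    # convert metres to kilometres
                float(row["QMEAN"]),
            ])

if __name__ == "__main__":
    convert_raw_reaches("raw_reaches.csv", "reaches.tab")
```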
Figure 2.31: Two-step data processing
Once you have converted all the required data, the second step is quite simple. Most of the work is done automatically via makefiles and several scripts, which drive the format conversions, the joining of the data and, finally, the installation of the new catchment.
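To illustrate the kind of orchestration these makefiles and scripts perform in step 2, here is a small sketch of a driver that runs the individual tools in order and installs the result. All command names, file names and the installation directory are hypothetical; the actual build chain is defined by the makefiles shipped with the processing routines.

```python
# Sketch of a step-2 driver: run conversion tools, join the results and copy
# the finished catchment into the installation directory. All command and
# directory names below are hypothetical placeholders.
import shutil
import subprocess
from pathlib import Path

STEPS = [
    ["convert_rivers", "rivers.tab", "rivers.bin"],                 # format conversions
    ["convert_discharges", "sites.tab", "sites.bin"],
    ["join_catchment", "rivers.bin", "sites.bin", "catchment.db"],  # data joining
]

def build_catchment(install_dir: str) -> None:
    for cmd in STEPS:
        subprocess.run(cmd, check=True)      # abort on the first failing step
    # final installation: copy the joined data set into the GREAT-ER data tree
    Path(install_dir).mkdir(parents=True, exist_ok=True)
    shutil.copy("catchment.db", install_dir)

if __name__ == "__main__":
    build_catchment("greater/data/new_catchment")
```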
Performing these steps (especially step 1) requires some basic knowledge of software engineering and, of course, of the software being used.
All routines used to produce the currently included catchments are available as source code (e.g. for the MicroLowFlows data sets) and can be studied.