Software Requirements

  • Fortran 90/95 and C compiler

  • MPI or OpenMP libraries

  • netCDF4 library, linked with the HDF5 and zlib libraries and extended by the Fortran netCDF package (the netCDF4 package comes with the programs ncdump and nccopy)

  • UNIX utilities: make, ksh, uname, sed, awk, wget, etc.

  • For post-processing: Climate Data Operators (CDO) and netCDF Operators (NCO)

  • An ICON binary
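
Whether the command-line prerequisites are available can be checked with a small shell loop before installing anything. This helper is not part of SPICE; the tool list mirrors the requirements above (ncdump/nccopy come with netCDF4, cdo with CDO, ncks with NCO) and can be extended as needed:

```shell
#!/bin/sh
# check_tools: print every given command that is not found on the PATH.
check_tools() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "MISSING: $tool"
  done
}

# Tools required or recommended on this page; adjust to your setup.
check_tools make ksh uname sed awk wget ncdump nccopy cdo ncks
```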

Info

SPICE uses the netCDF I/O of ICON-CLM.

Create an ICON Binary

The ICON source code does not come with SPICE because it is not distributed by the CLM-Community. Copy the latest ICON source code from the ICON download page to your working directory. If you do not yet have access (i.e. no private license), go to the License page first.

...

Code Block
languagebash
$ xz -d icon-2.6.5.tar.xz
$ tar -xf icon-2.6.5.tar
$ cd icon-2.6.5

When successful, patch the code using this patch and retrieve this configure wrapper dedicated to the DKRZ machine. To download both, patch and wrapper, copy the file icon-2.6.5.patch.gz from the CLM-Community RedC pages, unpack it, and read the README file. Copy the wrapper to the subdirectory ‘config/clm/’. Then create your build directory and compile ICON:

Code Block
languagebash
$ mkdir build
$ cd build
$ ../config/clm/levante.intel-2021.5.0_ecrad_2.6.5_lessoptim --disable-coupling --disable-ocean --disable-jsbach
$ LANG=en_US.utf8
$ make -j 8

Install SPICE

1 Get the source code

Go to https://hcdc.hereon.de/clm-community/wiki/wg-suptech/icon-clm/spice/ and download the latest revision, i.e. tag (2.0 in the following examples), as a tarball. Copy spice-v2.0.tar.gz to your computing system and proceed as follows:

...

Code Block
languagebash
$ cd ${SPDIR}/data
$ ./get_spice_rcm.sh

A directory rcm is created holding the necessary data to run the ICON-CLM test experiment.

3 Configure SPICE and run the test examples

...

Expand
titleConfiguration at DKRZ or DWD

Call the script config.sh at DKRZ Levante like so:

Code Block
languagebash
$ cd ${SPDIR}/configure_scripts
$ ./config.sh -s dkrz-levante

at DWD NEC like so:

Code Block
languagebash
$ cd ${SPDIR}/configure_scripts
$ ./config.sh -s dwd-nec

or at CSCS Daint like so:

Code Block
languagebash
$ cd ${SPDIR}/configure_scripts
$ ./config.sh -s cscs-daint

This creates two directories containing the basic scripts: ${SPDIR}/chain/gcm2icon/sp001 and ${SPDIR}/chain/icon2icon/sp002.

Run the test examples

There are two tests: one for ICON with GCM or reanalysis data as initial and boundary conditions (${SPDIR}/chain/gcm2icon/sp001), and one for ICON with coarse-grid ICON data as initial and boundary conditions (${SPDIR}/chain/icon2icon/sp002). Note that sp001 creates the necessary input data for sp002.

Before you start an experiment, look for the following environment variables in the job_settings files of sp001 and sp002 and adapt them to your needs.

Code Block
languagebash
PROJECT_ACCOUNT=  # your project account, if required on your system
NOTIFICATION_ADDRESS=    # your notification e-mail address, if you want to be informed when a job crashes or finishes
BINARY_ICON=      # ICON executable, including the full path
ECRADDIR=         # path to the ECRAD data directory, if you plan to use the ECRAD radiation scheme
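
As an illustration, a filled-in version at DKRZ Levante might look like the following. All values are placeholders, not working settings; substitute your own project account, e-mail address, and paths:

```shell
PROJECT_ACCOUNT=bb1234                            # placeholder project account
NOTIFICATION_ADDRESS=user@example.com             # placeholder e-mail address
BINARY_ICON=${HOME}/icon-2.6.5/build/bin/icon     # placeholder path to the ICON executable
ECRADDIR=${HOME}/icon-2.6.5/externals/ecrad/data  # placeholder path to the ECRAD data
```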


Info

If you are not running the tests at DKRZ:

Adapt the input directory of the ERA-Interim data

Code Block
languagebash
GCM_DATADIR=/pool/data/CCLM/reanalyses/ERAInterim

and possibly the de-tar part in prep.job.sh.

Now you should be ready to start the first experiment:

Code Block
languagebash
$ cd ${SPDIR}/chain/gcm2icon/sp001
$ ./subchain start

This experiment is a two-month simulation: 50 km grid spacing, Europe domain, driven by ERA-Interim.

After successful completion start the second one:

Code Block
languagebash
$ cd ${SPDIR}/chain/icon2icon/sp002
$ ./subchain start

This experiment is a two-month simulation: 3 km grid spacing, a region around Hamburg, driven by the ICON output of sp001.


...

Expand
titleConfiguration on an unsupported computing platform

3.1 Create the supplemental programs

a. Create the Fortran library libcsv

This library is used for reading csv data files. The original source code can be found on GitHub under https://github.com/jacobwilliams/fortran-csv-module.

Choose a Fopts file in the directory LOCAL and copy it to the base directory of libcsv (here we choose Fopts.dkrz-levante as an example):

Code Block
languagebash
$ cd ${SPDIR}/src/fortran-csv-lib
$ cp LOCAL/Fopts.dkrz-levante Fopts

Adapt the Fopts file to your system and type:

Code Block
languagebash
$ make

After successful compilation you will find libcsv in ${SPDIR}/src/fortran-csv-lib/lib.

b. Create the cfu executable

The climate Fortran utilities (cfu) contain several functions needed in the runtime environment.

Choose a Fopts file in the directory LOCAL and copy it to the base directory of cfu (here we choose Fopts.dkrz-levante as an example):

Code Block
languagebash
$ cd ${SPDIR}/src/cfu
$ cp LOCAL/Fopts.dkrz-levante Fopts

Adapt the Fopts file to your system and type:

Code Block
languagebash
$ make

After successful compilation you will find the cfu executable in ${SPDIR}/src/cfu/bin.

c. Create additional conversion programs

The programs are used to convert COSMO-CLM caf-files to ICON-CLM compatible caf-files (ccaf2icaf) and to correct the netCDF output of ICON-CLM (correct_cf).

Choose a Fopts file in the directory LOCAL and copy it to the base directory of the utilities (here we choose Fopts.dkrz-levante as an example):

Code Block
languagebash
$ cd ${SPDIR}/src/utils
$ cp LOCAL/Fopts.dkrz-levante Fopts

Adapt the Fopts file to your system and type:

Code Block
languagebash
$ make

After successful compilation you will find the executables ccaf2icaf and correct_cf in ${SPDIR}/src/utils/bin.

3.2 Configure SPICE on your computing system

If you do not intend to run ICON-CLM at DKRZ or DWD, you have to make some adaptations. First, find out which batch system is closest to the one on your system. DKRZ uses SLURM (i.e. SBATCH directives) and DWD uses the Portable Batch System (i.e. PBS directives). In the following, let us suppose that your computing system uses SLURM and that you therefore use "dkrz" as a template.
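
For orientation when adapting the batch headers in the scripts, here are a few common SLURM directives with their approximate PBS counterparts. The job name, account, queue, node count, and walltime values are placeholders; the real values depend on your system and experiment:

```shell
# SLURM (as used at DKRZ)       PBS equivalent (as used at DWD)
#SBATCH --job-name=sp001        # #PBS -N sp001
#SBATCH --account=bb1234        # #PBS -A bb1234
#SBATCH --partition=compute     # #PBS -q <queue>
#SBATCH --nodes=4               # #PBS -l select=4
#SBATCH --time=08:00:00         # #PBS -l walltime=08:00:00
```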

a. Change into the configure_scripts directory:

Code Block
languagebash
$ cd ${SPDIR}/configure_scripts

b. Adapt the dkrz part in the system_settings.tmpl file to the settings of your system.

c. Run the config.sh script:

Code Block
languagebash
$ cd ${SPDIR}/configure_scripts
$ ./config.sh -s dkrz-levante

This creates two directories containing the basic scripts: ${SPDIR}/chain/gcm2icon/sp001 and ${SPDIR}/chain/icon2icon/sp002.

Now comes the hardest part: you have to dive into the scripts in these directories and adapt them to your system (e.g. modify batch commands, program calls, etc.). Start by adapting the experiment ${SPDIR}/chain/gcm2icon/sp001.

Run the test examples

There are two tests: one for ICON with GCM or reanalysis data as initial and boundary conditions (${SPDIR}/chain/gcm2icon/sp001), and one for ICON with coarse-grid ICON data as initial and boundary conditions (${SPDIR}/chain/icon2icon/sp002). Note that sp001 creates the necessary input data for sp002.

sp001 is a two-month simulation: 50 km grid spacing, Europe domain, driven by ERA-Interim.

sp002 is a two-month simulation: 3 km grid spacing, a region around Hamburg, driven by the ICON output of sp001.

Before you start an experiment, look for the following environment variables in the job_settings files of sp001 and sp002 and adapt them to your needs.

Code Block
languagebash
PROJECT_ACCOUNT=  # your project account, if required on your system
NOTIFICATION_ADDRESS=    # your notification e-mail address, if you want to be informed when a job crashes or finishes
BINARY_ICON=      # ICON executable, including the full path
ECRADDIR=         # path to the ECRAD data directory, if you plan to use the ECRAD radiation scheme

Adapt the input directory of the ERA-Interim data

Code Block
languagebash
GCM_DATADIR=/pool/data/CCLM/reanalyses/ERAInterim

and possibly the de-tar part in prep.job.sh.

Now you should be ready to start the first experiment:

Code Block
languagebash
$ cd ${SPDIR}/chain/gcm2icon/sp001
$ ./subchain start

In SPICE the scripts are called in the order prep.job.sh, conv2icon.job.sh, icon.job.sh, arch.job.sh, post.job.sh. If your job crashes in one of these scripts, you do not have to rerun the scripts that already completed successfully; after making your corrections, restart the failed script by submitting the appropriate command from the following list:

Code Block
languagebash
./subchain prep
./subchain conv2icon
./subchain icon noprep
./subchain arch
./subchain post


After successful completion of the sp001 experiment, adapt the scripts in sp002 and start that experiment:

Code Block
languagebash
$ cd ${SPDIR}/chain/icon2icon/sp002
$ ./subchain start



...