
Install

Software Requirements

  • Fortran 90/95 and C compiler

  • MPI or OpenMP libraries

  • netCDF4 library, linked with the HDF5 and zip libraries and extended by the Fortran netCDF package (the netCDF4 package comes with the programs ncdump and nccopy)

  • UNIX utilities: make, ksh, uname, sed, awk, wget, etc.

  • For post-processing: Climate Data Operators (CDO) and netCDF Operators (NCO)

  • An ICON binary
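
A quick way to check whether the main prerequisites are available on your system (a minimal sketch; exact module or package names differ between machines):

which make ksh sed awk wget     # basic UNIX utilities
nf-config --version             # Fortran netCDF interface
nc-config --has-hdf5            # netCDF4 built with HDF5 support
which ncdump nccopy             # programs shipped with netCDF4
cdo -V                          # Climate Data Operators
ncks --version                  # netCDF Operators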

CCLM_SP uses the netCDF I/O of COSMO-CLM.

The starter package is written for the HLRE-3 system Mistral at DKRZ. Additional changes need to be applied on other machines. On Mistral the Simple Linux Utility for Resource Management (SLURM) is installed as the batch system.
On Mistral please use OpenMPI by setting:

module load intel/17.0.2
module load openmpi/2.0.2p1_hpcx-intel14

export OMPI_MCA_pml=cm
export OMPI_MCA_mtl=mxm
export MXM_RDMA_PORTS=mlx5_0:1
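
If you do not want to retype these settings in every new shell, one possibility (purely a convenience; the file name below is just an example) is to collect them in a small script and source it before compiling or submitting jobs:

cat > mistral_mpi_env.sh << 'EOF'
module load intel/17.0.2
module load openmpi/2.0.2p1_hpcx-intel14
export OMPI_MCA_pml=cm
export OMPI_MCA_mtl=mxm
export MXM_RDMA_PORTS=mlx5_0:1
EOF
source mistral_mpi_env.sh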


To install the starter package, follow these steps:

1. Copy the starter package from RedC.
2. Unpack the starter package:

tar -xzvf cclm-sp-4.0.tgz
mv cclm-sp-4.0 yourpath/cclm-sp

3. Copy and unpack the supplementary data files for testing the starter package (the program wget needs to be installed on your system):

cd yourpath/cclm-sp/data
./get_sp_ext.sh

4. Change to the directory yourpath/cclm-sp/configure_scripts
5. Adjust the settings in the file system_settings to your computer system
6. Type the following command to create a default test experiment:

./config.sh

This first compiles the necessary cfu program and the fortran-csv-lib, and then creates the test experiments ${SPDIR}/chain/gcm2cclm/sp001 and ${SPDIR}/chain/cclm2cclm/sp002.
Or, if you want to create a default experiment including add-ons, type:

./config.sh -a addon1,addon2,…


1 Get the source code

Go to https://redc.clm-community.eu/projects/icon-clm-starter-package/wiki and download the latest revision, i.e. tag (e.g. 1.0 in the following), as a tarball. Copy spice-v1.0.tar.gz to your computing system and proceed like so:

$ tar -xvf spice-v1.0.tar.gz
$ cd spice-v1.0
$ SPDIR=$PWD # used as a shortcut in the following
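
Note that SPDIR is only known to the current shell. If you prefer to have it available in sub-shells or later sessions as well (purely a convenience, not required by SPICE), you can export it:

$ export SPDIR=$PWD
$ echo $SPDIR    # should print the path of your spice-v1.0 directory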


2 Get supplementary data 

2.1 Get the example constant and external data files

$ cd ${SPDIR}/data
$ ./get_spice_rcm.sh

A directory rcm is created holding the necessary data to run the ICON-CLM test experiment.
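
A quick, optional sanity check that the download completed:

$ ls ${SPDIR}/data/rcm       # should list the example constant and external data files
$ du -sh ${SPDIR}/data/rcm   # total size of the downloaded data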

3 Configure SPICE and run the test examples


Configuration at DKRZ or DWD

Call the script config.sh at DKRZ like so:

$ cd ${SPDIR}/configure_scripts
$ ./config.sh -s dkrz

or at DWD like so:

$ cd ${SPDIR}/configure_scripts
$ ./config.sh -s dwd

This will create two directories containing the basic scripts. You will find them under ${SPDIR}/chain/gcm2icon/sp001 and ${SPDIR}/chain/icon2icon/sp002.
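
A quick look confirms that both experiment directories and their basic scripts are in place:

$ ls ${SPDIR}/chain/gcm2icon/sp001 ${SPDIR}/chain/icon2icon/sp002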

Run the test examples

There are two tests: one for testing ICON with GCM or reanalysis data as initial and boundary conditions (${SPDIR}/chain/gcm2icon/sp001) and one for testing ICON with coarse grid ICON data as initial and boundary conditions (${SPDIR}/chain/icon2icon/sp002). Note that sp001 creates the necessary input data for sp002.

Before you start the experiments, look for the following environment variables in the job_settings files of sp001 and sp002 and adapt them to your needs.

PROJECT_ACCOUNT=  # your project account
EMAIL_ADDRESS=    # your email address if you want to get information when your job crashes or finishes
BINARY_ICON=      # ICON executable including full path
ECRADDIR=         # path to the ECRAD data directory, if you plan to use the ECRAD radiation scheme
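
A filled-in block might look like the following (all values are placeholders for illustration only; insert your own project account, address, and paths):

PROJECT_ACCOUNT=bb1234                               # placeholder project account
EMAIL_ADDRESS=jane.doe@example.org                   # placeholder email address
BINARY_ICON=/work/bb1234/icon/build/bin/icon         # placeholder path to your ICON executable
ECRADDIR=/work/bb1234/icon/externals/ecrad/data      # placeholder; only needed if ECRAD is used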

If you are not running the tests at DKRZ:

Adapt the input directory of the ERA-Interim data

GCM_DATADIR=/pool/data/CCLM/reanalyses/ERAInterim

and probably the de-tar part in prep.job.sh.
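
A quick, optional check that the input data can actually be read from your data directory:

$ ls /pool/data/CCLM/reanalyses/ERAInterim | head    # replace with your own GCM_DATADIR if it differs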

Now you should be ready to start the first experiment:

$ cd ${SPDIR}/chain/gcm2icon/sp001
$ ./subchain start

This experiment is a two-month simulation: 50 km resolution over Europe, driven by ERA-Interim.

After successful completion start the second one:

$ cd ${SPDIR}/chain/icon2icon/sp002
$ ./subchain start

This experiment is a two-month simulation: 3 km resolution for a region around Hamburg, driven by the ICON output of sp001.


Configuration on an unsupported computing platform

3.1 Create the supplemental programs

a. Create the Fortran library libcsv

This library is used for reading CSV data files. The original source code can be found on GitHub at https://github.com/jacobwilliams/fortran-csv-module.

Choose a Fopts file in the directory LOCAL and copy it to the base directory of libcsv (here we choose Fopts.dkrz as an example):

$ cd ${SPDIR}/src/fortran-csv-lib
$ cp LOCAL/Fopts.dkrz Fopts

Adapt the Fopts file to your system and type:

$ make

After successful compilation you will find libcsv in ${SPDIR}/src/fortran-csv-lib/lib.

b. Create the cfu executable

The Climate Fortran Utilities (cfu) contain several functions needed in the runtime environment.

Choose a Fopts file in the directory LOCAL and copy it to the base directory of cfu (here we choose Fopts.dkrz as an example):

$ cd ${SPDIR}/src/cfu
$ cp LOCAL/Fopts.dkrz Fopts

Adapt the Fopts file to your system and type:

$ make

After successful compilation you will find the cfu executable in ${SPDIR}/src/cfu/bin.

c. Create additional conversion programs

The programs are used to convert COSMO-CLM caf files to ICON-CLM compatible caf files (ccaf2icaf) and to correct the netCDF output of ICON-CLM (correct_cf).

Choose a Fopts file in the directory LOCAL and copy it to the base directory of utils (here we choose Fopts.dkrz as an example):

$ cd ${SPDIR}/src/utils
$ cp LOCAL/Fopts.dkrz Fopts

Adapt the Fopts file to your system and type:

$ make

After successful compilation you will find the executables ccaf2icaf and correct_cf in ${SPDIR}/src/utils/bin.
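
After all three builds have finished, a quick check that everything is in place:

$ ls ${SPDIR}/src/fortran-csv-lib/lib    # library from step a
$ ls ${SPDIR}/src/cfu/bin                # cfu executable from step b
$ ls ${SPDIR}/src/utils/bin              # ccaf2icaf and correct_cf from step c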

3.2 Configure SPICE on your computing system

If you do not intend to run ICON-CLM at DKRZ or DWD, you have to make some adaptations. First, find out which batch system comes closest to yours: DKRZ uses SLURM (i.e. SBATCH commands) and DWD uses the Portable Batch System (i.e. PBS commands). As an example, let us suppose in the following that you use SLURM on your computing system and therefore take "dkrz" as a template.

a. Change into the configure_scripts directory:

$ cd ${SPDIR}/configure_scripts

b. Adapt the dkrz part of the system_settings.tmpl file to the settings on your system.

c. Run the config.sh script:

$ cd ${SPDIR}/configure_scripts
$ ./config.sh -s dkrz

This will create two directories containing the basic scripts. You will find them under ${SPDIR}/chain/gcm2icon/sp001 and ${SPDIR}/chain/icon2icon/sp002.

Now comes the hardest part: you have to dive into the scripts in these directories and adapt them to your system (e.g. modify batch commands, program calls, etc.). Start with the experiment ${SPDIR}/chain/gcm2icon/sp001.
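
One way to get an overview of what needs to be adapted (just a suggestion; adjust the search patterns to your batch system) is to search the experiment scripts for batch directives and submission commands:

$ cd ${SPDIR}/chain/gcm2icon/sp001
$ grep -rn "#SBATCH" .            # SLURM batch directives
$ grep -rn -e sbatch -e srun .    # SLURM submission and launch commands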

Run the test examples

There are two tests: one for testing ICON with GCM or reanalysis data as initial and boundary conditions (${SPDIR}/chain/gcm2icon/sp001) and one for testing ICON with coarse grid ICON data as initial and boundary conditions (${SPDIR}/chain/icon2icon/sp002). Note that sp001 creates the necessary input data for sp002.

sp001 is a two-month simulation: 50 km resolution over Europe, driven by ERA-Interim.

sp002 is a two-month simulation: 3 km resolution for a region around Hamburg, driven by the ICON output of sp001.

Before you start an experiment, look for the following environment variables in its job_settings file and adapt them to your needs.

PROJECT_ACCOUNT=  # your project account
EMAIL_ADDRESS=    # your email address if you want to get information when your job crashes or finishes
BINARY_ICON=      # ICON executable including full path
ECRADDIR=         # path to the ECRAD data directory, if you plan to use the ECRAD radiation scheme

Adapt the input directory of the ERA-Interim data

GCM_DATADIR=/pool/data/CCLM/reanalyses/ERAInterim

and probably the de-tar part in prep.job.sh.

Now you should be ready to start the first experiment:

$ cd ${SPDIR}/chain/gcm2icon/sp001
$ ./subchain start

In SPICE the scripts are called in the order prep.job.sh, conv2icon.job.sh, icon.job.sh, arch.job.sh, post.job.sh. If your job crashes in one of these scripts, you do not have to rerun all the scripts that completed successfully; after you have made the corrections, restart from the failed script by submitting the appropriate command from the following list (an example follows below):

./subchain prep
./subchain conv2icon
./subchain icon noprep
./subchain arch
./subchain post
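
For example, if the chain stopped in the ICON step of sp001, correct the problem and resubmit only that step:

$ cd ${SPDIR}/chain/gcm2icon/sp001
$ ./subchain icon noprep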


After successful completion of the sp001 experiment, adapt the scripts in sp002 and start this experiment:

$ cd ${SPDIR}/chain/icon2icon/sp002
$ ./subchain start




