In order to run the global JULES model (i.e. suite u-as052 or its variants) on CEDA JASMIN, follow these steps:
- You will need access to the ncas_generic group workspace to run this suite.
- Log in to the Cylc server:
ssh -AX jasmin-cylc.ceda.ac.uk
- Check out the suite (or a copy of the suite), so that it appears under ~/roses/u-as052.
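A typical checkout, assuming the standard Rosie client is available on jasmin-cylc (a sketch, not reproduced from the original page), looks like:

```
rosie checkout u-as052
```

This places a working copy of the suite under ~/roses/u-as052.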
- Edit the file suite.rc so that, in the four places that write log files, the string ‘pmcguire’ is replaced by your username.
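The edit above can be scripted with sed; this is a minimal sketch, assuming ‘pmcguire’ occurs in suite.rc only in the log-path lines. A stand-in suite.rc with two such lines is created here so the commands can run anywhere; on JASMIN you would run just the sed line inside ~/roses/u-as052.

```shell
# Stand-in suite.rc with two hypothetical log-path lines (the real file has four).
printf "log = /work/scratch/pmcguire/logs/a.log\nlog = /work/scratch/pmcguire/logs/b.log\n" > suite.rc
# Replace every occurrence of pmcguire with your username ('YourUsername' here).
sed -i 's/pmcguire/YourUsername/g' suite.rc
# Verify: count the lines that now mention your username.
grep -c 'YourUsername' suite.rc
```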
- The job.out files are redirected in this suite to the scratch disk. The Cylc GUI will therefore not find the log files, so you will need to create this directory on the scratch disk, and look there later to see the log files:
mkdir -p /work/scratch/YourUsername/logs
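After a run, the quickest check is to list the newest files in that directory. A small sketch, using a stand-in directory so it can run anywhere; on JASMIN the directory is /work/scratch/YourUsername/logs:

```shell
# Stand-in log directory; on JASMIN use /work/scratch/YourUsername/logs.
logdir=logs_demo
mkdir -p "$logdir"
touch "$logdir/job.out" "$logdir/job.err"   # stand-ins for real Cylc job logs
ls -t "$logdir" | head                      # newest log files first
```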
- You’re starting from an idealized state, so make sure the corresponding lines in the [namelist:jules_spinup] sections are uncommented, and that the lines for starting from a dump file are commented.
- Edit ~/roses/u-as052/app/jules/rose-app.conf so that the [namelist:jules_output] section points to an output directory on the scratch disk, which you should create:
mkdir -p /work/scratch/YourUsername/config/outputs
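In rose-app.conf the output directory is set in the [namelist:jules_output] section; a sketch, assuming the standard JULES output_dir variable (the exact set of variables in the suite may differ):

```
[namelist:jules_output]
output_dir='/work/scratch/YourUsername/config/outputs'
```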
- This suite uses the ‘old soil ancillary file’, which is named here with the substring ‘_NEW2’. This is bit-compatible with the newest version of the file, which does not have this substring in its name. The ‘new soil ancillary file’ has the substring ‘_NEW2b’, which means that it is for ‘BCJ’ runs (Brooks & Corey parameters, Rawls & Brakensiek PTF, produced in Juelich); this corresponds to one of the two changes to suite u-as052 that we have saved in suite u-aw198. You can see this ‘file’ setting in ~/roses/u-as052/app/jules/rose-app.conf.
- The u-as052 suite is set by default to run as a Northern Hemisphere (NH) run, and it is also set by default not to use ‘land-only 1D compression’. You can change these by modifying the jules_model_grid namelist elements in ~/roses/u-as052/app/jules/rose-app.conf (commenting one block and uncommenting the other), so that the uncommented lines configure a global land-only-compression run.
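As an illustration only (the actual block in the suite contains further grid settings not shown in this sketch), the land-only 1D compression is switched on by the land_only flag in the jules_model_grid namelist:

```
[namelist:jules_model_grid]
land_only=.true.
```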
- Run rose suite-run at the command prompt in the suite directory (e.g. ~/roses/u-as052).
- This should start the Rose/Cylc GUI, and the suite will run a spin-up for 35 years over 1979 before starting the main run from 1979-2012. CEDA JASMIN limits jobs to 48 hours of CPU time, which might only cover 20-30 years of spin-up or so; after about 48 hours, if all went well, you can log back in to jasmin-cylc and restart from the last dump file. To restart, edit the [namelist:jules_spinup] sections so that the idealized-state lines are commented out and the ‘start from dump file’ lines are uncommented. If the last dump file was written after spin-up finished, make sure max_spinup_cycles is set to 0; otherwise, decrement max_spinup_cycles to the remaining number of spin-up cycles desired. You will also need to change the dump-file name in jules_initial to the last dump file in your output_dir. If the last dump file is from after spin-up finished, you will also have to adjust [namelist:jules_time] to the new starting time of the run. You might want to move the old spin-up files elsewhere, as they will be overwritten by new spin-up files. The instructions for this step are particularly important if you are increasing the number of spin-up years from 35 to 100. I haven’t tested the ‘restart’ command-line option in Rose/Cylc for this suite.
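A sketch of what the restart settings might look like in rose-app.conf, assuming the standard jules_spinup and jules_initial namelists; the dump-file path below is hypothetical and should be replaced by the last dump file in your output_dir:

```
[namelist:jules_spinup]
# Spin-up already complete in this example, so no further cycles:
max_spinup_cycles=0

[namelist:jules_initial]
dump_file=.true.
file='/work/scratch/YourUsername/config/outputs/u-as052.dump.19790101.0.nc'
```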
- When the suite is finished, after any necessary restarts, it might have taken up to two weeks. You can inspect the output NetCDF files in output_dir, for example with ncview. You might want to archive this scratch data somewhere; the ncas_generic group workspace is rather full right now, but you might have access to another group workspace. You can run some of the CDO scripts and Python plotting routines described and available on the animation and time-series pages of this website.
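Archiving can be as simple as a compressed tarball. A sketch using a stand-in directory so the commands can run anywhere; on JASMIN the directory would be /work/scratch/YourUsername/config/outputs and the destination a group workspace with free space:

```shell
# Stand-in for the scratch output directory.
outdir=outputs_demo
mkdir -p "$outdir"
touch "$outdir/sample.monthly.nc"            # stand-in for a real output file
# Bundle the directory into a compressed archive, then list its contents.
tar -czf outputs_archive.tar.gz "$outdir"
tar -tzf outputs_archive.tar.gz
```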