Adding layered-canopy output variables to existing JULES suite

Hi Natalie,

Can you try dropping Patrick’s JULES version into your suite:
JULES_FCM='fcm:jules.x_br/dev/patrickmcguire/r25125_vn5.2_layeredCanopyTwoStream'

And add the following new variables for output: ‘gpp_lyr’, ‘apar_lyr’, ‘cmpf_lyr’

n.b. also needs fsmc if you’re not already outputting that.

Patrick’s suite, in case you want to take a look, is here (on JASMIN):
~pmcguire/roses/u-da046j_UK

Note it uses can_rad_mod=4
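
If it helps, that switch lives in the vegetation namelist, so in the rose-app.conf it would look something like this (just a sketch; leave the other settings in that namelist as they are in your suite):

[namelist:jules_vegetation]
can_rad_mod=4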

Can discuss on Friday if you like.

Tristan

Hi Natalie:

I’d like to clarify that the JULES branch was branched from the JULES trunk at version 5.2.

In MOSRS, that branch is currently at revision 26782, and it should work as advertised at that revision. I will continue to modify the branch, which will increase the revision number, so if it stops working for you in the future, you might revert to revision 26782.
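
Depending on how your suite passes the source to fcm_make, one way to pin that revision is to append it to the URL, something like:

JULES_FCM='fcm:jules.x_br/dev/patrickmcguire/r25125_vn5.2_layeredCanopyTwoStream@26782'

(or set the suite’s separate JULES revision variable to 26782, if it has one).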

Patrick

Hi both,

Sure, I will give that a go tomorrow. I foresee that I will have some questions, but I’ll start by changing the ‘JULES_FCM’ setting as stated, making sure my revision number is set to ‘26782’ and my rose-app.conf file is compatible, and see how I go.

I’ll also have a look at Patrick’s suite - it might turn out to be easier if I check out a copy and set it up for SIFMIP. Will let you know when I know more.

Natalie

Hi Natalie,

Note that Patrick’s suite is a gridded run. It might make sense to grab some of the nml files with the parameters in them (e.g. pft_params), but I would retain the following from your existing suite:

drive.nml
timesteps.nml
model_grid.nml
ancillaries.nml
prescribed_data.nml
initial_conditions.nml

Hope that helps,

Tristan.

Hi both,
I’ve had some success with this today - my SIFMIP suite runs with Patrick’s version, or at least it appears to. It even seems to complete spin-up, but then I run into this FATAL ERROR:
WARNING: next time: model failed to spinup: continuing with main run
[FATAL ERROR] Map from land: input data must be on land points

Not entirely sure where to go from here, as I would imagine that it wouldn’t spin up if it were a namelist problem?

Natalie

Hi Natalie,

The fatal error isn’t the spin-up though - that’s just a warning, and you can probably get rid of it by extending the number of years that you spin up for.

The reason it’s not running is the “input data must be on land points” error. I’m not sure what’s going on, but my guess is that somewhere it’s trying to read a gridded data set, possibly because my list of nml files wasn’t exactly right. You could try looking to see whether a netCDF file is listed in any of the nml files you copied over from Patrick.
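
For example, something along these lines, run from wherever the copied namelist files live (the pattern is just a rough check for netCDF filenames):

grep -il '\.nc' *.nml

should list any of the namelist files that point at netCDF inputs.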

Cheers,

Tristan.

Hi ya,

Sorry, I wasn’t clear - the spin-up itself isn’t the problem, as it seems to complete, but I don’t know why it would if it isn’t happy doing the main run. I’m fairly sure I’ve set the correct input and model run grids and all the correct ancillary information/namelists etc., but it’s entirely likely I’ve missed something.

Natalie

Hi Natalie:

It might be easier to merge things the other way around. Just use your setup, but:

  1. replace the jules branch with mine (this assumes you’re already using jules version 5.2, though it looks like my metadata is from jules version 5.3)

  2. switch to can_rad_mod=4 (or maybe 6)

  3. add the layered output variables, something like this:

[namelist:jules_output]
dump_period=10
nprofiles=5 # (e.g. change this from 3 to 5)
output_dir='$OUTPUT_FOLDER'
run_id='$ID_STEM'

[namelist:jules_output_profile(4)]
file_period=-2
nvars=8
output_main_run=.true.
output_spinup=.false.
output_type=8*'S'
profile_name='sif_vars'
var='gpp', 'gpp_gb', 'fapar', 'gpp_lyr', 'apar_lyr', 'cmpf_lyr', 'lai', 'frac'
var_name=''

[namelist:jules_output_profile(5)]
file_period=-2
nvars=8
output_main_run=.true.
output_spinup=.false.
output_period=-1
output_type=8*'M'
profile_name='Monthly_sif_vars'
var='gpp', 'gpp_gb', 'fapar', 'gpp_lyr', 'apar_lyr', 'cmpf_lyr', 'lai', 'frac'
var_name=''
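
One thing to keep an eye on when editing these: nvars (and the count in output_type, e.g. 8*'S') needs to match the number of entries in var, and nprofiles in jules_output needs to match the number of jules_output_profile blocks, otherwise JULES will usually complain at initialisation.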

Patrick

Hi both,

The suite I have made to run Patrick’s version at the SIFMIP sites is here:
/home/users/ndouglas/roses/MultiSite-PMcGuire-SIFMIP

It’s currently flagging up the fatal error: map_from_land: Input data must be on land points. I just cannot see why, as it performs the spin-up.

Patrick - I’m going to see if I can get it to run for Alice Holt using chess drivers etc next week but it might be worth having a chat about what I’ve done so far. Would you have any time next week for this, please?

Thanks!

Natalie

Hi Natalie:
What suite is this one directly derived from? I wanted to see the differences, but it’s not under version control and I don’t know which changes you made to get it there. Is it derived from a JULES 5.2 suite? If not, you might need to adapt the suite.

Patrick

Hi Patrick,
My suite is a copy of…
description=Copy of u-ch981/trunk@207312

Natalie

Hi Natalie:

I doubt you made all those changes to ~ndouglas/roses/MultiSite-PMcGuire-SIFMIP without having some intermediary version between it and u-ch981/trunk@207312.

Do you have a working intermediary version somewhere (i.e. one that just uses a different branch of JULES)?

I made a copy of u-ch981/trunk@207312 as u-da664.

And I changed the jules branch to the L2SM capable one, and it starts to run, but it then fails because it is a JULES 4.9 suite instead of a JULES 5.2 suite:
[FATAL ERROR] init_lsm: Error opening namelist file jules_lsm_switch.nml (IOSTAT=29 IOMSG=file

I have also made a copy of ~ndouglas/roses/MultiSite-PMcGuire-SIFMIP as instead ~pmcguire/roses/MultiSite-PMcGuire-SIFMIP, and I get the same error that you get with your copy.

I am guessing that you tried to modify ~ndouglas/roses/MultiSite-vn6.2-SIFMIP (which is a JULES 6.2 trunk suite?) to be ~ndouglas/roses/MultiSite-PMcGuire-SIFMIP (which should be a JULES 5.2 suite, using my branch). But when I do a recursive diff:
diff -r ~ndouglas/roses/MultiSite-vn6.2-SIFMIP/ ~ndouglas/roses/MultiSite-PMcGuire-SIFMIP/

there are a lot of changes, so I don’t really know where you got ~ndouglas/roses/MultiSite-PMcGuire-SIFMIP/ directly from.

I tried to comment out the two new output profiles, but I still get the same error.
I then commented out all 4 output profiles, and it seemed to run for a while without crashing.
Then, I tried to run with only the 1st output profile, and it still crashes with the same error.
Then, I tried to run with only the last 2 output profiles (the new ones), and it seemed to run for a while without crashing!

So this probably means that one (or more) of the variables in the first two output profiles is requested by the suite but isn’t available in JULES 5.2?

You could just try to figure out manually (by a process of elimination) which of the variables is causing problems.

It might be easier for you to modify u-ch981/trunk@207312, which is a JULES 4.9 trunk suite, into a JULES 5.2 trunk suite, and then modify that into a JULES 5.2 branch suite (using my 5.2 branch of JULES).

It’s probably not that hard to change things from JULES 4.9 trunk to JULES 5.2 trunk, whereas going backwards from JULES 6.2 might be more difficult.

You could possibly make the required changes from 4.9 to 5.2 manually, but it might be better to use the JULES upgrade macros in rosie edit, if they are properly working.
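
If you go the upgrade-macro route, the command-line version would be something like this, run in the JULES app of a working copy of your suite (the app name, suite path and metadata path here are illustrative; the rose-meta comes with a checked-out copy of the JULES source):

cd ~/roses/<your-suite>/app/jules
rose app-upgrade --meta-path=/path/to/jules-vn5.2/rose-meta vn5.2

It should chain the intermediate upgrade macros (4.9 to 5.0 to 5.1 to 5.2) to get to the target version.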

Patrick

Hi Patrick,
I see! Yes, I modified my suite ~ndouglas/roses/MultiSite-vn6.2-SIFMIP to get ~ndouglas/roses/MultiSite-PMcGuire-SIFMIP, but the original suite I copied from the repository was u-ch981. I chose it because it is a suite that uses ASCII files, but I have since made a lot of changes to create a SIFMIP suite.
The changes between MultiSite-vn6.2-SIFMIP and MultiSite-PMcGuire-SIFMIP are mostly in the rose-app.conf. I copied many of the namelists from your ~pmcguire/roses/u-da046j_UK, keeping the namelists Tristan advised above (and some others) from my own suite.
I did not expect the output profiles to be the problem, given the ‘[FATAL ERROR] Map from land: input data must be on land points’ message, but this might explain why the spin-up appears to complete.
This is something I will try next … when JASMIN is back up.
Thanks for your help and I will keep you posted.
Natalie

Hi Natalie:
JASMIN may still be partly working for the moment. The cylc1 node and sci2 are both working, but sci3 is not. And MOSRS access on cylc1 is working, so rose/cylc suites can still run.

Good luck with getting the output profiles working! Hopefully that was the cause of the problem.
Patrick

Hi again Natalie:
I just ran the ~pmcguire/roses/MultiSite-PMcGuire-SIFMIP suite again on JASMIN. This time, I only ran with output profile number 4 (monthly layered-canopy variables), since I thought this would run faster than running with both output profiles 3 & 4. I skipped output profiles 1 & 2, because they caused problems before. This time, I was patient and let jules run to completion, which only took a few minutes. The 3 jules jobs ran just fine!!! The make_plots app crashed, but that’s a different problem.

I should note, especially if you’re going to be running lots of these jobs or running multiple jules apps per job in parallel, that you’re currently running jules interactively in the background on the cylc1 VM. The JASMIN rules are that we’re not really supposed to run jobs interactively on the VMs, either for a long time or with parallel processing; it’s advised that the LOTUS/SLURM batch queues be used instead. You could submit a bunch of these jobs to the short-serial-4hr queue, for example. It does take some time to queue, but this way we don’t overwhelm the VMs. It’s OK to do some very limited testing interactively in the background, but preferably without parallel processing.
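
For what it’s worth, switching a task over to batch submission in the suite.rc usually looks something like this (the task name and time limit are illustrative, and this assumes your suite is still on cylc 7; short-serial-4hr and short4hr are the partition and account mentioned above):

    [[jules]]
        [[[job]]]
            batch system = slurm
            execution time limit = PT4H
        [[[directives]]]
            --partition = short-serial-4hr
            --account = short4hr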

Patrick

Hi Natalie:
I just spoke with Tristan. He wants me to ask you to output the canopy-layered variable ej_lyr for the electron transport variable j, the unlayered variable tstar, and the gridbox-average variable tstar_gb in your suite. The first two variables are indexed by PFT (plant functional type).
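
In the output profile, that would be something like the following (profile number as in my earlier example; the main thing is to extend var and bump nvars and the output_type count to match):

[namelist:jules_output_profile(4)]
nvars=11
output_type=11*'S'
var='gpp', 'gpp_gb', 'fapar', 'gpp_lyr', 'apar_lyr', 'cmpf_lyr', 'lai', 'frac', 'ej_lyr', 'tstar', 'tstar_gb'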

You can find the new JULES source code that outputs ej_lyr in the latest revision of the branch you are currently using:
JULES_FCM='fcm:jules.x_br/dev/patrickmcguire/r25125_vn5.2_layeredCanopyTwoStream'.
It looks like the latest revision is 27004, if you need that number.

I have a suite that does this output, which is: ~pmcguire/roses/u-da046k_UK.
And there is example output for the year 1860 for the UK region in ~pmcguire/for_tristan/. The original copy of that output data is in /work/scratch-pw2/pmcguire/u-da046k_UK.

You can view that example output data with:
ncview ~pmcguire/for_tristan/JULES-GL7.0.vn5.2_sellers.CRUNCEPv7SLURM.S2.sif_vars.1860.nc
and
ncview ~pmcguire/for_tristan/JULES-GL7.0.vn5.2_sellers.CRUNCEPv7SLURM.S2.Monthly_sif_vars.1860.nc

The next step for me to do at some point is to add wlite_lyr as an output variable.
Patrick

Hi Patrick,
Thanks for the advice. It turns out that it was the output profiles that were the problem. Do you know why the FATAL ERROR refers to land points in this case?
I will look into changing to the LOTUS/SLURM batch modes and into outputting the new variables from the new version.
Thanks!
Natalie

Hi Natalie:
No, I don’t know why the output profiles’ setup caused a problem with land points. I would guess that, to compute one of the now-disabled output profiles, JULES needed something that required the input data to be on land points. It would need to be isolated to one particular variable and then debugged.
Patrick

Hi Patrick,
JULES, using revision number 27124, doesn’t seem to recognise ‘wlite_lay’ as an output variable. I also tried ‘wlite_pft_lay’, ‘wlite_lyr’, ‘wlight_lyr’ and other variations with no luck. Am I missing something? I’m also having trouble switching to batch = slurm with the following directives:

--time = 4:00:00
--ntasks = 8
--partition = short-serial-4hr
--account = short4hr
--constraint = 'amd'

It doesn’t like ‘amd’, and when I switch to ‘intel’ the submit fails completely. Any ideas?
Many thanks,
Natalie

Hi Natalie:
There aren’t any intel nodes on short-serial-4hr.

I spelled it ‘wlight_lyr’, to try to use the proper spelling for ‘light’.

Patrick