Hi Yuan:
Yes, usually the suite only runs about 10 sites at a time (or maybe it’s more than 10? I don’t remember now), and then it moves on to the next 10 sites, as long as enough cores are available on SLURM.
If the rose/cylc jobs for some of the sites fail, you can retrigger the failed jobs in the GUI on the cylc1 node. You can even change the suite, reload it, and then retrigger the failed jobs, or retrigger jobs that already succeeded after changing and reloading the suite.
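If you'd rather use the command line than the GUI, the reload and retrigger steps can be done with cylc commands like these (a sketch only; the task and cycle point names below are made up, so list the real ones first):

```shell
# Reload the suite definition after editing it:
cylc reload u-al752

# List the suite's tasks and their states, to find the real
# task.cycle names (the name below is hypothetical):
cylc dump u-al752

# Retrigger a failed task:
cylc trigger u-al752 jules_site1.1
```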
For the u-al752 suite, the order in which the jules subset of jobs runs doesn’t matter.
But the build/make job needs to run before the jules jobs, and make_plots needs to run after all of the jules jobs have finished successfully.
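In cylc terms, that ordering is just the dependency graph in the suite’s suite.rc. A rough sketch of the idea (the actual task and family names in u-al752 will differ):

```
[scheduling]
    [[dependencies]]
        graph = """
            fcm_make => JULES_SITES                 # build before the jules jobs
            JULES_SITES:succeed-all => make_plots   # plot only after all sites succeed
        """
```

Here JULES_SITES stands for the family of per-site jules tasks; the :succeed-all trigger makes make_plots wait until every member has finished successfully.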
As far as the plots go, the sample plot that we have there covers a different set of sites than your plot does, so the visual comparison only works for the first few sites.
Also, it looks like both the sample plot and your plot show only the daily JULES model curves (together with the monthly averages of those daily curves). You might try changing the suite so that it overlays the daily (and possibly monthly) FLUXNET observations on the plots. I don’t remember for sure whether this is possible, but you might only need to rerun the make_plots part of the suite after making that change.
You can create a copy of the u-al752 suite with rosie go, which will give you a new suite with a new ID number. If you want to start the spinup from other dump files (or from idealized parameters instead of dump files), you can modify the new suite, use fcm to commit your changes to MOSRS, and then rerun the suite.
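On the command line, the copy-and-rerun steps look roughly like this (a sketch; u-XXXXX is a placeholder for whatever new ID rosie assigns, and I’m assuming the usual ~/roses checkout location):

```shell
# Copy the suite; rosie assigns the copy a new suite ID:
rosie copy u-al752

# Edit the new suite (u-XXXXX stands in for the new ID):
cd ~/roses/u-XXXXX
# ... edit rose-suite.conf, app/ configs, suite.rc, etc. ...

# Commit your changes back to MOSRS:
fcm commit

# Run the modified suite:
rose suite-run
```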
It’s normally fine to use other people’s suites, but I think you should contact the suite’s owner and get their agreement before you start doing serious work with it.
Some of the suites have been vetted to some degree, but often they are not checked over very thoroughly.
You can keep using JASMIN to run JULES, and I can help you best if you do your JULES work there. If you need extra space for your output, the jules group workspace disk is probably not the best place to put lots of it. Your home directory is a good place, and it is backed up often, but I think it has a 100GB limit per user. There’s a lot of space on the scratch disks, but those aren’t backed up, and files there are deleted after 30 days anyway. If you need a lot of space on JASMIN, maybe there is some other group workspace we can get you access to? Perhaps U. Manchester has a group workspace there, or some other group workspace on JASMIN is available?
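A quick way to see how much space you’re using, wherever you decide to keep the output (standard Unix tools, nothing JASMIN-specific):

```shell
# Total size of everything under your home directory:
du -sh "$HOME"

# Free space on the filesystem that holds it
# (run the same on a group workspace or scratch path to compare):
df -h "$HOME"
```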
Patrick