I am running the regional nesting suite on Monsoon and have set it up to output timeseries information at a point (so I can better compare to observations). It runs fine and outputs the timeseries to a fields file, but the conversion to pp fails, so I have to archive the fields file rather than a pp file. This means that I lose some of the metadata, which makes analysis tricky, as I hope to run the nesting suite for a whole year.
If I run mule-convpp on one of the fields files then I get the error "skipping field validation due to irregular lbcode", and the resulting pp file includes only the first hour of the first timeseries variable.
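In case it helps diagnose, the lbcode of each field can be listed with the mule Python library (a minimal sketch; assumes mule is available in the Monsoon environment, and the file name is illustrative):

import mule

# Load the fields file and print the STASH code (lbuser4) and grid
# code (lbcode) of every field; the timeseries fields carry the
# special lbcode values that the converter's validation rejects.
ff = mule.FieldsFile.from_file("umnsaa_example000")
for field in ff.fields:
    print(field.lbuser4, field.lbcode)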
My suite id is u-dg348 and the file where I request the time series is here: /home/d03/nahav/roses/u-dg348/app/um/opt/rose-app-stashpack6.conf
Any ideas on a way round this are most welcome!
Natalie
Please point to a file that doesn't convert.
Grenville
Sorry - here’s an example:
/home/d03/nahav/cylc-run/u-dg348/share/cycle/20230612T1800Z/UK/ukv_ITE/RAL3P2_MURK_MORUSES/um/umnsaa_roissy000
Hi Natalie
um-convpp seemed to work:
$UMDIR/vn11.2/xc40/utilities/um-convpp umnsaa_roissy000 ~/umnsaa_roissy000.pp
Any good?
Thanks Grenville.
When you run the suite, it seems to do the conversion and create a pp file, but when you look in it using xconv it only has the first hour's worth of timesteps for the first timeseries variable.
I had to dig quite deep in the log files to find the error message "skipping field validation due to irregular lbcode".
Also, the archiving to MASS doesn't like this converted pp file.
If your converted pp file does have all the timesteps and variables in it then that is great! I guess I will just need to make sure I use the same version of um-convpp as you.
Natalie
Well, xconv shows all time steps, and the variables seem OK (when converted to netCDF to inspect).
Well, that is good news! Let me see if I can point the nesting suite at that version of um-convpp.
…
I now have pp files which have all the variables and time steps in - thank you!
Unfortunately, iris won't load these files. I get "ValueError: Unknown IB value for extra data: 0".
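For reference, the failure reproduces with a plain iris load (a minimal sketch; the file name is illustrative):

import iris

try:
    cubes = iris.load("umnsaa_example000.pp")
except ValueError as err:
    # Raised while iris parses the extra-data vectors attached
    # to the timeseries pp fields
    print(err)  # Unknown IB value for extra data: 0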
If I convert the pp to netCDF using /projects/um1/Xconv/xconv1.94 then iris will load the netCDF fine. When I call a convsh script to convert the pp files from um-convpp, I get the following error:

Error can only extract data with dimensions 1 4
Requested dimensions are 1 2
Error in writefile
    while executing
"writefile $outformat $outfile $fieldlist"
    ("foreach" body line 10)
    invoked from within
"foreach infile $argv {
    # Replace input file extension with .nc to get output filename
    set outfile [file tail [file rootname $infile].nc]
    # Read…"
    (file "/home/d03/nahav/roses/u-dg348/bin/pp_to_nc_timeseries.tcl" line 22)
I am using /projects/um1/Xconv/convsh/. Am I just using the wrong version, as manually converting in xconv seems fine?
Thanks for your help in advance!
Natalie
I have found a version of convsh that now works.
Thanks for your help with this.
Natalie
Glad you have got this working. Between us, we should alert the iris team that it doesn't appear to handle lbcode properly.
Happy to contact the iris team and report this. Do you have an email address for them?
Natalie
Hi there, reopening this as I have another couple of related issues.
1. I have now added the timeseries functionality to the 300m model in my nest, which has a timestep of 12s. The times in the output timeseries are only given to the nearest minute (in units of days since the start of the run). This means that several points have the same time. Any ideas how to get the UM to output a more user-friendly time? (I didn't have this trouble before in my testing, as I was using the UKV, which has a timestep of a minute.)
Example file can be found here: /home/d03/nahav/cylc-run/u-dn104/share/cycle/20230717T0000Z/UK/wmv_CCIv2/RAL3P2_DSMURK_DSSOIL_MORUSES/um/umnsaa_chil_300000.nc
2. I convert the timeseries files from .pp to .nc before archiving them. In the 300m timeseries files, the nc files seem to have a couple of rows of data missing at the end of the timeseries (the array has the correct dimensions, but the last two times are filled with a very large dummy value). I have checked, and the associated pp files do not have this problem. The skip seems to happen at the end of the first hour (0.04167 in the time units in the file). Any idea what is going on here?
Example files here: /home/d03/nahav/cylc-run/u-dn104/share/cycle/20230717T0000Z/UK/wmv_CCIv2/RAL3P2_DSMURK_DSSOIL_MORUSES/um/umnsaa_chil_300000.nc and umnsaa_chil_300000.pp
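The dummy points can be spotted with something like this (a sketch; the variable name is illustrative, and it checks both the UM real missing-data indicator, -2**30, and any implausibly large value):

import iris
import numpy as np

cube = iris.load_cube("umnsaa_chil_300000.nc", "air_temperature")  # illustrative name
# Flag points holding the missing-data indicator or a huge dummy
# value rather than real data
bad = np.isclose(cube.data, -2.0**30) | (np.abs(cube.data) > 1e20)
print("dummy values at time indices:", np.unique(np.where(bad)[0]))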
Thanks for any thoughts in advance!
Natalie
The second problem is solved by outputting timeseries files more frequently (hourly instead of 6-hourly), but I still can't find a solution within the UM to the first problem (for now, I have just created a new time coordinate in my analysis code; see the sketch below).
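Roughly, the workaround looks like this (a sketch: it rebuilds the time coordinate from the first output time and the known 12s timestep, assuming the file loads as a single cube with time as its leading dimension):

import iris
import numpy as np

cube = iris.load_cube("umnsaa_chil_300000.nc")
old_t = cube.coord("time")

# Rebuild times as first time + n * 12s, expressed in the
# coordinate's own units (days since the start of the run)
step_days = 12.0 / 86400.0
new_points = old_t.points[0] + step_days * np.arange(old_t.points.size)

new_t = iris.coords.DimCoord(new_points, standard_name="time", units=old_t.units)
cube.remove_coord("time")
cube.add_dim_coord(new_t, 0)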
Any ideas on this gratefully received!
Natalie
Natalie
In the Rose GUI, right-click on Model Input and Output and select Help for an explanation of file naming. You can name the files by timestep, for example; that would ensure separate files.
Grenville
Hi Grenville,
The file naming is fine for what I need, and I don't really want a file for every timestep for the 300m/100m models, which have timesteps of 12s and 4s. It is the times within the file, which are only given to the nearest minute, that are the problem: several timesteps end up with the same time in the file, which iris objects to!
Best wishes,
Natalie
Ah, sorry, misread/misunderstood. This now rings a bell from CASCADE all those years ago. Needs more thought.
Grenville
Yep - I haven’t managed to find an obvious answer.
Who worked on that? Steve Woolnough and Chris Holloway?
Natalie
Yes, Steve and Chris, but maybe a MO person would be better - Humphrey Lean, Peter Clarke, Kirsty Hanley?