Hi, I’m trying to get suite u-da362 running with some altered initial conditions. I have been trying to do this by modifying the start dump; however, even just reading the start dump in with Mule and saving it out again (which requires removing the v-wind fields for the save to succeed), then replacing the start dump with that saved file, makes the model fail. When I load the start dump with Mule, I get this error:
Field validation failures:
  Fields (80,81,82,83,84, … 81 total fields)
    Field grid latitudes inconsistent (STASH grid: 19)
      File            : 100 points from 0.0, spacing 10000.0
      Field (Expected): 101 points from 0.0, spacing 10000.0
      Field (Lookup)  : 100 points from 0.0, spacing 10000.0
warnings.warn(msg)
The 81 fields are V at all 80 model levels (STASH code 3), then surface meridional current (STASH code 29). The start dump is that produced by the UM in reconfiguration, and the model runs fine with that start dump. Therefore I presumed the error was with Mule reading in the data, rather than with the data itself.
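In case it’s useful, this is roughly how I picked out the flagged fields (a quick sketch only, looping over the lookup headers lbuser4/lblev/lbrow; the validation warnings on load can be ignored for this purpose):

import mule

startdump = "/work/n02/n02/dship/cylc-run/u-da362/share/data/history/da362_10km.astart"
dump = mule.DumpFile.from_file(startdump)

# print the STASH code, level and row count of every field flagged above
for i, field in enumerate(dump.fields):
    if field.lbrel not in (2, 3):
        continue  # skip lookup entries without a recognised header release number
    if field.lbuser4 in (3, 29):  # 3 = v-wind, 29 = surface meridional current
        print(i, "stash:", field.lbuser4, "level:", field.lblev, "rows:", field.lbrow)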
For further info: I’m trying to modify the .astart file directly after reconfiguration because trying to add my fields during reconfiguration gave a different error. In that case, task UM_recon_10km fails with:
???
???!!!???!!!???!!!???!!!???!!! ERROR ???!!!???!!!???!!!???!!!???!!!
? Error code: 30
? Error from routine: RCF_INTERPOLATE
? Error message: Conversion from Logical to Integer: Illegal value
? Error from processor: 0
? Error number: 4
???
Thanks for the swift reply. The code I’m using to read the data in and output again is:
import mule

startdump = "/work/n02/n02/dship/cylc-run/u-da362/share/data/history/da362_10km.astart"
data = mule.DumpFile.from_file(startdump)

# copy the headers into a new dump, then save an otherwise unprocessed file
# for testing purposes
data_prune = data.copy()
for field in data.fields:
    # ignore fields 3, 29 which apparently have too many points in y-dirn?
    if field.lbrel in (2, 3) and field.lbuser4 not in (3, 29):
        data_prune.fields.append(field)
data_prune.to_file(startdump + "_MULE_TEST")
where startdump is the file /work/n02/n02/dship/cylc-run/u-da362/share/data/history/da362_10km.astart, produced by the task UM_recon_10km. The model runs fine with this unaltered startdump; however, I want to set initial conditions that vary in the horizontal (which is not possible with “l_init_idealised” set to .true., as far as I can work out).
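As a sanity check (sketch only, reusing the file names from the script above), I also re-read the written file to confirm the v fields really were dropped:

import mule

startdump = "/work/n02/n02/dship/cylc-run/u-da362/share/data/history/da362_10km.astart"

# re-read both files and compare the number of populated lookup entries
orig = mule.DumpFile.from_file(startdump)
pruned = mule.DumpFile.from_file(startdump + "_MULE_TEST")
print("original fields:", sum(1 for f in orig.fields if f.lbrel in (2, 3)))
print("pruned fields  :", sum(1 for f in pruned.fields if f.lbrel in (2, 3)))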
The dump looks somewhat odd (a grid spacing of 5000 and constant data values), but I’m guessing this is because it’s idealised?
It looks as if the dump is failing MULE validation as it isn’t set up to process idealised dumps.
Is the dump on an Arakawa A grid rather than a C one?
MULE includes an option to read in dumps created from ECMWF GRIB so I tried that, but that also failed.
I might be able to hack things about so that the validation isn’t automatically run when the dump is written.
Thanks for looking into this for me. The grid spacing and constant data values are indeed because it’s an idealised run (on a flat Cartesian grid with spacing 5000m).
Ah, that would explain it! I think it should still be on a C grid though (A grid means co-located, right?), as the model output is definitely staggered in the way I was expecting. I wonder if the “extra” point is missing because the boundaries are cyclic in both horizontal directions, and therefore the “extra” point isn’t actually needed?
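If it helps, here is a quick check I could run (a sketch only, relying on the lookup headers lbrow/lbuser4): compare the row counts of a u field and a v field in the dump. Mule’s validation expected the v fields to have one extra row (101 vs 100), so if the cyclic-boundary explanation is right, both should report the same number here.

import mule

startdump = "/work/n02/n02/dship/cylc-run/u-da362/share/data/history/da362_10km.astart"
dump = mule.DumpFile.from_file(startdump)

# first u-wind (STASH 2) and v-wind (STASH 3) fields in the dump
u = next(f for f in dump.fields if f.lbrel in (2, 3) and f.lbuser4 == 2)
v = next(f for f in dump.fields if f.lbrel in (2, 3) and f.lbuser4 == 3)

# with cyclic N-S boundaries these should match; otherwise v has an extra row
print("u rows:", u.lbrow, " v rows:", v.lbrow)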
Let me know if there’s anything else I can do to help, or if there’s another utility I could use to create the initial fields (I tried Xancil to create ancils and that also had issues).
I’ve investigated further, and the MULE validation appears not to be set up for N-S cyclic boundaries.
However it’s possible to skip the part of the code which does the validation before writing out a file:
startdump="/work/n02/n02/simon/test/da362_10km.astart"
data = mule.DumpFile.from_file(startdump)
# save unprocessed file for testing purposes skipping validation
output_file=open(startdump+"_MULE_TEST", 'wb')
data._write_to_file(output_file)
This has the same functionality as to_file but without the validation step.
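Another possibility (just a sketch, untested on your dump; it assumes to_file calls the object’s own validate method, which can be overridden on the instance) is to stub out the validation so the normal to_file path can still be used:

import mule

startdump = "/work/n02/n02/simon/test/da362_10km.astart"
data = mule.DumpFile.from_file(startdump)

# make the instance's validate method a no-op so that to_file skips the
# grid checks but still writes the file in the normal way
data.validate = lambda *args, **kwargs: None
data.to_file(startdump + "_MULE_TEST")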
Let me know if this helps,
Thanks – now the model reads in the startdump OK, including the startdump with modified theta. However, the modified theta seems to disappear in the first timestep – is the idealised UM immediately over-writing my latitudinally-varying theta for some reason? I thought that the horizontally-homogeneous theta (and also u, v and mixing ratio) were enforced during idealised reconfiguration only – have I misunderstood this?
Basically what I’m trying to do is set up an initial state that is in thermal wind balance with a constant vertical wind shear du/dz throughout the troposphere. I can use the idealised initialisation to set du/dz = const, but this implies a d(theta)/dy (I’m aware that this will cause a jump at the N/S boundary, but it doesn’t matter on the timescales I’m interested in). In addition I want to force a line of convection to develop by adding a warm strip aligned along the x-direction throughout the depth of the boundary layer; my initial conditions in cylc-run/u-da362/share/data/history/da362_10km.astart_TWB_BL_warm_strip show this desired initial theta field. Have you any idea how I can get the model to recognise it?
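For reference, this is roughly how I’m constructing the modified theta (a rough sketch only; f, theta0, the shear and the warm-strip amplitude/width below are illustrative placeholders, not the suite’s actual settings). On an f-plane the thermal wind relation f du/dz = -(g/theta0) dtheta/dy means a constant shear implies a constant dtheta/dy, and I then add the boundary-layer warm strip on top:

import numpy as np

# All numbers here are illustrative placeholders, not the suite's settings.
f = 1.0e-4        # Coriolis parameter [s-1]
theta0 = 300.0    # reference potential temperature [K]
g = 9.81          # gravity [m s-2]
dudz = 3.0e-3     # constant vertical wind shear du/dz [s-1]
dy = 5000.0       # grid spacing [m]
ny = 100          # number of rows (y points)

# Thermal wind balance on an f-plane: f du/dz = -(g/theta0) dtheta/dy,
# so a constant shear implies a constant meridional theta gradient.
dtheta_dy = -f * theta0 / g * dudz

y = np.arange(ny) * dy
theta_profile = theta0 + dtheta_dy * (y - y.mean())   # theta(y), same on each level

# Warm strip along x through the boundary layer: a Gaussian bump in y,
# to be added only on the lowest model levels.
strip = 2.0 * np.exp(-((y - y.mean()) / (10.0 * dy)) ** 2)
theta_bl_profile = theta_profile + strip

# Each level's 2-D field is then the relevant profile broadcast along x and
# written back into the dump's theta fields (e.g. with mule.ArrayDataProvider
# and field.set_data_provider).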
I’ve had a look at your config, and I think the option to use the data from the start dump should appear in the GUI under the Initial State window. Looking at the UMDP (https://code.metoffice.gov.uk/doc/um/latest/papers/umdp_036.pdf) it appears that tprofile_number=10 is the namelist variable required; however, this is deactivated in your conf file. Inferring from the rose-meta.conf file, the only way to activate tprofile_number is to have run_dyntest:problem_number set to 2 or 5 in dynamics testing, which corresponds to running a dynamical core or an idealised planet, which I’m guessing you’re not after. However, there are a number of options available if you turn on l_init_idealised; are these of use?
I need to run a cyclic configuration, so changing run_dyntest:problem_number to 2 or 5 isn’t possible.
I’ve been running with l_init_idealised set to True to generate the reconfigured startdump in the first place; I’ve then tried running the model with either l_init_idealised = true or false and either way the model overwrites my initial fields, despite me not running recon again. I can see in the model output that the correct initial condition is read in, because the fields at time 0730 are identical to those set manually in the startdump. However these fields disappear by the next output time (0733). I don’t understand how/why the idealised model is doing this?
I’ve also tried specifying the theta, exner, and mixing ratio initial fields via ancillary files in the reconfiguration, but this generated other errors (hence trying to modify the start dump directly).
OK, you have obviously explored the various options. Unfortunately, all I know about the idealised model is what I’ve read in the documentation and inferred from the GUI, so I’m not really in a position to advise whether what you are seeing is expected behaviour, a configuration issue or a bug. Have you tried contacting the code owners at the Met Office? These are listed as Rachel Stratton and Carol Halliwell (who is part of MetOffice@Reading).