Add example simulations #39
I will update the example simulation with new namoptions for flow rate forcing and check the neutral simulations. @samoliverowens or @ivosuter can you take care of a full EB example? Further examples can be added later on.
I opened up a new branch for the example simulations: https://github.com/uDALES/u-dales/tree/bss/example-simulations.
I've just uploaded a set of example simulations differing by geometry and whether the energy balance is on. The rest of the parameters don't correspond to any particular setup yet, and are meant to be changed.
Great! With the cleaned-up namoptions it will be easier to set up the different options nicely. There are only a few parameters we need to set for every simulation; the others depend on the setup. Here is a minimal list of sections, for example for a neutral simulation with volume flow-rate forcing: &RUN, &OUTPUT, &DOMAIN, &BC (these are needed for the bottom subroutine!), &WALLS, &PHYSICS, &DYNAMICS, &NAMSUBGRID.
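As a sketch of what such a minimal namoptions file could look like (the section names are from the list above; the individual parameter names and values below are illustrative assumptions, not a verified setup):

```fortran
&RUN
iexpnr   = 001        ! experiment number (illustrative)
runtime  = 1000.      ! simulation time in seconds (illustrative)
/
&DOMAIN
itot = 64             ! grid cells in x (illustrative)
jtot = 64
ktot = 64
/
&PHYSICS
luvolflowr = .true.   ! volume flow-rate forcing (assumed switch name)
uflowrate  = 1.5      ! target bulk velocity (illustrative)
/
```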
@samoliverowens I moved your documentation on the boundary conditions to this branch, because I think it is a great start for guidelines on how to set up simulation parameters. I also added the neutral simulation described above as example setup 503, and a file
@samoliverowens I plotted the layouts and it seems that there is an error in example 300: one floor facet overlaps with the last block (41 56 57 64 1 16):
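This kind of overlap could also be caught programmatically. A minimal sketch, assuming each block/facet is given by inclusive index extents (il iu jl ju kl ku) as in the blocks file; the helper names are hypothetical, not part of uDALES:

```python
def ranges_overlap(lo1, hi1, lo2, hi2):
    """True if the inclusive index ranges [lo1, hi1] and [lo2, hi2] intersect."""
    return lo1 <= hi2 and lo2 <= hi1

def blocks_overlap(a, b):
    """a, b = (il, iu, jl, ju, kl, ku); True if the two boxes share any cell."""
    return all(
        ranges_overlap(a[2 * d], a[2 * d + 1], b[2 * d], b[2 * d + 1])
        for d in range(3)
    )

# the floor facet reported above; any block with intersecting extents overlaps it
floor_facet = (41, 56, 57, 64, 1, 16)
```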
My suggestion for example setups:
What do you think about that? We can of course always add more features to it, this is just a start to get the geometry and basic switches set up.
I agree that's not what we want, though I just checked and it matches what the old pre-processing routines produce, so it looks like a problem with how
That sounds good, and with 5. we should have temperature & buoyancy on too.
I see the benefit of having a limited number of simulations of increasing complexity, as opposed to a comprehensive set that systematically shows the user all options (this would result in a lot of simulations, as there are a lot of variables - see below). I think to do it like this we need to create some kind of table/documentation for these simulations where it is very clear to the user what is changing across them. The main variables being:
Other variables being:
The above set of 5 simulations can provide a good coverage of these main variables. I would suggest we do not number them like this if the numbering does not have some systematic meaning. We could come up with some simple system (e.g. 0xx - neutral, 1xx non-neutral, 2xx energy balance, 5xx driver sims, 8xx trees etc.). To add some specificity and my recommendations to the above:
Vary domain sizes and fielddump/statsdump outputs across all of these.
This does not cover all the variables discussed, and ideally I think these examples should be set up systematically by varying the code functionalities around one simple base case. But this would require a large number of example simulations that may also be messy, so I think something like this sounds good. Thoughts on this? I should be able to set all these up pretty quickly. Would we want to remove what is currently in bss/example-simulations and only have the above?
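For what it's worth, the hundreds-digit scheme floated above is simple enough to encode and check mechanically. A sketch of the proposed mapping (nothing here is implemented; it just mirrors the suggestion):

```python
# proposed categories keyed by the hundreds digit of the experiment number
CATEGORIES = {
    0: "neutral",
    1: "non-neutral",
    2: "energy balance",
    5: "driver simulations",
    8: "trees",
}

def sim_category(expnr: int) -> str:
    """Map an experiment number such as 102 or 502 to its proposed category."""
    return CATEGORIES.get(expnr // 100, "unknown")
```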
I think this would be really good. 👍 @tomgrylls Are all of the above a superset of the current tests? If this is the case, we could simply run tests based on these namelists, thus removing duplication and making testing easier to maintain. Any thoughts?
The tests would be more thorough based on the above - but this would require running quite a few more separate simulations each time in testing? We could base the tests on a limited number of these, e.g. It depends slightly on the size of the domains/resolutions that we set in the above. If these are relatively small for the chosen example simulations then all that would be necessary is to use the same input files but with
@bss116 @samoliverowens I will make a start on updating the existing example simulation namoptions files in bss/example-simulations to work with the namoptions restructuring. And, as I do this, try and produce the above set of simulations. Let me know of any comments/other things you would like to be done.
I think all of the above sounds really good; yes, feel free to remove the current example simulations, they were just for initial ideas. I wouldn't worry about the naming of the simulations too much, I'd simply call them 001 -- 006. We will write a separate document (maybe part of the setting-up simulations?) where we give an overview of the setup and required parameters/switches. Personally I wouldn't include a flat simulation in the examples, as this is not a common case for using uDALES (but of course keeping the functionality of it, as it may be used as an additional control case). For the domain size, I suggest keeping at least one of them relatively small (maybe the neutral one with cubes) such that this can be used as a quick example/debug simulation.
What if we keep the size unchanged and instead run for a very short time? Would that be enough to capture some meaningful data that we can use to benchmark one uDALES version against another? If that is the case, then we can simply patch the namelist programmatically before running uDALES. We can do that for the domain size of course, but if I remember correctly there is a bit more to it than just changing a couple of vars in the namelist!?
@dmey how many cores do we run on the CI? We are never going to have a fully developed simulation as a test unless we use warmstart files (which could be a good idea, to test the model in the state we are interested in - continuous, fully turbulent). If we are not going to do this then, seeing as we are unlikely to let it fully develop, we may as well run simulations for short periods, e.g. 5 seconds. Even with these short run times we will still want fairly small numbers of cells, ~1000000 or fewer, as computation time rises rapidly, especially if we are limited to a few cores. I have made a start on the above set of example simulations. I have made For
2 but just to clarify in case this is not clear: the current tests (integration and regression tests) are not designed to tell us if the model is sound, but simply to check whether we have introduced a change that may impact the simulation results from one version/branch to another. If you introduce a change in a new commit, we check against master that the new commit has not changed the results -- obviously, given the large number of switches in the namelist, one would need to run the model for all permutations and check all output quantities from uDALES to be sure. Here we take a more pragmatic approach and only run the model for a set of namelists and check for outputs we deem important. I think it's fine to run short simulations if we are looking for equality between two uDALES versions (assuming that the switches for the extra parametrizations are active for that short time period). I do not think it is feasible in the current setup to investigate what effect changes will have after n integration time steps.
Is this something we can easily change in the namelist, i.e. n_blocks? Or will we need to run some pre-processing? If it's straightforward then I can just apply a patch to the namelist from the tests...
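If the value lives in namoptions as a plain `name = value` line, the tests could patch it with a small regex and no Fortran-namelist dependency. A sketch under that assumption (the function name is hypothetical; a library such as f90nml would be a more robust alternative):

```python
import re

def patch_namelist(text: str, key: str, value: str) -> str:
    """Replace the first 'key = old' assignment in namelist text with 'key = value'."""
    pattern = re.compile(rf"^(\s*{re.escape(key)}\s*=\s*)\S+",
                         re.MULTILINE | re.IGNORECASE)
    return pattern.sub(lambda m: m.group(1) + value, text, count=1)

nml = "&RUN\nruntime = 1000.\n/\n"
short = patch_namelist(nml, "runtime", "5.")   # run for only 5 s in the tests
```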
@dmey yes, I understand what the checks are there for. My concern is that with coarse resolution, small domains and short run times, the flow does not even get close to reaching the fully developed, continuously turbulent state that the LES is intended to model. The flow generally remains laminar, with a small boundary layer developing in the lowest few cells (2 or 3 cells, depending on the initial perturbation/randomisation). The tests are therefore not indicative of the typical advection and subgrid-scale diffusion that we want to model. However, I am sure that any change to these will still result in a change in this initial development, so perhaps it does not matter anyway. If we are not concerned by this then we can run the tests for 5 s as I mentioned above. If we did feel this was a problem, we could have the tests as a warmstart, where they read in files from a developed simulation and then model a 5 s period of a more realistic case. The example simulations are all now running with a 64x64x64 domain size. I think this should be increased if we want these to be representative of typical simulations. It depends on exactly what we want these examples to do, @bss116? I can easily increase the size of the domain and the run time of these. The run time is not an issue for using them as tests, as it only requires values in namoptions to be changed. However, changing the domain size means that different blocks.inp files are needed, which means the preprocessing must be rerun. The latest commit on bss/example-simulations has got it to the point where all the example simulations run and where you can run the preprocessing (from so-tg/preproc) on all of these to produce these set-ups. The final adjustments to make are 1) run times, 2) outputs, 3) domain size/block configurations. After this we may want to discuss documentation and producing some example postprocessing of them etc.
Yeah, so I would say definitely increase the size where suitable; in my opinion the example cases should be realistic simulations. We should do a mix of simulations that can easily be run on a local machine (e.g. 64x64x64 for 001 and 002), and for the non-neutral ones we should have a domain size that is justifiable, while keeping it as small as possible. The namoptions of the examples still list a few unnecessary parameters (where they do not differ from the defaults); as discussed, I think we should only put in the non-default switches, to make it clear what has to change to set them up. I'm happy to go over them and do this.
Yes, it would be good to get rid of unnecessary namoptions. All in the &INPS section at the moment are necessary for preprocessing, I think. Agreed on smaller cubes in 102 - sorry, I had a note to refine the blocks across all of them but forgot to do this. I randomly made the geometry of 502 by writing the build.502 file manually; happy for something else to be used. On a similar note, 201 uses LIDAR data but, as the domain is quite small, only has a couple of buildings.
I made a start on doing this for the neutral simulations 001 and 002. Please have a quick look that I didn't delete anything important. I also changed to
Okay, will change this along with simplifying the namoptions.
Yeah, the 201 domain should probably increase as well. What would you recommend as domain sizes? When we know the size, I can make us a new random geometry for 502.
I think the problem may be to do with the non-uniform colour scales in these plots - the values inside the buildings are changing the colour scales. I can double-check if you send the .nc file. The middle plot does look slightly weird, but I think it is a result of what I posted above - again, I can have a look.
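One way to make such panels comparable is to mask the in-building cells before computing the colour limits, so every panel shares a scale based on fluid cells only. A numpy sketch of that idea (the function and argument names are assumptions; plotting code would then pass the returned vmin/vmax to each panel):

```python
import numpy as np

def shared_clim(fields, inside_building):
    """Common (vmin, vmax) over several slices, ignoring cells inside buildings."""
    masked = [np.ma.masked_where(inside_building, f) for f in fields]
    return float(min(m.min() for m in masked)), float(max(m.max() for m in masked))
```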
Ok, I see. Then regardless of whether we change the default of bldT, we should move it out of the energy balance section and into WALLS. I was not aware that this is being used outside the energy balance. We could even give it a default value of -1, to make sure this value is set by the user?
Its value does not matter (if
Alright. Let's leave the decision to #77 then; I'll note that we also discussed this here. I'll get going with the documentation in the next few days, will keep you posted!
@tomgrylls quick question: is it on purpose that in 201 the temperature is determined by wall functions (iwalltemp = 2), but not the moisture?
I have now added documentation for the example cases. Please have a look and feel free to make any changes to form and content.
Hmm, no, I think moisture should use wall functions too. I think that is a mistake, but I have never run any simulations using the energy balance. I guess we should really have checks that both of these are set to 2 when the energy balance is on?
Added iwallmoist = 2 in 201. Test results are the same: 201 continues to work locally and breaks on HPC.
I made a few small edits to the example sims docs. Mainly just adding a bit of detail to the thermal BCs/scalar sources etc.
@bss116 is the problem with 201 running locally something we have already tried to solve? I have also noticed a potential problem in 502: for these driven simulations you need to ensure there is sufficient space between the last block in the streamwise direction and the end of the domain, to avoid problems with the convective outflow BCs. I didn't notice this as a problem when running this case, so it seems okay, but it may be good to increase the buffer layer (
Running on my Mac works fine, the problem is on HPC (#82).
Yeah, I'm fine with that; we will only need to re-run the pre-processing for it, right? You should also mention this somewhere, e.g. in the examples document, or probably even better in the pre-processing docs.
I will redo the preprocessing with a larger buffer and add a line to the driver simulation details I put in the docs.
@tomgrylls can you please add details to the driver parameters in 501 (example docs)? I just realised that I should have probably increased
I saw there's lots of info on the driver parameters already in the other documentation, sorry. I'll just add a couple of sentences to the example docs. I was thinking of re-running 501 on the cluster with runtime 1000 s, tdriverstart = 950, dtdriver = 1 and driverstore = 51; what do you think about that? Or should we keep the input files minimal and just use driverstore = 11?
Are there any restrictions on dtmax in &RUN of the driven simulation 502? Does it need to be <= dtdriver of the precursor simulation 501?
I think longer driver settings can be a good idea as you posted above. The driver files consist of y-z planes so their size shouldn't be too much of an issue at this scale of simulation.
I think it should be fine even if it is larger than dtdriver, as the code will interpolate between the nearest two timesteps in the driver files. Ideally dt = dtmax = dtdriver in both simulations, to avoid the need for interpolation, but this is also not necessary.
For 50 seconds it is 1.8 MB per file, so roughly 12 MB in total. Should we go with that, or increase even further to 100 s?
Okay, no need to add any restrictions on that then. It will still interpolate if the actual time step is below dtmax due to CFL, right? Doesn't sound like there is an easy way around interpolation.
I think either is fine. Real simulations will need to run for thousands of seconds, so both of these just showcase how to do it in the example.
Yes, the timestep being below dtmax is also fine. The way to avoid interpolation is to set dtmax in both simulations, and dtdriver in 501, to the same value below the minimum dt that we get in 501. The time step will then be constant and a driver plane will be written every timestep. This is easier to do when you already know what the timestep is in the driver simulation - which we do. For example, if the minimum timestep in 501 is 0.55 seconds, then we run it now with dtmax = 0.5 and dtdriver = 0.5, then use these drivers for 502 with dtmax = 0.5 again.
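The interpolation being discussed amounts to linear interpolation between the two driver planes nearest in time. A sketch of the idea only (not the actual Fortran implementation in uDALES):

```python
import bisect

def interp_driver(t, times, planes):
    """Linearly interpolate inflow planes (lists of values) stored at 'times' to time t."""
    if t <= times[0]:
        return planes[0]
    if t >= times[-1]:
        return planes[-1]
    i = bisect.bisect_right(times, t)            # times[i-1] <= t < times[i]
    w = (t - times[i - 1]) / (times[i] - times[i - 1])
    return [(1 - w) * a + w * b for a, b in zip(planes[i - 1], planes[i])]

# with dt = dtmax = dtdriver and aligned timesteps, t always hits a stored plane
# exactly, so the weight degenerates to 0 and no interpolation error is introduced
```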
Good point. I'll add this as a remark to the example simulations.
This is now going into the details, but could we also use a larger dtmax in the driver simulation and only restrict dtdriver? Would that mean it uses the larger dt before tdriverstart and goes to smaller timesteps later? E.g. in 501, dtmax = 2, dtdriver = 0.5?
I am happy for this to stay as it is - with driver fields interpolated and inflow not reaching the edge of the domain. But equally we could just set dtmax = 0.1 for both simulations and run for 100 s! I am happy either way.
I will leave it as it is for now, but we can change it anytime later -- see #84 (comment). |
Add example simulation setups in the examples folder with the namoptions reduced to a minimal version needed for the specific case. These cases should include:
- neutral stability case with blocks -- which forcing? one case for each forcing?
- non-neutral case with temperature -- different forcings?
- scalar release case
- full energy balance

Please expand the list!