Add example simulations #39

Closed
bss116 opened this issue Nov 27, 2019 · 65 comments

@bss116
Contributor

bss116 commented Nov 27, 2019

Add example simulation setups in the examples folder with the namoptions reduced to a minimal version needed for the specific case. These cases should include:

  • neutral stability case with blocks -- which forcing? one case for each forcing?

  • non-neutral case with temperature -- different forcings?

  • scalar release case

  • full energy balance

please expand the list!

@bss116 added the documentation, clean-up and pre-processing labels on Nov 27, 2019
@bss116
Contributor Author

bss116 commented Jan 16, 2020

I will update the example simulation with new namoptions for flow rate forcing and check the neutral simulations. @samoliverowens or @ivosuter can you take care of a full EB example? Further examples can be added later on.

@bss116 added this to the 0.1.0 milestone Jan 16, 2020
@bss116
Contributor Author

bss116 commented Jan 16, 2020

I opened up a new branch for the example simulations: https://github.com/uDALES/u-dales/tree/bss/example-simulations.
Here are more details on the different options that could be included in the examples. I would suggest lumping a few of these together:

  1. building layouts:
  • canyon
  • staggered/aligned
  • complex layout: either realistic geometries or from urban landscape generator
  2. zgrid:
  • equidistant
  • stretched
  3. lateral boundary conditions:
  • periodic
  • inflow/outflow
  4. forcing (options for u, v, u+v):
  • pressure gradient
  • flow rate
  • free stream velocity
  5. scalars:
  • none
  • point source
  • line source
  6. temperature and buoyancy:
  • none (neutral simulation)
  • various options for temperature and buoyancy forcings (I'm not familiar with these...)
  • full energy balance

@samoliverowens
Contributor

samoliverowens commented Mar 4, 2020

I've just uploaded a set of example simulations differing by geometry and whether the energy balance is on. The rest of the parameters don't correspond to any particular setup yet, and are meant to be changed.

  • 000: lflat = .true. (no buildings)
  • 100: lcube = .true. (linear cubes)
  • 150: lcube = .true., lEB = .true.
  • 200: lcanyons = .true. (canyons)
  • 300: lcastro = .true. (staggered cubes)
  • 400: lblocksfile = .true. (blocks generated from blocks.400)
  • 450: lblocksfile = .true., lEB = .true.
  • 500: lblocksfile = .true. (blocks generated from blocks.500)
  • 550: lblocksfile = .true., lEB = .true.
  • 900: llidar = .true. (blocks generated from mean-gs-dis-rot.png - South Kensington LIDAR data)
  • 950: llidar = .true., lEB = .true.

@bss116
Contributor Author

bss116 commented Mar 6, 2020

Great! With the cleaned-up namoptions it will be easier to set up the different options nicely. There are only a few parameters we need to set for every simulation; the others then depend on the setup. Here is a minimal list, using a neutral simulation with volume flow-rate forcing as an example:

&RUN
iexpnr = 503
runtime = 14
trestart = 13
ladaptive = .true.
randu = 0.01
/

&OUTPUT
lxytdump = .true.
tsample = 2.0
tstatsdump = 10
/

&DOMAIN
imax = 64
jtot = 64
kmax = 64
xsize = 128
ysize = 128
/

&BC
wtsurf = 0.0
wqsurf = 0.0
thls = 288.0
z0 = 0.01
z0h = 6.7e-05
/

(these are needed for the bottom subroutine!)

&WALLS
nblocks = 41
nfcts = 57
iwallmom = 3
/

&PHYSICS
ps = 101500.0
igrw_damp = 0
uflowrate = 1.5
vflowrate = 1.5
luvolflowr = .true.
lvvolflowr = .true.
/

&DYNAMICS
ipoiss = 0
/

&NAMSUBGRID
lvreman = .true.
/

@bss116
Contributor Author

bss116 commented Mar 12, 2020

@samoliverowens I moved your documentation on the boundary conditions to this branch, because I think it is a great start for guidelines on how to set up simulation parameters. I also added the neutral simulation described above as example setup 503, and a file namoptions.xxx that has all switches at their defaults, sorted as currently in the namoptions makeover (#64). Once everyone is happy with the new categorisation and the PR is approved, we can use this as a basis for setting up the examples.

@bss116 pinned this issue Apr 1, 2020
@bss116
Contributor Author

bss116 commented Apr 1, 2020

@samoliverowens I plotted the layouts and it seems that there is an error in example 300: there is one floor facet that overlaps with the last block (41 56 57 64 1 16):
41 56 41 64 0 0 34 0 0 0 0
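
A quick way to catch this kind of overlap automatically is to scan the blocks file. The following is only a rough sketch (not part of the pre-processing itself): it assumes, as in the lines quoted above, that the first six columns of each row are il, iu, jl, ju, kl, ku, and that floor facets are the rows with ku = 0.

import numpy as np

def overlapping_floors(path):
    # adjust skiprows/comments if the blocks file carries a header
    data = np.loadtxt(path)
    idx = data[:, :6].astype(int)
    floors = idx[idx[:, 5] == 0]     # ku == 0: floor facets
    blocks = idx[idx[:, 5] > 0]      # ku > 0: building blocks
    pairs = []
    for f in floors:
        for b in blocks:
            # inclusive i- and j-index ranges overlap in both directions
            if f[0] <= b[1] and b[0] <= f[1] and f[2] <= b[3] and b[2] <= f[3]:
                pairs.append((f.tolist(), b.tolist()))
    return pairs

for f, b in overlapping_floors("blocks.inp.300"):
    print("floor", f, "overlaps block", b)

For the case above this would flag the floor facet 41 56 41 64 0 0 against the last block (41 56 57 64 1 16).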

@bss116
Contributor Author

bss116 commented Apr 1, 2020

My suggestion for example setups:

  1. the most basic: lcube (100), periodic lateral BC, a constant pressure gradient in x
  2. lcanyons (200) with volume flow forcing in u and v, some scalars
  3. lstaggered with small cubes (similar setup to 150), some temperature (and a larger domain if needed for that)
  4. lblocksfile (500), a stretched z grid, inflow/outflow (can we always do this, or do we need a driver simulation + driver layout?)
  5. llidar (900), energy balance

What do you think about that? We can of course always add more features to it, this is just a start to get the geometry and basic switches set up.

@samoliverowens
Contributor

@samoliverowens I plotted the layouts and it seems that there is an error in example 300: there is one floor facet that overlaps with the last block (41 56 57 64 1 16):
41 56 41 64 0 0 34 0 0 0 0

I agree that's not what we want, though I just checked and it matches what the old pre-processing routines produce, so it looks like a problem with how createfloors works for this case. @ivosuter or @tomgrylls may be able to shed some light on it.

@samoliverowens
Contributor

My suggestion for example setups:

  1. the most basic: lcube (100), periodic lateral BC, a constant pressure gradient in x
  2. lcanyons (200) with volume flow forcing in u and v, some scalars
  3. lstaggered with small cubes (similar setup to 150), some temperature (and a larger domain if needed for that)
  4. lblocksfile (500), a stretched z grid, inflow/outflow (can we always do this, or do we need a driver simulation + driver layout?)
  5. llidar (900), energy balance

What do you think about that? We can of course always add more features to it, this is just a start to get the geometry and basic switches set up.

That sounds good, and with 5. we should have temperature & buoyancy on too.

@tomgrylls
Contributor

I see the benefit of having a limited number of simulations of increasing complexity, as opposed to a comprehensive set that systematically shows the user all options (this would result in a lot of simulations as there are a lot of variables - see below). I think to do it like this we need to create some kind of table/documentation for these simulations where it is very clear to the user what is changing across them. The main variables being:

  • domain (size, stretch, morphology)
  • forcing in x and y (pressure gradient, vol flow rate, outflow rate, driven, coriolis)
  • lateral momentum BCs (periodic, driver)
  • lateral scalar BCs (periodic, inflow-outflow, driver)
  • scalar bottom BC/IBM (zero flux, const. flux, iso, energy balance)
  • scalar top BC (zero flux, const flux, iso)
  • passive scalar sources (point, line, network)
  • outputs (fielddump, averaging times, statsdumps)

Other variables being:

  • initial conditions (initial profiles)
  • large scale forcings (subsidence, volumetric sources)
  • nudging
  • chemistry
  • trees
  • purifiers
  • energy balance specifics - wall types, green roofs, radiation etc.

The above set of 5 simulations can provide good coverage of these main variables. I would suggest we do not number them like this if the numbering does not have some systematic meaning. We could come up with some simple system (e.g. 0xx - neutral, 1xx non-neutral, 2xx energy balance, 5xx driver sims, 8xx trees etc.). To add some specificity, here are my recommendations for the above:

  1. 001 - Flat simulation, neutral, periodic lateral BCs, const. pressure gradient in x.
  2. 002 - lcube, neutral, periodic lateral BC, const. pressure gradient in x.
  3. 101 - lcanyons, non-neutral: isothermal BC from all IBM and top, periodic lateral BCs for all, volume flow rate forcing in x, passive scalar line sources.
  4. 102 - lstaggered, non-neutral: constant flux thermal BC from roads, roofs and top, periodic lateral BCs for momentum and temp + inflow-outflow for passive scalar, volume flow rate forcing in x and y, two scalar point sources.
  5. 501 and 502 a) 501 - the driver simulation based off 101 with no passive scalars. b) 502 - lblocksfile non-neutral: isothermal IBM, driven simulation (inflow-outflow all), forcing: driver sim.
  6. 201 - llidar, non-neutral: energy balance, periodic BCs, coriolis forcing, nudging, stretched z-grid.

Vary domain sizes and fielddump/ statsdump outputs across all of these.

  • on the tomgrylls/trees-driver-patch branch:
  1. 801 - tree simulation I have set up
  2. 901 - purifier simulation I have set up.

This does not cover all the variables discussed, and ideally I think these examples should be set up systematically by varying the code functionalities around one simple base case. But this would require a large number of example simulations that may also be messy, so I think something like this sounds good.

Thoughts on this? I should be able to set all these up pretty quickly. Would we want to remove what is currently in bss/example-simulations and only have the above?

@dmey
Contributor

dmey commented Apr 16, 2020

Thoughts on this? I should be able to set all these up pretty quickly. Would we want to remove what is currently in bss/example-simulations and only have the above?

I think this would be really good. 👍

@tomgrylls Are all of the above a superset of the current tests? If this is the case, we could simply run tests based on these namelists, thus removing duplication and making testing easier to maintain. Any thoughts?

@tomgrylls
Contributor

The tests would be more thorough based on the above - but this would require running quite a few more separate simulations each time in testing? We could base the tests on a limited number of these, e.g. 002, 501/502 and 201.

It depends slightly on the size of the domains/resolutions that we set in the above. If these are relatively small for the chosen example simulations, then all that would be necessary is to use the same input files but with runtime and tfielddump reduced accordingly. If the domain size has to change, then so will the .inp files in the tests. I think this is a good idea, but unless we change domain sizes it would mean that the example simulations are pretty low quality, with coarse resolutions and relatively small domains.

@tomgrylls
Contributor

@bss116 @samoliverowens I will make a start on updating the existing example simulation namoptions files in bss/example-simulations to work with the namoptions restructuring, and, as I do this, try to produce the above set of simulations. Let me know of any comments or other things you would like to be done.

@bss116
Contributor Author

bss116 commented Apr 16, 2020

I think all of the above sounds really good, yes feel free to remove the current example simulations, they were just for initial ideas. I wouldn't worry about the naming of the simulations too much, I'd simply call them 001 -- 006. We will write a separate document (maybe part of the setting-up simulations?) where we give an overview of the setup and required parameters/switches.

Personally I wouldn't include a flat simulation in the examples, as this is not a common use case for uDALES (but of course keeping the functionality, as it may be used as an additional control case).

For the domain size, I suggest keeping at least one of them relatively small (maybe the neutral one with cubes) so that it can be used as a quick example/debug simulation.

@dmey
Contributor

dmey commented Apr 16, 2020

It depends slightly on the size of the domains/resolutions that we set in the above

What if we keep the size unchanged and instead run for a very short time? Would that be enough to capture some meaningful data that we can use to benchmark one uDALES version against another? If that is the case, then we can simply patch the namelist programmatically before running uDALES. We can do that for the domain size of course, but if I remember correctly there is a bit more to it than just changing a couple of vars in the namelist!?
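
As a rough illustration of what "patch the namelist programmatically" could look like (a sketch only; the exact entries to override, and the section tfielddump lives in, are assumptions here), the f90nml package can copy a namoptions file and override selected entries:

import f90nml

# override only what the test needs; everything else stays as in the example case
overrides = {
    "run": {"runtime": 5.0, "trestart": 5.0},   # short test run
    "output": {"tfielddump": 5.0},              # assumed section for tfielddump
}
f90nml.patch("namoptions.001", overrides, "namoptions.001.test")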

@tomgrylls
Contributor

@dmey how many cores do we run on the CI? We are never going to have a fully developed simulation as a test unless we use warmstart files (which could be a good idea to test the model in the state we are interested in - continuous, fully turbulent). If we are not going to do this, then seeing as we are unlikely to let it fully develop, we may as well run simulations for short periods, e.g. 5 seconds. Even with these short run times we will still want fairly small numbers of cells, ~1,000,000 or fewer, as computation time rises rapidly, especially if we are limited to a few cores.

I have made a start on the above set of example simulations. I have made 001, 002, 101, 102 and 201. I still need to 1) make 501 and 502, 2) adjust domain sizes, run times and outputs across these, and 3) run tests of all these simulations. There may be mistakes in what I have done so far, so I would wait until Tuesday before reviewing them as I will likely make changes.

For 201 I have added the image file for the LIDAR as I used this in preprocessing but the file is 9.6 MB. Do we want this in the experiment folder or elsewhere? Or at all?

@dmey
Contributor

dmey commented Apr 17, 2020

@dmey how many cores do we run on the CI? We are never going to have a fully developed simulation as a test unless we use warmstart files (which could be a good idea to test the model in the state we are interested in - continuous, fully turbulent). If we are not going to do this, then seeing as we are unlikely to let it fully develop, we may as well run simulations for short periods, e.g. 5 seconds.

Two, but just to clarify in case this is not clear: the current tests (integration and regression tests) are not designed to tell us if the model is sound, but simply to check whether we introduce a change that may impact the simulation results from one version/branch to another. If you introduce a change in a new commit, we check against master that that new commit has not changed the results -- obviously, given the large number of switches in the namelist, one would need to run the model for all permutations and check all output quantities from uDALES to be sure. Here we take a more pragmatic approach and only run the model for a set of namelists and check for outputs we deem important. I think it's fine to run short simulations if we are looking for equality between two uDALES versions (assuming that the switches for the extra parametrizations are active for that short time period). I do not think it's feasible in the current setup to investigate what effect changes will have after n integration time steps.

Even with these short run times we will still want fairly small numbers of cells ~1000000 or less as computation time rises rapidly especially if we are limited to few cores.

Is this something we can easily change in the namelist, i.e. n_blocks? Or will we need to run some pre-processing? If it's straightforward, then I can just apply a patch to the namelist from the tests...

@tomgrylls
Contributor

@dmey yes, I understand what the checks are there for. My concern is that with coarse resolution, small domains and short run times, the flow does not even get close to reaching the fully developed, continuously turbulent state that the LES is intended to model. The flow generally remains laminar, with a small boundary layer developing in the lowest few cells (2 or 3 cells, depending on the initial perturbation/randomisation). The tests are therefore not indicative of the typical advection and subgrid-scale diffusion that we want to model. However, I am sure that any change to these will still result in a change in this initial development, so perhaps it does not matter anyway. If we are not concerned by this, then we can run the tests for 5 s as I mentioned above. If we did feel this was a problem, we could have the tests as a warmstart, where they read in files from a developed simulation and then model a 5 s period of a more realistic case.

The example simulations are all now running with a 64x64x64 domain size. I think this should be increased if we want these to be representative of typical simulations. It depends on exactly what we want these examples to do, @bss116? I can easily increase the size of the domain and the run time of these. The run time is not an issue for using them as tests, as it only requires values in namoptions to be changed. However, changing the domain size means that different blocks.inp files are needed, which means the preprocessing must be rerun.

The latest commit on bss/example-simulations has got it to the point where all the example simulations run and where you can run the preprocessing (from so-tg/preproc) on all of these to produce these set-ups. The final adjustments to make are 1) run times, 2) outputs, 3) domain size/ block configurations. After this we may want to discuss documentation and producing some example postprocessing of them etc.

@tomgrylls mentioned this issue Apr 23, 2020
@bss116
Contributor Author

bss116 commented Apr 23, 2020

Yeah, so I would say definitely increase the size where suitable; in my opinion the example cases should be realistic simulations. We should do a mix of simulations that can easily be run on a local machine (e.g. 64x64x64 for 001 and 002), and for the non-neutral ones we should have a domain size that is justifiable, but keeping it as small as possible.
However if we do this, then we can/should probably not have the same simulations for examples and test cases. But also for the tests the geometry is probably not that important. Could we even just use a single test geometry and test it with different namoption parameters...?

The namoptions of the examples still list a few unnecessary parameters (where they do not differ from the defaults); as discussed, I think we should only put in the non-default switches to make it clear what has to change to set them up. I'm happy to go over them and do this.
For the sake of variety, shall we maybe use smaller cubes in 102 (blockwidth and canyonwidth 8 instead of 16)?
Is the geometry of 502 coming from somewhere specific, or is it just an example setup? If it's just an example, I would increase the building density; we could use, for example, one of my heterogeneous layouts.
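
Since namoptions.xxx keeps every switch at its default, a quick f90nml comparison against it would list exactly the entries an example actually changes. A rough sketch of such a helper (the file names here are only illustrative):

import f90nml

defaults = f90nml.read("namoptions.xxx")   # all switches at their defaults
case = f90nml.read("namoptions.102")

for group, params in case.items():
    for key, value in params.items():
        if group not in defaults or key not in defaults[group]:
            print(f"&{group.upper()}: {key} = {value} (not in the defaults file)")
        elif defaults[group][key] != value:
            print(f"&{group.upper()}: {key} = {value} (default: {defaults[group][key]})")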

@tomgrylls
Contributor

Yes, it would be good to get rid of unnecessary namoptions. Everything in the &INPS section at the moment is necessary for preprocessing, I think.

Agreed smaller cubes in 102 - sorry I had a note to refine the blocks across all of them but forgot to do this. blockwidth = 8 is good.

I randomly made the geometry of 502 by writing the build.502 file manually. Happy for something else to be used.

On a similar note, 201 uses LIDAR data but as the domain is quite small only has a couple of buildings.

@bss116
Contributor Author

bss116 commented Apr 23, 2020

Yes, it would be good to get rid of unnecessary namoptions. Everything in the &INPS section at the moment is necessary for preprocessing, I think.

I made a start on doing this for the neutral simulations 001 and 002. Please have a quick look that I didn't delete anything important. I also changed to ipoiss = 0 and iwallmom = 3. Or should we keep = 2 as long as this is not thoroughly tested? If = 3, do we then still need Tfacinit.inp at all?

Agreed smaller cubes in 102 - sorry I had a note to refine the blocks across all of them but forgot to do this. blockwidth = 8 is good.

Okay, will change this along with simplifying the namoptions.

I randomly made the geometry of 502 by writing the build.502 file manually. Happy for something else to be used.

On a similar note, 201 uses LIDAR data but as the domain is quite small only has a couple of buildings.

Yeah, the 201 domain should probably increase as well. What would you recommend as domain sizes? When we know the size, I can make us a new random geometry for 502.

@bss116
Contributor Author

bss116 commented May 5, 2020

Yes agreed. Perhaps we should also just adjust the outputs of each simulation. For example, the simulation with infinite canyons should output ytdump, while others are better suited to tdump and/or xytdump.

Yes sorry, forgot about that. Also runtimes could be adjusted. Maybe 50 or 100 s?

I am a bit confused about the 3D-temperature output fields, where some x or y slices have a uniform temperature distribution with different temperatures inside the blocks, and other slices show turbulent fields. But I am guessing this is alright...?

I think I know what you mean. If this is referring to what I think, then for all scalar fields (temp, moisture, passive scalars) we overwrite the values at the edge of the blocks to be equal to the adjacent cells, to minimise the subgrid diffusion that may occur in the IBM. This is particularly important at the start-up of simulations. In that case this is to be expected. If not, can you post a couple of screenshots?

I'm not quite sure what I am seeing here. Attached are scalar and temperature fields for 101.
101-thl
101-scalar

@tomgrylls
Contributor

I think the problem may be to do with the non-uniform colour scales in these plots - the values inside the buildings are changing the colour scales. I can double-check if you send the .nc file. The middle plot does look slightly weird, but I think it is a result of what I posted above - again, I can have a look.

@bss116
Contributor Author

bss116 commented May 5, 2020

Ah yes, of course! You are right: with the colorbar capped between 285 and 288 K it looks alright. But what about the temperature of 0 K inside the blocks...?
101-thl-54
101-thl-cap

@tomgrylls
Contributor

This is because the default of bldT is 0. That's one of the reasons I think just changing this default is not such a bad idea in #77. This plotting issue will also be resolved in the future by #57; ideally we then set the value inside the blocks to the dummy variable and they are then read as NaN.
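
Until #57 is in, the plots can be tidied on the post-processing side along these lines. A sketch only: the file, variable and dimension names (fielddump.101.nc, thl, yt) are assumptions about the output naming, not something fixed by this thread.

import matplotlib.pyplot as plt
import xarray as xr

ds = xr.open_dataset("fielddump.101.nc")    # assumed output file name
thl = ds["thl"].isel(time=-1, yt=32)        # assumed variable/dimension names
thl = thl.where(thl > 0.0)                  # block interiors (bldT default 0 K) become NaN
thl.plot(vmin=285.0, vmax=288.0)            # cap the colour scale as discussed above
plt.show()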

@bss116
Contributor Author

bss116 commented May 5, 2020

OK, I see. Then regardless of whether we change the default of bldT, we should move it out of the energy balance section and into WALLS. I was not aware that this is being used outside the energy balance. We could even give it a default value of -1, to make sure this value is set by the user?

@tomgrylls
Contributor

Its value does not matter (if lEB = .false.) - there is just a line at the initialisation stage of modibm that sets the 'internal' temperatures to bldT. I think this line was intended to avoid the SGS diffusion through the walls I discussed above. But regardless, we now do this in a better way by updating the temperatures at the edge of the buildings at every time step. I am pretty confident we can take this line out, but equally I think it has no effect. There is a small chance it will in the first iteration of the first time step, but then a different value to bldT would be preferable anyway.

@bss116
Contributor Author

bss116 commented May 5, 2020

Alright. Let's leave the decision to #77 then, I'll note that we also discussed this here.

I'll get going with the documentation in the next few days, will keep you posted!

@bss116
Contributor Author

bss116 commented May 12, 2020

@tomgrylls quick question: is it on purpose that in 201 the temperature is determined by wall functions (iwalltemp = 2), but not the moisture?

@bss116
Contributor Author

bss116 commented May 12, 2020

I added now a documentation for the example cases. Please have a look and feel free to make any changes to form and content.

@tomgrylls
Contributor

@tomgrylls quick question: is it on purpose that in 201 the temperature is determined by wall functions (iwalltemp = 2), but not the moisture?

Hmm, no, I think moisture should use wall functions too. I think that is a mistake, but I have never run any simulations using the energy balance. I guess we should really have checks for the energy balance that both of these are set to 2?
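
Until such a check exists in the model, something along these lines could be run against a namoptions file before submitting a job. A sketch only: the section names for lEB, iwalltemp and iwallmoist are assumptions here.

import f90nml

nml = f90nml.read("namoptions.201")
eb = nml.get("energybalance", {})    # assumed section name
walls = nml.get("walls", {})         # assumed section name

if eb.get("leb", False):
    if walls.get("iwalltemp") != 2 or walls.get("iwallmoist") != 2:
        raise SystemExit("lEB = .true. requires iwalltemp = 2 and iwallmoist = 2")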

@bss116
Contributor Author

bss116 commented May 13, 2020

Added iwallmoist = 2 in 201. Test results are the same: 201 continues to work locally and breaks on HPC.

@tomgrylls
Contributor

I made a few small edits to the example sims docs. Mainly just adding a bit of detail to the thermal BCs/scalar sources etc.

@tomgrylls
Contributor

@bss116 is the problem with 201 running locally something we have already tried to solve?

I have also noticed a potential problem in 502: for these driven simulations you need to ensure there is sufficient space between the last block in the streamwise direction and the end of the domain, to avoid problems with the convective outflow BCs. I didn't notice this as a problem when running this case, so it seems okay, but it may be good to increase the buffer layer (pad in namoptions) for good practice.

@bss116
Contributor Author

bss116 commented May 13, 2020

@bss116 is the problem with 201 running locally something we have already tried to solve?

Running on my Mac works fine, the problem is on HPC (#82).

I have also noticed a potential problem in 502: for these driven simulations you need to ensure there is sufficient space between the last block in the streamwise direction and the end of the domain, to avoid problems with the convective outflow BCs. I didn't notice this as a problem when running this case, so it seems okay, but it may be good to increase the buffer layer (pad in namoptions) for good practice.

Yeah, I'm fine with that; we will only need to re-run the pre-processing for it, right? You should also mention this somewhere, e.g. in the examples document, or probably even better in the pre-processing docs.

@tomgrylls
Contributor

I will redo the preprocessing with a larger buffer and add a line to the driver simulation details I put in the docs.

@bss116
Contributor Author

bss116 commented May 13, 2020

@tomgrylls can you please add details to the driver parameters in 501 (example docs)? I just realised that I should have probably increased tdriverstart and added tdriverdump when running 501 for 1000 seconds, right? Will have to do this again for the driver files in 502.

@bss116
Contributor Author

bss116 commented May 14, 2020

I saw there's lots of info on the driver parameters already in the other documentation, sorry. I'll just add a couple of sentences to the example docs. I was thinking of re-running 501 on the cluster with runtime 1000 s, tdriverstart = 950, dtdriver = 1 and driverstore = 51, what do you think about that? or should we keep the input files minimal and just use driverstore = 11?

@bss116
Contributor Author

bss116 commented May 14, 2020

Are there any restrictions on dtmax in &RUN of the driven simulation 502? Does it need to be <= dtdriver of the precursor simulation 501?

@tomgrylls
Contributor

I saw there's lots of info on the driver parameters already in the other documentation, sorry. I'll just add a couple of sentences to the example docs. I was thinking of re-running 501 on the cluster with runtime 1000 s, tdriverstart = 950, dtdriver = 1 and driverstore = 51, what do you think about that? or should we keep the input files minimal and just use driverstore = 11?

I think longer driver settings can be a good idea as you posted above. The driver files consist of y-z planes so their size shouldn't be too much of an issue at this scale of simulation.

Are there any restrictions on dtmax in &RUN of the driven simulation 502? Does it need to be <= dtdriver of the precursor simulation 501?

I think it should be fine even if it is larger than dtdriver as the code will interpolate between the nearest two timesteps in the driver files. Ideally dt = dtmax = dtdriver in both simulations to avoid the need for interpolation but this is also not necessary.

@bss116
Contributor Author

bss116 commented May 15, 2020

I think longer driver settings can be a good idea as you posted above. The driver files consist of y-z planes so their size shouldn't be too much of an issue at this scale of simulation.

for 50 seconds it is 1.8 MB per file, so roughly 12 MB in total. Should we go with that, or increase even further to 100 s?

Are there any restrictions on dtmax in &RUN of the driven simulation 502? Does it need to be <= dtdriver of the precursor simulation 501?

I think it should be fine even if it is larger than dtdriver as the code will interpolate between the nearest two timesteps in the driver files. Ideally dt = dtmax = dtdriver in both simulations to avoid the need for interpolation but this is also not necessary.

Okay, no need to add any restrictions on that then. It will still interpolate if the actual time step is below dtmax due to CFL, right? Doesn't sound like there is an easy way around interpolation.

@tomgrylls
Contributor

I think longer driver settings can be a good idea as you posted above. The driver files consist of y-z planes so their size shouldn't be too much of an issue at this scale of simulation.

for 50 seconds it is 1.8 MB per file, so roughly 12 MB in total. Should we go with that, or increase even further to 100 s?

I think either is fine. Real simulations will need to run for thousands of seconds, so either of these just showcases how to do it in the example.

Are there any restrictions on dtmax in &RUN of the driven simulation 502? Does it need to be <= dtdriver of the precursor simulation 501?

I think it should be fine even if it is larger than dtdriver as the code will interpolate between the nearest two timesteps in the driver files. Ideally dt = dtmax = dtdriver in both simulations to avoid the need for interpolation but this is also not necessary.

Okay, no need to add any restrictions on that then. It will still interpolate if the actual time step is below dtmax due to CFL, right? Doesn't sound like there is an easy way around interpolation.

Yes, the timestep being below dtmax is also fine. The way to avoid interpolation is to set dtmax in both simulations and dtdriver in 501 to the same value, below the minimum dt that we get in 501. The time step will then be constant and a driver plane will be written every timestep. This is easier to do when you already know what the timestep is in the driver simulation - which we do.

For example, if the minimum timestep in 501 is 0.55 seconds, then we run it now with dtmax = 0.5 and dtdriver = 0.5. Then use these drivers for 502 with dtmax = 0.5 again.
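
Written out as namoptions fragments, that recipe would look roughly like this (a sketch: the driver parameters are listed without their section header, which this thread does not spell out, and driverstore = 101 simply follows from (runtime - tdriverstart)/dtdriver + 1):

! namoptions.501 (precursor/driver)
&RUN
runtime = 1000.
dtmax = 0.5          ! chosen below the minimum dt of the run, so the step stays constant
/
! driver parameters (section not shown): tdriverstart = 950., dtdriver = 0.5, driverstore = 101

! namoptions.502 (driven)
&RUN
dtmax = 0.5          ! same value, so the driver planes are read without interpolation
/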

@bss116
Contributor Author

bss116 commented May 15, 2020

for 50 seconds it is 1.8 MB per file, so roughly 12 MB in total. Should we go with that, or increase even further to 100 s?

I think either is fine. Real simulations will need to run for thousands of seconds, so either of these just showcases how to do it in the example.

Good point. I'll add this as a remark to the example simulations.

I think it should be fine even if it is larger than dtdriver as the code will interpolate between the nearest two timesteps in the driver files. Ideally dt = dtmax = dtdriver in both simulations to avoid the need for interpolation but this is also not necessary.

Okay, no need to add any restrictions on that then. It will still interpolate if the actual time step is below dtmax due to CFL, right? Doesn't sound like there is an easy way around interpolation.

Yes, the timestep being below dtmax is also fine. The way to avoid interpolation is to set dtmax in both simulations and dtdriver in 501 to the same value, below the minimum dt that we get in 501. The time step will then be constant and a driver plane will be written every timestep. This is easier to do when you already know what the timestep is in the driver simulation - which we do.

For example, if the minimum timestep in 501 is 0.55 seconds, then we run it now with dtmax = 0.5 and dtdriver = 0.5. Then use these drivers for 502 with dtmax = 0.5 again.

This is now going into the details, but could we also use a larger dtmax in the driver simulation and only restrict dtdriver? Would that mean it uses the larger dt before tdriverstart and goes to smaller timesteps later, e.g. in 501 dtmax = 2, dtdriver = 0.5?
I guess if not we could get the same result by running a warmstart simulation for the driver...

@bss116
Contributor Author

bss116 commented May 15, 2020

I have now run 502 with driver inputs for 50 s at 0.5 s time steps. The dt in 501 after 1000 s actually went down to 0.28, and in 502 the dt is only 0.17, so we still need to interpolate the driver input. After 50 s runtime, the turbulence hasn't quite reached the end of the domain yet. Do we care, or is it alright like that for the example case?
502-50s

@tomgrylls
Contributor

I am happy for this to stay as it is - with driver fields interpolated and inflow not reaching the edge of the domain. But equally we could just set dtmax = 0.1 for both simulations and run for 100 s! I am happy either way

@bss116
Contributor Author

bss116 commented May 20, 2020

I will leave it as it is for now, but we can change it anytime later -- see #84 (comment).
