Simulating Truly Multi-Scale Problems and Floating Point Woes

At the moment, a hot topic in the world of modelling and simulation is multi-scale modelling. This is seen as a required component of the larger goal of producing full digital twins of real-world problems. After all, how can we hope to simulate the true physics of something when some problems simply don’t have macroscopic (or even mesoscopic) descriptions available for implementation as modelling software?

Recently I have been focussed on simulating a particular example of a truly multi-scale problem: evaporation. In this post I want to briefly discuss some of the more interesting considerations I have come across whilst looking into this, and make a note of them for the future of this work.

We started with the simple task of “producing a re-usable coupled software framework to simulate the process of evaporation”. Sounds simple, right? Unfortunately not. Most current studies utilise macroscopic models for evaporation that parameterise the flux coefficients governing mass transfer between the liquid and gas phases; the phase transition is effectively treated as a hard boundary. These models have been shown to produce reasonable results, but from a physical perspective you simply can’t escape the fact that this problem is more complex. In fact, the problem is really characterised by four distinct regions, spanning three length scales, that all need to be considered:

  1. The macroscopic liquid phase
  2. A small but distinct microscopic interface layer
  3. A larger (but still small) mesoscopic Knudsen layer
  4. The macroscopic gas phase

Macroscopic modelling wraps up points 2 and 3 as a set of coefficients.

Clearly, to tackle this we needed a full multi-scale framework. In the past I have posted about the MNF group’s various software efforts: we have a Direct Simulation Monte Carlo (DSMC) solver named dsmcFoam+ and a classical Molecular Dynamics (MD) solver called mdFoam+. I have also spoken about a general coupling framework called the Multiscale Universal Interface (MUI). This new work brings all three elements together to produce a component that can tackle points 2 and 3 directly.

So in effect, what we are talking about is a simultaneous coupling between MD and DSMC. I presented some initial results from this work at the ParCFD conference earlier this year (https://tinyurl.com/wg8ys7q), and we are now nearing completion of that work (look out for upcoming publications!). So, aside from the obvious challenge of getting the basic physical idea of combining MD and DSMC to work (more on that in those publications), what can I say about what I’ve learned? Well:

  1. Coupling codes, even those written by the same group in the same basic software framework (OpenFOAM), is still not trivial.
  2. The MUI software works well but has needed significant expansion in capability (all available in the GitHub repository https://github.com/MxUI) to cope with particle-particle problems like this one, where no spatial interpolation is needed but a lot of data has to be transferred per particle (see the sketch just after this list).
  3. Coupling codes that work at extremely different floating point numerical scales is trickier than you would imagine!
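
To make point 2 a little more concrete, here is a minimal, generic sketch (deliberately not MUI’s actual interface; the struct and function names are made up for illustration) of the kind of per-particle packet that has to cross the coupling boundary when no spatial interpolation is wanted: the full particle state is flattened into a plain buffer and the whole frame is handed to the other side.

    // Generic sketch only -- not MUI's API. ParticleState and packFrame are
    // hypothetical names used purely to illustrate the data volume involved.
    #include <vector>

    struct ParticleState            // per-particle payload to be transferred
    {
        double pos[3];              // position
        double vel[3];              // velocity
        int    speciesId;           // species/type identifier
    };

    // Flatten a batch of particles into one buffer for a single coupling
    // frame; the receiver unpacks with the same fixed layout (7 doubles each).
    std::vector<double> packFrame(const std::vector<ParticleState>& particles)
    {
        std::vector<double> buffer;
        buffer.reserve(particles.size()*7);
        for (const auto& p : particles)
        {
            buffer.insert
            (
                buffer.end(),
                {
                    p.pos[0], p.pos[1], p.pos[2],
                    p.vel[0], p.vel[1], p.vel[2],
                    static_cast<double>(p.speciesId)
                }
            );
        }
        return buffer;
    }

The point is simply that, unlike a field coupling, nothing here is sampled or interpolated at points; every particle’s state has to arrive intact, which is what drove the MUI additions mentioned above.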

I want to pick up this final point as the main take-home message of this post, because it is an important consideration for any coupling scenario.

mdFoam+ and dsmcFoam+ are designed to tackle very different problem sets. MD codes nearly all use a system of reduced units because they are always used to simulate very small things; this keeps the floating point numerics within a sensible range when simulating nanometre-scale domains with femtosecond-scale time-steps. DSMC codes, however, can be used across the scales, from km-sized problems all the way down to nm-sized ones, so they typically work in direct values. In itself this is not too troubling for the robustness of a DSMC code: everything it does is stochastic, after all, so if we lose a little accuracy to floating point rounding it’s no big deal.
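
To give a feel for the mismatch, below is a small C++ sketch using Lennard-Jones style reduced units with argon-like reference constants (illustrative textbook values, not the ones used in mdFoam+ or dsmcFoam+): the same 50 nm position is an O(100) number on the MD side and an O(1e-8) number on the DSMC side, and a quantity converted between the two systems does not, in general, round-trip exactly.

    // Illustrative only: Lennard-Jones style reduced units with argon-like
    // reference constants (typical textbook numbers, not those used in
    // mdFoam+), just to show the scale mismatch between the two solvers.
    #include <cstdio>

    int main()
    {
        const double sigma = 3.4e-10;         // reference length [m]
        const double tau   = 2.16e-12;        // reference time [s]

        // A position 50 nm into the domain, as each solver would store it:
        const double xSI      = 5.0e-8;       // DSMC side: direct SI value
        const double xReduced = xSI/sigma;    // MD side: O(100) reduced value
        std::printf("DSMC stores %e m, MD stores %f sigma\n", xSI, xReduced);

        // A reduced velocity converted to SI and back again; the multiply/
        // divide chain is typically not an exact round trip, so equality
        // checks made after a conversion need a tolerance.
        const double vReduced = 1.7;
        const double vSI      = vReduced*sigma/tau;
        const double vBack    = vSI*tau/sigma;
        std::printf("Round-trip velocity error: %e\n", vBack - vReduced);
        return 0;
    }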

However… when we are trying to directly match values from an MD simulation working in reduced units for a nm-scale problem (values of, say, 0–1000) with a DSMC simulation working in direct values (say, 0–1e-9), suddenly we have a problem. I don’t have any good answers, just problems and “hacks” to work around them; the main thrust of this post is awareness. So, to finish, I will just leave a nice neat list of things to consider:

  1. Unit conversion: When MD uses reduced units, a set of reference units is calculated from static values (reference length, time etc.). Ultimately this introduces floating point error into everything that is done, but importantly that error is consistent within the MD code. When you transfer values (particle position, velocity etc.) in reduced units from one side and convert them to non-reduced values on the other (and vice versa), things will inevitably not quite match up as you expected. You will need to catch the effects of floating point rounding on both sides and deal with them as you see best…
  2. Time-stepping: An important one. When coupling codes you need to work out when one side has simulated to the same point in time as the other. Libraries like MUI allow you to attach a simple numerical label to each set of data you transfer between solvers. Great, sounds easy, right? Yes… but only when things are consistent. Even if both solvers are iterating at exactly the same frequency, if they are not in the same frame of reference then floating point error means they actually label slightly different points in time. MUI is robust to this to an extent, but in my extreme case it simply isn’t enough. The best solution in all cases is to use an integer iteration count wherever possible (see the sketch after this list).
  3. Post-processing: Points 1 and 2 both apply when post-processing any data you produce. Ultimately, if you have coupled things together you will want to look at the output together as well; in fact, if you are calculating averages you will want to combine the data. You might have made the simulation framework robust to floating point error, but what about your post-processing techniques?
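
To back up point 2 above, here is a hedged sketch of why floating point time labels drift between coupled solvers even when they take nominally identical time-steps, and why an integer iteration count is the safer label for pairing frames. All names and constants are illustrative rather than taken from mdFoam+, dsmcFoam+ or MUI.

    // Illustrative sketch: two solvers advance by the "same" time-step, one in
    // SI units and one in reduced units, so their accumulated clocks round
    // differently. The integer iteration count stays exact on both sides.
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        const double dtSI      = 5.0e-15;     // femtosecond-scale step [s]
        const double tau       = 2.16e-12;    // reference time [s]
        const double dtReduced = dtSI/tau;    // the MD side's view of the step

        double tSI = 0.0, tReduced = 0.0;
        std::uint64_t iteration = 0;

        for (int i = 0; i < 1000000; ++i)
        {
            tSI      += dtSI;                 // DSMC-style clock
            tReduced += dtReduced;            // MD-style clock
            ++iteration;                      // shared integer label
        }

        // Comparing the clocks means converting one of them; after a million
        // steps the two rounded sums generally no longer agree exactly.
        std::printf("SI clock (native):    %.17g\n", tSI);
        std::printf("SI clock (converted): %.17g\n", tReduced*tau);

        // The integer label can be compared exactly, so use it to pair frames.
        std::printf("Iteration label:      %llu\n",
                    static_cast<unsigned long long>(iteration));
        return 0;
    }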

All told, we have managed to produce a new, directly and simultaneously coupled framework for MD to DSMC problems, and this will be published in the coming months. In the meantime, please be mindful of the above and happy coupling!