The majority of my current working time is spent considering how best to couple together solvers of different types to solve either multi-physics or multi-scale problems. As I do so I find myself asking the same questions and, over time, coming up with slightly different answers.
When I first started to consider the sorts of multiscale problems that the MNF group tackles, I approached things as a computer scientist. I was reliably informed by those far more intelligent than I that some flow problems can be solved in the continuum regime while others simply cannot; ergo, there is a clear need to couple together disparate methods to achieve a working modelling package. Coupling in this context can mean many things: monolithic couplings, where solvers are joined into a single code because they compute things in sufficiently similar ways to make this possible, or domain-decomposed techniques, where different parts of the overall problem are handled by very different solvers that communicate with each other in some way to make the overall physics work. The physics of a coupling can be strong or weak, and the methods used tight or loose; this terminology is gradually converging, and there is growing clarity about what each phrase means. As a computer scientist, then, all I needed to do was allow data to be transferred efficiently between solvers and then work out how to use that data to make sure the physics of a problem was properly captured.
The obvious thing for any good computer scientist to do is to concentrate their work in a specific software library, allowing past work to be re-used in new situations more easily. The goal of any library is to generalise a problem for re-use in as many situations as possible, and the Multiscale Universal Interface (MUI) (https://mxui.github.io/) does just that: it converts problems to a non-domain-specific representation, transfers data between solvers using the most portable method available (MPI), and introduces no new library requirements to any software it is integrated into.
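MUI itself is a C++ header-only library, so the snippet below is only a toy Python sketch of the general pattern this kind of interface provides: one solver pushes point samples, commits them at a time stamp, and the peer solver fetches interpolated values at its own points. The class, its methods, and the nearest-neighbour "sampler" are all illustrative assumptions for this post, not the real MUI API.

```python
class CouplingInterface:
    """Toy stand-in for a solver-coupling interface: push point samples,
    commit them at a time stamp, fetch interpolated values back out.
    (Illustrative only; the real MUI is a C++/MPI library with a much
    richer, sampler-based API.)"""

    def __init__(self):
        self._staged = []   # samples pushed but not yet committed
        self._frames = {}   # time stamp -> list of (point, value) samples

    def push(self, point, value):
        self._staged.append((point, value))

    def commit(self, t):
        # publish everything staged so far under time stamp t
        self._frames[t] = self._staged
        self._staged = []

    def fetch(self, point, t):
        # nearest-neighbour "sampler": return the committed value whose
        # sample point lies closest to the requested point
        samples = self._frames[t]
        return min(samples, key=lambda s: abs(s[0] - point))[1]


# One solver pushes a 1-D velocity profile; the other fetches from it.
ifs = CouplingInterface()
for x in (0.0, 0.5, 1.0):
    ifs.push(x, 2.0 * x)          # e.g. a linear velocity profile u = 2x
ifs.commit(t=1)
print(ifs.fetch(0.45, t=1))       # nearest sample is x=0.5, so prints 1.0
```

The point of the pattern is that neither solver needs to know anything about the other's discretisation; each only pushes and fetches at its own points, which is exactly the "non-domain-specific representation" idea mentioned above.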
Currently I am using MUI to enable a tri-scale coupled methodology in which continuum CFD is coupled to Molecular Dynamics (MD), which in turn is coupled to Direct Simulation Monte Carlo (DSMC) and then back to CFD again, with the goal of directly simulating the process of evaporation. This work will first be presented at the ParCFD conference in Antalya, Turkey in a few weeks’ time (http://www.parcfd.org/2019/).
So given all of this, what does the title of this post actually mean? After all, I have just argued that solving multiscale problems must mean coupling.
The thing is, I don’t think that is true. It may be true in the context of most modelling and simulation methods currently available, but ideally it would not be true, and that is what we should be working towards. We currently find problems that require a multiscale approach (a large separation in time scales, length scales, or both between methods) and naturally try to connect up methods that solve the different scales. The problem is, this is never going to be the ideal; it is never preferable, it is simply the best we currently have (and realistically may always be, but one can dream!)
Solving multi-physics problems, where different sets of equations are solved using similar discretisation methods, is relatively easy to justify. When connecting completely dissimilar methods (e.g. MD to CFD), however, there is always the need to derive physical data from the MD results that can actually be passed to the CFD simulation, i.e. to calculate macroscopic properties, and when we do this, fidelity has to be lost. Other, non-direct approaches to multiscale simulation, for example where a macroscopic method is parameterised with the results of an underlying microscopic method, limit this problem and can be more easily justified, but not all situations, especially those involving physical flow phenomena, can be approached this way.
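As a concrete illustration of what "calculating macroscopic properties" involves, here is a minimal Python sketch, with entirely hypothetical numbers and not tied to any particular MD code, that recovers a bulk velocity and a temperature-like quantity (the velocity variance) from per-particle velocities by plain averaging over a sampling cell. The small-sample estimate is noticeably noisy while the large-sample one converges, which is precisely where the loss of fidelity creeps in:

```python
import random
import statistics

random.seed(42)

def sample_cell(n_particles, bulk_u=1.0, thermal_sigma=0.5):
    # Stand-in for the particle velocities in one MD sampling cell:
    # each velocity = bulk drift + Gaussian thermal fluctuation.
    return [random.gauss(bulk_u, thermal_sigma) for _ in range(n_particles)]

def macroscopic(velocities):
    u = statistics.fmean(velocities)               # bulk (macroscopic) velocity
    temp = statistics.pvariance(velocities, mu=u)  # variance about the mean,
    return u, temp                                 # a proxy for temperature

small = macroscopic(sample_cell(50))       # noisy estimate from few particles
large = macroscopic(sample_cell(50_000))   # close to the true (1.0, 0.25)
print(small)
print(large)
```

Averaging like this necessarily discards the individual particle information; the CFD side only ever sees the cell means, which is the fidelity loss described above.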
To summarise, then: multiscale modelling (and its associated coupling methodologies) is progressing, but coupling together methods at different scales is always going to be imperfect. Clever mathematics undoubtedly comes to the rescue somewhat, with methods like Uncertainty Quantification able to at least define what fidelity is lost through a coupling, but the state of what is possible is still embryonic, and it will be very interesting to see how things have moved on in ten years’ time. Personally, I think a key area to understand better is how to recover macroscopic properties from the likes of MD. When using MD directly and calculating physical properties for comparison to experiment, a level of “jitter” and inaccuracy is expected; when transferring this information to a macroscopic CFD solver and using it directly as part of the solution, understanding that jitter and inaccuracy becomes key. In some ways this is more important than the coupled software itself, though of course until coupled solutions exist, clever people cannot concentrate on understanding where the problems lie, so current approaches and research are laying important foundations.