Universal Coupling (plus the EMiT 2016 EMergIng Technology conference)

Much of the research undertaken within the Micro & Nano Flows group relies on software to compute new scientific results. There are plenty of examples of this within the history of this blog, but one area of increasing importance is that of coupling codes together, so that multi-scale or multi-physics problems can be solved with more than one piece of software.

I have talked about coupling in the past, but this time I wanted to briefly describe the concept of universal coupling. This idea is gaining traction at the moment within research communities around the world, as well as with major software vendors. In a nutshell, it is the idea of providing a universal interaction layer, or glue, that can stick together any type of software that solves a scientific problem, making up a larger whole that can solve more complex problems than any of the individual components can solve on their own.

In the past I mentioned a number of software frameworks for solving multi-scale/multi-physics problems; one example was the MUSCLE2 library (link), which came from the European MAPPER project. The same consortium is also behind the H2020-funded COMPAT project. The interesting thing about large solutions like these, though, is that their use and integration is inherently difficult because of the scale of what they are trying to achieve.

A number of solutions have become apparent that aim to solve the problem of universal coupling in a less intrusive way. In the past I mentioned EDF's PLE wrapper, which comes as part of their Code_Saturne CFD software; this uses the concept of data transport at a set of points to allow the transfer of data between solutions. The basic premise is that, regardless of the form of a solver (i.e. whether or not it is mesh based, or whether or not it is a continuum solver), data can always be sampled at specific points, and sampled data can be imparted on another solution from those points. From a software engineering perspective the challenge is not too great (of course, like anything, doing it well is always a big challenge), but precedents for this sort of communication framework are well established. The key challenge is to ensure that any loss of simulation fidelity at the point of coupling is either addressed or at least managed.

Primarily the key questions are:

1) How do I sample my solution at a specific point while maintaining the level of accuracy I desire/need? (i.e. is it OK to perform a linear interpolation of surrounding cells/other discrete entities, or is something else required?)

2) How do I consume information stored in a data set at a specific point within my solution? (i.e. I know that an external force exists in my simulation domain at point x,y,z because a coupled simulation has told me so, but I have no exact discrete location within my solution that matches this point. Is it OK to interpolate a new value from the coupled data, and if so, using what method? If not, how do I overcome this?)

Generic solutions like PLE take the stance that they provide the coupling mechanism, but it is up to the individual software developers using it to define how data is imparted to, and consumed from, the points.

A new solution has recently begun to see take-up within our group that starts to bridge the likes of MUSCLE2 and PLE: it works in the simple manner of PLE, but is designed to allow developers to easily add their own data storage/impartment methods, so the library can grow into a useful code base for many different method types. Originally developed within the Applied Mathematics division at Brown University in the USA, it is called the Multiscale Universal Interface (MUI) and is available to download from GitHub. In some ways, what MUI offers is fairly obvious when you take a step back; its key strength, however, is that it has been engineered to be both extensible and as light-weight as possible, in that it is a header-only C++ library (which currently provides wrappers for C and Fortran as well). It makes use of MPI for its communications, but does so in a way that won't interfere with existing MPI communications, so multiple MPI applications can use MUI to interact.

The library is currently being tested within our group and, should it prove a good way forward, we will aim to collaboratively expand its current capabilities with the original research team at Brown, with the results making their way back into the software's repository.

In other news, a snippet of the Micro & Nano Flows group's work was recently on display at the 2016 EMerging Technology (EMiT) conference at the Barcelona Supercomputing Centre in Spain. This conference looks to provide a platform for those using or developing the latest emerging trends in computing, be that software or hardware. We showed off some of the group's cutting-edge work on the IMM method (coupling MD and DSMC), as well as some GPU porting work for our MD code.