Micro & Nano Flows for Engineering
The micro & nano flows group is a research partnership between the Universities of Warwick and Edinburgh, and Daresbury Laboratory. We investigate gas and liquid flows at the micro and nano scale (where conventional analysis and classical fluid dynamics cannot be applied) using a range of simulation techniques: molecular dynamics, extended hydrodynamics, stochastic modelling, and hybrid multiscaling. Our aim is to predict and understand these flows by developing methods that combine modelling accuracy with computational efficiency.
Targeted applications all depend on the behaviour of interfaces that divide phases, and include: radical cancer treatments that exploit nano-bubble cavitation; the cooling of high-power electronics through evaporative nano-menisci; nanowire membranes for separating oil and water, e.g. for oil spills; and smart nano-structured surfaces for drag reduction and anti-fouling, with applications to low-emissions aerospace, automotive and marine transport.
EPSRC Programme Grant in Nano-Engineered Flow Technologies
Our work is supported by a number of funding sources (see below), including a 5-year EPSRC Programme Grant (2016-2020). This Programme aims to underpin future UK innovation in nano-structured and smart interfaces by delivering a simulation-for-design capability for nano-engineered flow technologies, as well as a better scientific understanding of the critical interfacial fluid dynamics.
We will produce software that a) resolves interfaces down to the molecular scale, and b) spans the scales relevant to the engineering application. As accurate molecular/particle methods are computationally unfeasible at engineering scales, and efficient but conventional fluids models do not capture the important molecular physics, this is a formidable multiscale problem in both time and space. The software we develop will have embedded intelligence that decides dynamically on the correct simulation tools needed at each interface location, for every phase combination, and matches these tools to appropriate computational platforms for maximum efficiency.
This work is strongly supported by nine external partners (see below).
- “Nano-Engineered Flow Technologies: Simulation for Design across Scale and Phase” EPSRC Programme Grant EP/N016602/1 (£3.4M)
- “The First Open-Source Software for Non-Continuum Flows in Engineering” EPSRC grants EP/K038427/1, K038621/1, K038664/1, 07/13-06/17 (£0.9M)
- “Multiscale Simulation of Interfacial Dynamics for Breakthrough Nano/Micro-Flow Engineering Applications” ARCHER Leadership Project 11/15-10/17 (£60k in supercomputer computational resource)
- “Skating on Thin Nanofilms: How Liquid Drops Impact Solids” Leverhulme Research Project Grant (£146k funding a 3-year PDRA)
- Airbus Group Ltd
- Bell Labs
- European Space Agency
- Jaguar Land Rover
- National Physical Laboratory
- Oxford Biomedical Engineering (BUBBL)
- TotalSim Ltd
- Waters Corporation
Latest news and blogs
Prof. Duncan Lockerby, University of Warwick
The micro & nano flows group will be co-organising the next International Symposium on Rarefied Gas Dynamics (RGD31). This prestigious and long-running event brings experts together, from across the globe, to discuss the curious behaviour of gas flows at very small scales or at high altitudes (i.e. in rarefied/non-equilibrium conditions). The conference will be held in Glasgow in July 2018. Download the First Announcement, here.
Anirudh Singh Rana and Mykyta Chubynsky have begun postdoctoral positions at Warwick, where they will be based in the Mathematics Institute. Duncan and James are looking forward to working with Anirudh, who will be developing models for evaporation from nano-menisci, and Mykyta, who will incorporate rarefied gas dynamics into free-surface flow phenomena. Anirudh and Mykyta are both keen to take the opportunity to collaborate with researchers from Daresbury and Edinburgh.
Jason Reese was an Invited Lecturer on the Advanced School in "Multiscale Modeling of Flowing Soft Matter and Polymer Systems” in the International Centre for Mechanical Sciences (CISM), Udine, Italy from 25th to 29th July, 2016 (www.cism.it/courses/C1610/). Jason gave 6 lectures on state-of-the-art multiscale methods, ranging from particle and hydrodynamic techniques for rarefied gas dynamics to hybrid methods for water flows in nanostructured filtration membranes.
Dr Stephen M. Longshaw, Research Fellow, Daresbury Laboratory
Much of the research undertaken within the Micro & Nano Flows group relies on software to compute new scientific results. There are plenty of examples of this in the history of this blog, but one area of increasing importance is that of coupling codes together, to solve multi-scale or multi-physics problems with more than one piece of software.
I have talked about coupling in the past, but this time I want to briefly describe the concept of universal coupling. This idea is gaining traction within research communities around the world, as well as with major software vendors. In a nutshell, it is the idea of providing a universal interaction layer, or glue, that can stick together any type of software that solves a scientific problem, making up a larger whole that can solve more complex problems than any of the individual components can on their own.
In the past I mentioned a number of software frameworks for solving multi-scale/multi-physics problems; one example was the MUSCLE2 library (link), which came from the European MAPPER project. The same consortium is also behind the H2020-funded COMPAT project. The interesting thing about large solutions like these, though, is that their use and integration is inherently difficult because of the scale of what they are trying to achieve.
A number of solutions have appeared that aim to solve the problem of universal coupling in a less intrusive way. In the past I mentioned EDF's PLE wrapper, which comes as part of their Code_Saturne CFD software; this uses the concept of data transport at a set of points to allow transfer of data between solutions. The basic premise is that, regardless of the form of a solver (i.e. whether or not it is mesh-based, or whether or not it is a continuum solver), data can always be sampled at specific points, and sampled data can be imparted on another solution from said points. From a software engineering perspective the challenge is not too great; of course, like anything, doing it well is always a big challenge, but precedents for this sort of communication framework are well established. The key challenge is to ensure that any loss of simulation fidelity at the point of coupling is either addressed or at least managed.
The key questions are:
1) How do I sample my solution at a specific point while maintaining the level of accuracy I desire/need? (i.e. is it OK to perform a linear interpolation of surrounding cells/other discrete entities, or is something else required?)
2) How do I consume information stored at a specific point within my solution? (i.e. I know that an external force exists in my simulation domain at point x,y,z because a coupled simulation has told me so, but I have no exact discrete location within my solution that matches this point; is it OK to interpolate a new value from the coupled data, and if so, using what method? If not, how do I overcome this?)
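To make the first question concrete, here is a minimal sketch of sampling a field at an arbitrary point by inverse-distance weighting of surrounding cell-centre values. The function name, data layout and weighting choice are all illustrative assumptions, not part of any particular coupling library; a real coupler might equally use linear or higher-order interpolation.

```python
# Hypothetical sketch: estimate a field value at an arbitrary point from
# surrounding discrete data (e.g. cell centres), using inverse-distance
# weighting. Purely illustrative; not taken from any coupling library.
import math

def sample_at_point(point, cell_centres, cell_values, radius=1.0):
    """Estimate the field value at `point` from nearby (centre, value) data."""
    weights, weighted_sum = 0.0, 0.0
    for centre, value in zip(cell_centres, cell_values):
        d = math.dist(point, centre)
        if d < 1e-12:          # the point coincides with a data point
            return value
        if d <= radius:        # only use data within the search radius
            w = 1.0 / d
            weights += w
            weighted_sum += w * value
    if weights == 0.0:
        raise ValueError("no data within sampling radius")
    return weighted_sum / weights
```

The accuracy question in 1) then becomes a question of which weighting/interpolation scheme is acceptable for the physics being exchanged.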
Generic solutions like PLE take the stance that they provide the coupling mechanism but it is up to the individual software developers using it to define how data is imparted and consumed from the points.
A new solution has recently begun take-up within our group that starts to bridge the likes of MUSCLE2 and PLE: it works in the simple, point-based manner of PLE, but is designed to allow developers to easily add their own data storage/impartment methods, so the library can grow into a useful code base for many different method types. Originally developed within the Applied Mathematics division at Brown University in the USA, it is called the Multiscale Universal Interface (MUI) and is available to download from GitHub. In some ways, what MUI offers is fairly obvious when you take a step back; however, its key strength is that it has been engineered to be both extensible and as lightweight as possible, in that it is a header-only C++ library (which currently provides wrappers for C and Fortran as well). It makes use of MPI for its communications, but does so in a way that won't interfere with existing MPI communications, so multiple MPI applications can use MUI to interact.
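To illustrate the kind of point-cloud interface such a coupler exposes, here is a toy sketch of a push/commit/fetch pattern. To be clear, this is not the real MUI API (MUI is a header-only C++ library); the class, method names and nearest-neighbour fetch below are invented purely for illustration.

```python
# Toy illustration of a point-based coupling interface, in the spirit of a
# push/commit/fetch pattern. NOT the real MUI C++ API; everything here is
# invented for illustration only.
import math

class PointInterface:
    def __init__(self):
        self.staged = []   # samples awaiting commit
        self.frames = {}   # time stamp -> committed samples

    def push(self, name, point, value):
        """Stage a sampled value of field `name` at a spatial point."""
        self.staged.append((name, point, value))

    def commit(self, t):
        """Publish all staged samples under time stamp t (one side of the coupling)."""
        self.frames[t] = self.staged
        self.staged = []

    def fetch(self, name, point, t):
        """Nearest-neighbour estimate of `name` at `point` and time t.
        A real coupler would offer smoother spatial/temporal samplers."""
        candidates = [(math.dist(point, p), v)
                      for n, p, v in self.frames[t] if n == name]
        return min(candidates)[1]
```

One solver pushes sampled values and commits them at each time stamp; the other fetches at its own points, with the interface hiding how (and where) the data was produced.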
The library is currently being tested within our group and should it prove a good way forward we will aim to collaboratively expand its current capabilities with the original research team at Brown, with the results making their way back into the software's repository.
In other news, a snippet of the Micro & Nano Flows group's work was recently on display at the 2016 Emerging Technology (EMiT) conference at the Barcelona Supercomputing Centre in Spain. This conference aims to provide a platform for those using or developing the latest emerging trends in computing, be that software or hardware. We showed off some of the group's cutting-edge work on the IMM method (coupling MD and DSMC), as well as some GPU porting work for our MD code.
Dr Stephen M. Longshaw, Research Fellow, Daresbury Laboratory
On the 30th of June, the Micro & Nano Flows group held its first meeting with industrial partners at the Arden conference centre in Warwick. This SIC review meeting marked the first of many to come as part of the group's recent 5-year £3.4M EPSRC grant (EP/N016602/1), and was designed to allow the industrial partners who had pledged interest in the grant before it started to begin defining their problems and forming collaborations with the MNF group.
The day saw presentations from many members of the MNF group over three sessions, covering where we are now in terms of our work on micro & nano flows for engineering applications, where we are going scientifically, the current state of our software and tools, and our vision for the future, such as how we are going to couple our group's efforts together in a coherent way.
People from many areas of industry attended, with representation from AkzoNobel, the European Space Agency (ESA), Jaguar Land Rover and Nokia Bell Labs, to name a few, each with their own problems at the micro & nano scale.
The idea is that these meetings will be held fairly regularly over each year of the grant. This first meeting saw quite a bit of information presented by the MNF group, but it still involved a good amount of lively debate from the partners present, and really set the tone for an exciting and dynamic set of research partnerships over the next 5 years. The meetings that follow will take on more of a workshop feel, with those from industry encouraged to bring their biggest research challenges with them, so that we can think around the problems as a group and come up with tangible research goals to solve them.
The event was thoroughly enjoyable (helped along by a superb conference dinner served by the Arden conference facility, which began with a tall glass of a certain fruity alcoholic beverage* best enjoyed with tennis while sat on a lawn in the sun), setting a positive and productive tone for the meetings to come.
* Answers as to the name of said drink on a postcard please!
Dr Alex Patronis, Research Fellow, University of Warwick
Use SSHFS to conveniently mount a remote file system on your local machine, all over ssh. You'll be able to perform any operation on the mounted files as if they were stored locally.
Once you've installed SSHFS (available in the Ubuntu repositories, or through osxfuse on OS X), create a mount point (I've created a directory called tinis for this purpose):
mkdir -p ~/mnt/tinis
Now go ahead and mount your remote file system:
sshfs -o allow_other email@example.com:/ ~/mnt/tinis
The allow_other option allows non-root users to have read/write access. You'll now find all of your remote files in ~/mnt/tinis. Copy files to ~/mnt/tinis and they'll be uploaded in the background. Once you're done, unmount using:

fusermount -u ~/mnt/tinis

(or umount ~/mnt/tinis on OS X).
Your large simulations will be easier to manage (bonus: try monitoring ongoing simulations locally with gnuplot).
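As a variation on the gnuplot suggestion, here is a small Python sketch that reads the latest entry from a simulation log sitting on the SSHFS mount. The file path and the two-column (step, residual) layout are assumptions for illustration; adapt them to your own log format.

```python
# Minimal monitoring sketch: report the last entry of a simulation log
# on the SSHFS mount. The log is assumed to contain whitespace-separated
# "step residual" columns; this is an illustrative assumption.

def latest_entry(path):
    """Return (step, residual) from the last non-empty line of `path`."""
    last = None
    with open(path) as f:
        for line in f:
            if line.strip():
                last = line
    if last is None:
        raise ValueError(f"{path} is empty")
    step, residual = last.split()[:2]
    return int(step), float(residual)

# Usage (hypothetical path on the mount):
#   import os
#   print(latest_entry(os.path.expanduser("~/mnt/tinis/run/log.dat")))
```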