Micro & Nano Flows for Engineering

The Micro & Nano Flows group is a research partnership between the Universities of Warwick and Edinburgh, and Daresbury Laboratory. We investigate gas and liquid flows at the micro and nano scale (where conventional analysis and classical fluid dynamics cannot be applied) using a range of simulation techniques: molecular dynamics, extended hydrodynamics, stochastic modelling, and hybrid multiscaling. Our aim is to predict and understand these flows by developing methods that combine modelling accuracy with computational efficiency.

Targeted applications all depend on the behaviour of interfaces that divide phases, and include: radical cancer treatments that exploit nano-bubble cavitation; the cooling of high-power electronics through evaporative nano-menisci; nanowire membranes for separating oil and water, e.g. for oil spills; and smart nano-structured surfaces for drag reduction and anti-fouling, with applications to low-emissions aerospace, automotive and marine transport.


EPSRC Programme Grant in Nano-Engineered Flow Technologies

Our work is supported by a number of funding sources (see below), including a 5-year EPSRC Programme Grant (2016-2020). This Programme aims to underpin future UK innovation in nano-structured and smart interfaces by delivering a simulation-for-design capability for nano-engineered flow technologies, as well as a better scientific understanding of the critical interfacial fluid dynamics.

We will produce software that a) resolves interfaces down to the molecular scale, and b) spans the scales relevant to the engineering application. As accurate molecular/particle methods are computationally infeasible at engineering scales, and efficient conventional fluid models do not capture the important molecular physics, this is a formidable multiscale problem in both time and space. The software we develop will have embedded intelligence that decides dynamically on the correct simulation tools needed at each interface location, for every phase combination, and matches these tools to appropriate computational platforms for maximum efficiency.

This work is strongly supported by nine external partners (see below).

Current Funding

  • “Nano-Engineered Flow Technologies: Simulation for Design across Scale and Phase” EPSRC Programme Grant EP/N016602/1 01/16-12/20 (£3.4M)
  • “The First Open-Source Software for Non-Continuum Flows in Engineering” EPSRC grants: EP/K038427/1, K038621/1, K038664/1, 07/13-06/17 (£0.9M)
  • “Multiscale Simulation of Interfacial Dynamics for Breakthrough Nano/Micro-Flow Engineering Applications” ARCHER Leadership Project 11/15-10/17 (£60k in supercomputer computational resource)
  • “Skating on Thin Nanofilms: How Liquid Drops Impact Solids” Leverhulme Research Project Grant 08/16-08/19 (£146k funding a 3-year PDRA)

Partnerships

  • Airbus Group Ltd
  • AkzoNobel
  • Bell Labs
  • European Space Agency
  • Jaguar Land Rover
  • Oxford Biomedical Engineering (BUBBL)
  • TotalSim Ltd
  • Waters Corporation


Latest news and blogs

Dr James Sprittles, University of Warwick

Warwick welcomes Vinay Gupta, who has started a 2-year Commonwealth Rutherford Fellowship in the Mathematics Institute. Vinay's background is in exploiting moment methods to describe gas mixtures and granular gases.


Prof. David R Emerson, Daresbury Laboratory

Arnau Miro from the Universitat Politècnica de Catalunya has won an HPC Europa-2 grant for a 13-week visit to the Daresbury group. Arnau will be working on advanced meshing and code-coupling strategies, and starts his visit in mid-January.


Prof. Duncan Lockerby, University of Warwick

Following an international competition, Jason Reese has been awarded a prestigious Chair in Emerging Technologies by the Royal Academy of Engineering (RAEng). 

These Chairs “identify global research visionaries and provide them with long-term support to lead on developing emerging technology areas with high potential to deliver economic and social benefit to the UK”.

For 10 years from March 2018, Prof Reese will be funded by the RAEng within the Micro & Nano Flows partnership to develop a new platform technology in multi-scale simulation-driven design for industrial innovation and scientific endeavour.

Further info: https://www.raeng.org.uk/news/news-releases/2018/april/academy-funds-global-research-visionaries-to-advan

Dr James Sprittles, University of Warwick

On 17-18 May, Rohit and I gave invited talks at the inaugural Surface Wettability Effects on Phase Change Phenomena (SWEP) workshop in Brighton. It was organised by Joel De Coninck, our first Visiting Scientist of the Programme, and Marco Marengo, who are both experts in this field; their hope is that the workshop will become a yearly fixture. They opened the workshop by reminding the audience of the incredible effects that wettability can have: adding just one layer of molecules to the top of a surface can completely change the shape of the mm-sized drops that sit on it, which is the equivalent in scale of ants being able to change the shape of mountains (apologies for the poor-quality photo)!

Rohit and I gave the last and first talks, respectively, with Rohit impressing the audience with his work on acoustofluidics whilst I spoke about 3 canonical problems involving kinetic effects in interfacial flows, including work with Mykyta (drop impact), Anirudh (drop evaporation) and Duncan.

There were many interesting presentations on a wide range of phase-change phenomena. I particularly enjoyed Carlo Antonini's talk "License to Freeze", which reviewed methods for controlling ice formation on surfaces (including an inverse Leidenfrost effect, where evaporation occurs from the underlying substrate rather than from the impacting drop, which we could potentially simulate), and Daniel Attinger's talk "What is the Optimum Wettability of a Pool Boiling Heater?", which carefully explained the experimental and theoretical challenges of understanding the subtle interplay between wettability, phase change and heat transfer driven by bubble formation at a (complex) solid surface.

All in all the workshop was very enjoyable and the level of scientific discussion was high (i.e. Rohit and I got grilled!) - I would recommend it to our group members in future years.

Chengxi Zhao, PhD Student, University of Warwick

My current research focuses on hydrodynamic fluctuations in nano-jets. The earliest study of this problem (Moseler M., Science, 2000) found new double-cone rupture profiles caused by thermal fluctuations (molecular motions), which deterministic Navier-Stokes models fail to predict. Our research shows that these fluctuations not only affect the final rupture profiles but also change the wavelengths of the perturbations.

Moreover, I have found that thermal fluctuation effects appear widely in nanoscale flows, especially those with interfaces. I have therefore summarised some previous research in the figure above and listed the literature (with links) below.

(1) Nanojet flows:
[1.1] Moseler M., 2000
[1.2] Eggers J., 2002
[1.3] Hennequin Y., 2006
[1.4] Kang W., 2007
[1.5] Petit J., 2012
(2) Drop coalescence:
[2.1] Aarts D. G. A., 2004
(3) Fluid mixing:
[3.1] Kadau K., 2007
(4) Moving contact lines:
[4.1] Perrin H., 2016
[4.2] Belardinelli D., 2016
[4.3] Davidovitch B., 2005
(5) Bubbles:
[5.1] Gallo M., 2018
(6) Thin films:
[6.1] Grün G., 2005
[6.2] Fetzer R., 2007
[6.3] Diez J. A., 2016

Although the phenomena above are distinct, the mathematical models describing them are all derived from the same equations: the Landau-Lifshitz Navier-Stokes (LLNS) equations. Moreover, particle methods (MD or DSMC) can be employed as numerical experiments to support the new physical models. There are therefore plenty of opportunities for us to employ both mathematical models and particle simulations to study this interesting topic.
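
For readers new to the topic, it may help to see what the LLNS equations look like. As a minimal sketch (the incompressible, isothermal form; the full compressible system also carries a bulk-viscosity contribution and a fluctuating heat flux), in LaTeX notation:

    \rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} \right) = -\nabla p + \mu \nabla^{2}\mathbf{u} + \nabla\cdot\mathbf{S},

    \langle S_{ij}(\mathbf{x},t)\, S_{kl}(\mathbf{x}',t') \rangle = 2 k_{B} T \mu \left( \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} \right) \delta(\mathbf{x}-\mathbf{x}')\, \delta(t-t'),

i.e. the usual Navier-Stokes momentum balance plus the divergence of a Gaussian random stress \mathbf{S} whose covariance is fixed by the fluctuation-dissipation relation. Because the noise amplitude scales with k_{B}T per unit volume, it is negligible at macroscopic scales but becomes comparable to the deterministic terms in nanometre-scale jets, films and menisci, which is exactly the regime discussed above.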


Dr Stephen M. Longshaw, Senior Computational Scientist, Daresbury Laboratory

Much of the MNF group's research output has been based on our solvers (mdFoam+ and dsmcFoam+), which are written within the OpenFOAM software framework. OpenFOAM is well known, and widely acknowledged, as a very flexible and stable environment in which to develop new solvers; however, it has a bit of a reputation for scaling badly on big supercomputers, leaving people to assume it should only be used when your problem can be tackled on a stand-alone workstation or with only a few nodes of your favourite big HPC system. This blog post is about the new collated file format introduced in OpenFOAM 5.0 and how it might be the beginning of the end for this mentality.

The question is: where has this perception come from and, more importantly, is it right? If you search for the issue of OpenFOAM scalability on HPC you will find numerous articles and forum topics; what is interesting, though, is a) how few of them look at massive scalability (most consider running on just a few CPUs) and b) how few of them are recent.

The question, therefore, is whether OpenFOAM actually does perform badly on HPC systems, or whether this is an out-of-date perception. It is a hard one to answer fully, as OpenFOAM has been around for a good few decades and has a number of different solvers to consider. In theory each should parallelise as well as the others, since they are all built on top of the same basic libraries; of course, some algorithms work better in parallel than others, and some solvers may not have received the same attention as others. Generally speaking, though, the methods used in OpenFOAM are sound: it employs typical static domain-decomposed, non-blocking MPI in most of its solvers, and allows well-known decomposition libraries such as Scotch to be used to minimise communication overhead (see the sketch below). Undoubtedly this could all be optimised further if it were to receive a lot of attention from the HPC community, but are there any other problems blocking this?
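
As a concrete (and purely illustrative) example of that decomposition step, a minimal system/decomposeParDict for splitting a case across 1152 ranks with Scotch might look something like the following; the keywords are standard OpenFOAM, but the numbers are hypothetical and chosen only to match the Archer figures used later in this post:

    // system/decomposeParDict -- illustrative sketch only
    FoamFile
    {
        version     2.0;
        format      ascii;
        class       dictionary;
        object      decomposeParDict;
    }

    numberOfSubdomains  1152;    // e.g. 48 nodes x 24 cores per node
    method              scotch;  // graph partitioning to minimise inter-processor faces

Running decomposePar with a dictionary like this produces one processor directory per rank, which is where the file-count problem described below comes from.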

The MNF group runs many of its simulations on Archer, the UK's national HPC service: a Cray XC30 machine run by EPCC, which at the moment provides OpenFOAM 4. Arguably OpenFOAM earned its bad reputation on this system, but the same problems are repeated on many others, especially those that use a Lustre parallel file system, and the culprit is the way that OpenFOAM creates and deals with its files.

For every MPI process, OpenFOAM creates a new folder and a set of files. In cases where a lot of output is written during a simulation, this can easily mean thousands of files per processor on disk. Archer imposes a hard per-user limit on the number of files that can be created in storage, and on the number that can be open at any one time; parallel OpenFOAM runs quickly exceed these limits and, were the limits not there, could have a major impact on the parallel file system for other users. As a result, OpenFOAM has developed a bad reputation. It is worth noting that this approach is an entirely valid, if outdated, way of dealing with I/O when using MPI.
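
To make that concrete, a decomposed case written with the traditional (uncollated) handler ends up looking roughly like this on disk; the case name, time directories and field names here are hypothetical, so treat it as a schematic rather than a real listing:

    motorBike/                  (hypothetical case name)
    ├── processor0/
    │   ├── constant/           (per-rank mesh)
    │   └── 0/ ... 500/         (one directory per written time step)
    │       └── U, p, ...       (one file per field, per time, per rank)
    ├── processor1/
    │   ...
    └── processor1151/

The file count therefore grows as (ranks) x (write times) x (fields), which is exactly what collides with the quotas described above.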

The good news is that, as of OpenFOAM 5.0, this has changed: there is a new way of writing files to disk known as the collated file format. The idea is simple: rather than each MPI process creating its own folder, a single set of files is written by the master process, with all other processes transferring their data back to it via MPI. If you get hold of the latest development version from the OpenFOAM-dev repository, this has been developed further so that individual MPI processes can be marked as "master node" writers, which spreads the load and reduces communication overhead, because processes then only need to talk to others on the same node. So if you were running on 48 nodes of Archer you would have 1152 MPI processes, 24 on each node, but 48 sets of files instead of 1152. This is really quite significant: if you assume 1000 files per set by the end of a simulation, that is 48,000 files rather than 1,152,000! A sketch of how to switch the new format on is given below.
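
As a hedged sketch of how to enable this (the option names are as we understand them for OpenFOAM 5.0; check your installation's documentation, as they may differ between versions), the handler can be set globally in the installation's etc/controlDict:

    // etc/controlDict (or a user/site override) -- illustrative sketch only
    OptimisationSwitches
    {
        fileHandler             collated;
        maxThreadFileBufferSize 2e9;    // bytes available for threaded writing; illustrative value
    }

Alternatively, it can be selected per run with the -fileHandler collated command-line option (or the FOAM_FILEHANDLER environment variable). In the OpenFOAM-dev version with per-node "master" writers, we understand the writer ranks are chosen via the FOAM_IORANKS environment variable, e.g. by listing rank 0 of each 24-core node.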

We have done some basic testing and found the new file format to be about 50% faster on Archer for the flow-past-a-motorbike tutorial case, running simpleFoam on 48 nodes.

Of course, the really exciting thing about this development is that the HPC community can now really get stuck into the challenge of properly benchmarking OpenFOAM over many more MPI ranks than has previously been attempted, now that cases scale. This will hopefully lead to rapid development of the underlying MPI approach, and only serve to increase the performance of OpenFOAM across all of its solvers, including the MNF group's codes!