Micro & Nano Flows for Engineering

The micro & nano flows group is a research partnership between the Universities of Warwick and Edinburgh, and Daresbury Laboratory. We investigate gas and liquid flows at the micro and nano scale (where conventional analysis and classical fluid dynamics cannot be applied) using a range of simulation techniques: molecular dynamics, extended hydrodynamics, stochastic modelling, and hybrid multiscaling. Our aim is to predict and understand these flows by developing methods that combine modelling accuracy with computational efficiency.

Targeted applications all depend on the behaviour of interfaces that divide phases, and include: radical cancer treatments that exploit nano-bubble cavitation; the cooling of high-power electronics through evaporative nano-menisci; nanowire membranes for separating oil and water, e.g. for oil spills; and smart nano-structured surfaces for drag reduction and anti-fouling, with applications to low-emissions aerospace, automotive and marine transport.


EPSRC Programme Grant in Nano-Engineered Flow Technologies

Our work is supported by a number of funding sources (see below), including a 5-year EPSRC Programme Grant (2016-2020). This Programme aims to underpin future UK innovation in nano-structured and smart interfaces by delivering a simulation-for-design capability for nano-engineered flow technologies, as well as a better scientific understanding of the critical interfacial fluid dynamics.

We will produce software that a) resolves interfaces down to the molecular scale, and b) spans the scales relevant to the engineering application. As accurate molecular/particle methods are computationally unfeasible at engineering scales, and efficient but conventional fluids models do not capture the important molecular physics, this is a formidable multiscale problem in both time and space. The software we develop will have embedded intelligence that decides dynamically on the correct simulation tools needed at each interface location, for every phase combination, and matches these tools to appropriate computational platforms for maximum efficiency.

This work is strongly supported by nine external partners (see below).

Current Funding

  • “Nano-Engineered Flow Technologies: Simulation for Design across Scale and Phase” EPSRC Programme Grant EP/N016602/1 01/16-12/20 (£3.4M)
  • “The First Open-Source Software for Non-Continuum Flows in Engineering” EPSRC grants EP/K038427/1, K038621/1 and K038664/1 07/13-06/17 (£0.9M)
  • “Multiscale Simulation of Interfacial Dynamics for Breakthrough Nano/Micro-Flow Engineering Applications” ARCHER Leadership Project 11/15-10/17 (£60k in supercomputer computational resource)
  • “Skating on Thin Nanofilms: How Liquid Drops Impact Solids” Leverhulme Research Project Grant 08/16-08/19 (£146k funding a 3-year PDRA)

Partnerships

  • Airbus Group Ltd
  • AkzoNobel
  • Bell Labs
  • European Space Agency
  • Jaguar Land Rover
  • Oxford Biomedical Engineering (BUBBL)
  • TotalSim Ltd
  • Waters Corporation


Latest news and blogs

Dr James Sprittles, University of Warwick

Warwick welcomes Vinay Gupta, who has started a 2-year Commonwealth Rutherford Fellowship in the Mathematics Institute. Vinay's background is in exploiting moment methods to describe gas mixtures and granular gases.


Prof. David R Emerson, Daresbury Laboratory

Arnau Miro from the Universitat Politècnica de Catalunya has won an HPC Europa-2 grant for a 13-week visit to the Daresbury group. Arnau will be working on advanced meshing and code-coupling strategies, and starts his visit in mid-January.


Prof. Duncan Lockerby, University of Warwick

Following an international competition, Jason Reese has been awarded a prestigious Chair in Emerging Technologies by the Royal Academy of Engineering (RAEng). 

These Chairs “identify global research visionaries and provide them with long-term support to lead on developing emerging technology areas with high potential to deliver economic and social benefit to the UK”.

For 10 years from March 2018, Prof Reese will be funded by the RAEng within the Micro & Nano Flows partnership to develop a new platform technology in multi-scale simulation-driven design for industrial innovation and scientific endeavour.

Further info: https://www.raeng.org.uk/news/news-releases/2018/april/academy-funds-global-research-visionaries-to-advan


Dr Stephen M. Longshaw, Senior Computational Scientist, Daresbury Laboratory

Much of the MNF group's research output has been based around our solvers (mdFoam+ and dsmcFoam+), which are written within the OpenFOAM software framework. OpenFOAM is well known and widely acknowledged as a flexible and stable environment in which to develop new solvers; however, it has something of a reputation for scaling badly on big supercomputers, leading people to assume it should only be used when a problem can be tackled on a stand-alone workstation or with just a few nodes of your favourite big HPC system. This blog post discusses the new collated file format introduced in OpenFOAM 5.0 and how it might be the beginning of the end for this mentality.

The question is: where has this perception come from and, more importantly, is it right? If you search for the issue of OpenFOAM scalability on HPC you will find numerous articles and topics. What is interesting, though, is a) how few of them look at massive scalability (most consider running on only a few CPUs), and b) how few recent articles there are.

The question, therefore, is whether OpenFOAM actually does perform badly on HPC systems, or whether this is an out-of-date perception. This is a hard one to answer fully, as OpenFOAM has been around for a good few decades and has a number of different solvers to consider. In theory, each should parallelise as well as the others, since they are all built on top of the same basic libraries; of course, some algorithms work better in parallel than others, and some solvers may not have received the same attention as others. Generally speaking, though, the methods used in OpenFOAM are sound: it employs typical static domain-decomposed, non-blocking MPI in most of its solvers, and allows well-known decomposition libraries such as Scotch to be used to minimise communication overhead. Undoubtedly this could all be optimised further if it were to receive more attention from the HPC community, but are there any other problems blocking this?
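As a concrete reminder of that standard workflow, the decomposition is driven by the case's system/decomposeParDict. A minimal sketch (values are illustrative, chosen to match the 48-node Archer example used later) might look like this:

    // system/decomposeParDict -- minimal sketch; values are illustrative
    FoamFile
    {
        version     2.0;
        format      ascii;
        class       dictionary;
        object      decomposeParDict;
    }

    numberOfSubdomains  1152;    // one subdomain per MPI rank (48 nodes x 24 cores)
    method              scotch;  // graph partitioning to minimise communication overhead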

The MNF group runs many of its simulations on the UK's national HPC service Archer, a Cray XC30 machine run by EPCC, which at the moment provides access to OpenFOAM 4. OpenFOAM arguably has a bad reputation on this system, but the same problems are repeated on many systems, especially those that use a Lustre parallel file system, and the root cause is the way that OpenFOAM creates and deals with its files.

For every MPI process, OpenFOAM creates a new folder and a set of files. In cases where a lot of output is written during a simulation, this can easily mean thousands of files per processor on disk. Archer imposes a hard limit on the number of files each user can create in their storage, and on the number they can have open at any one time; parallel OpenFOAM runs quickly exceed these limits, and without them could have a major impact on the parallel file system for other users. As a result, OpenFOAM has developed a bad reputation. It is worth noting that this approach is an entirely valid, if outdated, way of dealing with I/O when using MPI.
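To make that concrete, the on-disk layout of a decomposed case looks roughly like the sketch below (illustrative only, not the output of any particular run): each MPI rank owns a processorN directory, and every write time adds another full set of field files inside it.

    motorBike/                      (case directory, illustrative)
      processor0/
        constant/polyMesh/          (this rank's portion of the mesh)
        0/                          (initial fields: U, p, k, omega, ...)
        0.1/  0.2/  0.3/ ...        (one new directory of files per write time)
      processor1/
        ...
      processor1151/                (one directory per MPI rank)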

The good news is that, as of OpenFOAM 5.0, this has changed: there is now a new way of writing files to disk, known as the collated file format. The idea is simple: rather than each MPI process creating its own folder, just one set of files is written by the master process, and all other processes transfer their data back via MPI. If you get hold of the latest development version from the OpenFOAM-dev repository, this has been developed further so that you can mark individual MPI processes as "master node" writers, spreading the load and reducing communication overhead, since processes then only need to talk to each other within the same node. So if you were running on 48 nodes of Archer you would have 1152 MPI processes, 24 on each node, and you would end up with 48 sets of files instead of 1152. This is really quite significant: if you assume there are 1000 files per set by the end of a simulation, that is 48,000 files rather than 1,152,000!
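For anyone wanting to try it, the collated handler is selected per run or per installation rather than per solver. The sketch below summarises the options described with the OpenFOAM 5.0 parallel I/O release; the FOAM_IORANKS variable for per-node writers reflects our understanding of the OpenFOAM-dev behaviour, and all names are worth checking against your installed version.

    # Select the collated file handler (OpenFOAM 5.0+):
    #  - per run, as a command-line option understood by solvers and utilities:
    #        simpleFoam -parallel -fileHandler collated
    #  - globally, via the environment:
    export FOAM_FILEHANDLER=collated
    #  - or in the installation's etc/controlDict:
    #        OptimisationSwitches { fileHandler collated; }

    # OpenFOAM-dev only (assumed syntax): nominate one writer rank per node, e.g.
    # 4 nodes of 24 cores -> 4 sets of files instead of 96
    export FOAM_IORANKS='(0 24 48 72)'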

We have done some basic testing and found the new file format to be about 50% faster on Archer for the flow-past-a-motorbike tutorial case, run with simpleFoam on 48 nodes.
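For reference, a run of that size on a Cray XC30 such as Archer is launched with aprun rather than mpirun; an illustrative job-script line (case setup and paths omitted) for 48 nodes with 24 ranks per node would be:

    # 48 nodes x 24 ranks per node = 1152 MPI ranks (-n = total ranks, -N = ranks per node)
    aprun -n 1152 -N 24 simpleFoam -parallel -fileHandler collated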

Of course, the really exciting thing about this development is that the HPC community can now properly get stuck into the challenge of benchmarking OpenFOAM over many more MPI ranks than has previously been attempted, since cases now scale. This will hopefully lead to rapid development of the underlying MPI approach and serve to increase the performance of OpenFOAM across all of its solvers, including the MNF group codes!

Duncan Dockar, PhD Student, University of Edinburgh

Boiling is an important feature in many engineering processes, such as the steam cycle in many power plants. It is also a highly multiscale phenomenon, with boiling bubbles nucleating on nanoscale features of solid substrates and growing to sizes of the order of millimetres. Researchers at the Massachusetts Institute of Technology (MIT) have demonstrated the drastic effects that nanoscale alterations can have on boiling nucleation through the use of surfactants. When a voltage is applied to certain parts of a substrate, the surfactants effectively render those regions hydrophobic and can rapidly induce boiling at very specific locations on the substrate.

The rate at which boiling can be switched on or off is also particularly impressive, although as we all know the ultimate test for how fast scientists can control a process is by syncing it up with classical music… MIT calls this piece “Ode to Bubbles”, enjoy!

Cho, H. J. et al. Turning bubbles on and off during boiling using charged surfactants. Nat. Commun. 6:8599 doi: 10.1038/ncomms9599 (2015)

Livio Gibelli, Research Fellow, University of Warwick

From 5 to 9 July, I took part in the 12th AIMS Conference on Dynamical Systems, Differential Equations and Applications in Taipei.
This conference aims to foster and enhance interactions among mathematicians and scientists in general. It featured 135 special sessions covering a broad range of topics, and keynote lectures were given by renowned mathematicians (A. Buffa, V. Calvez, S. Peng and J. Ball, to cite just a few).

AIMS Conference schedule at a glance

I was particularly interested in two special sessions devoted to kinetic theory: "Models and Numerical Methods in Kinetic Theory" (where I was invited to give a talk) and "Kinetic and Related Equations: Collisions, Mean Field, Organized Motion".
Although kinetic equations have traditionally been applied to rarefied gas dynamics and plasma physics, these special sessions confirmed an emerging trend in kinetic theory: the use of its theoretical framework to study topics in fields apparently far from fluid dynamics, such as the emergence of organized collective behaviour in vehicular traffic, crowds, swarms, social systems and biology. This wide range of new applications, and the benefits these studies can potentially bring to society, has significantly revived interest in kinetic theory.
A detailed description of the sessions, along with the abstracts of the talks presented, can be found on the conference website (http://aimsciences.org/conferences/2018/).

Overall, I was pleased to participate in the conference. The only negative aspect was that Typhoon Maria struck Taipei on the day I was due to fly back home. As a consequence, my flight was delayed and I was forced to spend almost two days confined to my hotel!