HDF5 and H5MD

HDF5

HDF5 is a binary file format and a software library for the management of large and complex data sets. Development of the original HDF format began in 1987 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. Today the software library is supported and developed by the not-for-profit HDF Group.

The software library provides high-level APIs written in C, C++, Fortran 90 and Java. HDF5 includes utilities for data slicing, data compression and parallel I/O. Bindings to HDF5 are available for Mathematica, MATLAB, Python and other engineering and scientific software.
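
As an illustration of how such bindings are typically used, below is a minimal sketch with the h5py Python package; the file name and dataset layout are placeholders chosen for this example, not part of any particular application.

    # Minimal h5py sketch: write a compressed dataset, then read back a slice.
    # The file name and dataset layout here are illustrative only.
    import numpy as np
    import h5py

    with h5py.File("example.h5", "w") as f:
        # Chunked, gzip-compressed 2-D dataset of double-precision values
        vel = f.create_dataset("velocity", shape=(1000, 3), dtype="f8",
                               chunks=True, compression="gzip")
        vel[:] = np.random.rand(1000, 3)
        vel.attrs["units"] = "m/s"

    with h5py.File("example.h5", "r") as f:
        # Hyperslab selection: only rows 10-19 are read from disk
        subset = f["velocity"][10:20, :]
        print(subset.shape)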

HDF5 has a long history of applications in CFD and other fields of science.

HDF5 is distributed under the terms of an open source license.

H5MD

H5MD is a file format specification for efficient and portable storage of molecular data. The specification was developed in an attempt to simplify the exchange of data between different analysis and simulation software.

The description of the file format was published in the journal Computer Physics Communications in 2014 [1]. The H5MD specification is currently maintained as an open-source project in a git repository. Software utilities for managing H5MD files are available as C and Python libraries.
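
To give a flavour of the layout the specification describes, here is a rough sketch (again using h5py) of a minimal H5MD-style file holding a time-dependent particle position element. The group names, sizes and numbers below are invented for illustration, and the full specification requires additional metadata, so this is only an indication of the structure rather than a validated writer.

    # Rough sketch of an H5MD-style layout using h5py; group names and sizes
    # are illustrative, and the full specification requires further metadata.
    import numpy as np
    import h5py

    n_frames, n_particles, dim = 10, 100, 3

    with h5py.File("trajectory.h5", "w") as f:
        h5md = f.create_group("h5md")
        h5md.attrs["version"] = [1, 1]

        pos = f.create_group("particles/all/position")
        pos.create_dataset("value", data=np.zeros((n_frames, n_particles, dim)))
        pos.create_dataset("step", data=np.arange(n_frames, dtype="i4"))
        pos.create_dataset("time", data=0.005 * np.arange(n_frames))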

Software packages for integration with H5MD are available for several molecular dynamics programs, including LAMMPS and VMD.

References

[1] de Buyl P, Colberg PH, Höfling F. H5MD: A structured, efficient, and portable file format for molecular data. Computer Physics Communications. 2014 Jun;185(6):1546–53.

What do we mean by "multiscale" and "non-continuum" flows?

Duncan Lockerby and I are leading a “Special Interest Group” (SIG) within the EPSRC-funded UK Fluids Network. The name of this SIG is “Multiscale and Non-Continuum Flows”. 

In practice, this means we are funded to arrange two meetings annually, for the next three years, to bring together UK researchers in multiscale and non-continuum flows with the aim of building links among the community that lead to joint projects in the future. Duncan and I both work on theoretical and numerical fluid dynamics, so we are very aware that in these SIG meetings we need to link with experimentalists and industrialists too.

We had our first SIG meeting in Edinburgh in early May this year, with more than 35 participants from around the UK. However, I don’t want to write a report on that here …

Instead, I found that this SIG meeting gave me the opportunity to think about what we actually mean by “multiscale” and “non-continuum” in the context of flows - and how this may be different from how other fluid dynamicists understand the terms. After all, in order to explain our work to others, we need to agree on and be clear in our own use of language.

So I thought I would jot down in this blog some general ideas from my perspective, which could form the basis for discussion.

In non-continuum flows there are not enough discrete fluid material elements, i.e. particles or molecules, for a fluid element (that is small compared to the scale of the process) to be assumed continuous and indefinitely divisible. While the mathematical models of continuum fluid mechanics may therefore not be applicable, the language of fluid mechanics can still in some cases provide us with a useful shorthand that highlights the key challenges in the physics of these flows. What do I mean by this? For instance, the “enhancement factor” is frequently used to describe the performance of nanotubes in membrane-type applications. It is the ratio of the measured fluid flow rate to the no-slip Hagen-Poiseuille (pipe) theoretical prediction. Nanotubes can show enhancement factors of several hundreds or thousands, which indicates how different the non-continuum water flow in the nanotubes is from the flow in a conventional pipe.
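
For readers less familiar with the enhancement factor, a small sketch of the calculation is below. All the numbers are purely nominal (a made-up nanotube radius, length, pressure drop and bulk water viscosity); only the ratio matters.

    # Illustrative enhancement factor: measured flow rate over the no-slip
    # Hagen-Poiseuille prediction Q_HP = pi * R^4 * dP / (8 * mu * L).
    # All values below are nominal, not taken from any particular experiment.
    import math

    R = 1.0e-9       # nanotube radius (m)
    L = 1.0e-6       # nanotube length (m)
    dP = 1.0e5       # pressure drop (Pa)
    mu = 1.0e-3      # bulk water viscosity (Pa s)

    Q_HP = math.pi * R**4 * dP / (8.0 * mu * L)
    Q_measured = 500.0 * Q_HP          # e.g. a measured flow 500x the prediction
    enhancement = Q_measured / Q_HP
    print(f"Q_HP = {Q_HP:.3e} m^3/s, enhancement factor = {enhancement:.0f}")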

Fluid dynamic models can be modified to include some of the features of non-continuum flow behaviour, but to what extent is this helpful? For water flow in nanotubes, molecular pre-simulations can provide constitutive and boundary data that means some non-continuum features, like slip, can be captured in a fluid dynamic model. But some cannot - for example, the molecular layering that leads to density oscillations close to surfaces at the nanoscale. So we have to be very careful how far we can push a continuum fluid model for a flow application that is strictly non-continuum. (Borg, Reese (2017) MRS Bulletin 42:294-299; Holland, Borg, Lockerby, Reese (2015) Comput. Fluids 115:46-53)

The terms “non-equilibrium” and “non-continuum” often seem to be used interchangeably in the fluid dynamics literature. But I don’t think they are the same thing. In some cases, e.g. molecular dynamics simulations, non-equilibrium simply means the fluid is moving or flowing. In our context, it should more accurately mean that while there may be enough molecules for averaging over fluid elements, they do not collide often enough in flow timescales to ensure local thermodynamic equilibrium. This leads to a breakdown of the conventional linear constitutive relations and no-slip boundary conditions. Non-equilibrium is most often seen in gas flows at the microscale, or in high-speed or high-altitude aerodynamics. (Lockerby, Patronis, Borg, Reese (2015) J. Comput. Phys. 284:261-272)

So what are multiscale flows? “Multiscale” describes an analysis where we identify different models for distinct components of the flow. Turbulence is clearly multiscale in nature, but the scales are inextricable. A multiscale analysis requires some separation in the space and time scales of the effects being modelled, and a consequent simplification of the scale interactions. So, for example, parametric models (e.g. Newton’s law of viscosity, slip conditions) can be regarded as a type of multiscale analysis. Another type is hybrid models: in one small part of the flow domain there are fine flow details that are modelled in one way, and in some separate (larger) region the details are much coarser and modelled differently. (Borg, Lockerby, Reese (2015) J. Fluid Mech. 768:388-414)

If you have a different or complementary perspective on these issues, come along and share your views at the next SIG meeting at the end of September in Warwick University. You are very welcome! (Please email either Duncan or me for more details.)

Ultrasonic dryers could shake your clothes dry

Researchers at the Oak Ridge National Laboratory, USA, have developed a clothes dryer that uses ultrasound transducers to drive water out of clothes. Conventional dryers simply heat the wet clothes to evaporate the water, a method which has remained largely unchanged for decades. The prototype ultrasound dryers can dry laundry in around half the time of conventional dryers, with an estimated 70% improvement in efficiency. The ultrasound method has also been found to reduce the risk of clothes shrinkage, colour fading and lint build-up.

The technology has obvious implications for our own research here at the Micro and Nano Flows for Engineering group, where the unique fluid behaviour at decreasing length scales could lead to novel and often unintuitive engineering solutions.

Read more here

Workshop: Recent developments in the Kinetic theory of gases

Last week, it was our immense pleasure to host Prof. Henning Struchtrup from University of Victoria (Canada). Henning stayed with us for the whole week sharing his recent research endeavours, stimulating discussions and wisdom.

On July 27th, we hosted a small workshop (Recent developments in the Kinetic theory of gases), which was designed to bring together members of the group (in Warwick and nearby) to explore recent trends and developments in kinetic theory of gases and related areas such as:
(a) numerical methods for the Boltzmann equation
(b) approximation methods in kinetic theory
(c) model reduction and coarse-graining approaches in statistical mechanics
(d) dynamics and phase transitions in liquids.

What actually led to this workshop was a long and very interesting discussion on ResearchGate (the Facebook of academics) between Prof. Gorban, Dr Lei Wu and others, myself included. We proposed to organise this workshop in order to continue that discussion in a more orderly manner. We saw eight talks during the workshop, including four from members of MNF@Warwick about their current work.

There were also four invited talks from: 

  1. Prof. Henning Struchtrup (Macroscopic Modelling of Rarefied and Vacuum Gas Flows)
  2. Prof. Alexander Gorban (Hydrodynamic manifolds for kinetic equations)
  3. Dr Lei Wu (Efficient numerical methods for gas kinetic equations and their applications)
  4. Prof David Emerson (Successes Using the Method of Moments for Rarefied Phenomena and its Future Development).

We summed up the day with a dinner at The Queen & Castle (Kenilworth) in the company of the speakers.

 

Practical Multi-Scale Code Coupling?

This month I thought I would use this blog space to dump some thoughts and musings on the practical aspects of multi-scale coupling that have come to light the more I talk with people from various scientific disciplines looking to achieve this in some way or other!

Clearly, a group like this one has a fundamental interest in problems at different scales. Arguably, this means a fundamental interest in code coupling as it is unlikely a single software framework or computational method will capture physics at very different length- and time-scales. The same can be said for many other areas of science, not just engineering. So we know that multi-scale, coupled simulation is important, what about when we actually try to do it?

I have talked on this blog in the past about some coupling software approaches, one in particular being the Multiscale Universal Interface (https://doi.org/10.1016/j.jcp.2015.05.004), or MUI. This uses MPI to transfer data between solvers in order to enable code coupling, and provides an extensible framework in which to build spatial or temporal interpolation schemes that allow data sampling between dissimilar methods. Other frameworks exist (e.g. OpenPALM, MUSCLE-2, CPL) that provide similar functionality. The interesting point, though, is what multi-scale coupling actually means.

When people think of coupling existing solvers, they tend to initially imagine some sort of domain-decomposed solution: two solvers operate on their own domains independently (note: these could be fully overlapped, or adjacent with an overlapping region) and then each domain transfers data to the other. Those starting with the raw mathematics often look instead to remove the separation between methods where they can, in order to simplify the problem (as a good mathematician should!). There is plenty of literature out there on how these couplings can be classified: tight, loose, monolithic, etc.
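
To make the domain-decomposed picture concrete, here is a deliberately toy sketch of the exchange pattern, with both "solvers" faked as simple update rules. It is not the API of MUI or any other framework mentioned here, just the shape of the loop such frameworks support.

    # Toy sketch of domain-decomposed coupling: two fake "solvers" share an
    # interface and exchange boundary values every coupling step.
    # This is not any framework's real API, only the shape of the loop.
    import numpy as np

    coarse = np.zeros(50)    # e.g. a continuum field
    fine = np.zeros(20)      # e.g. a finely resolved sub-region
    coarse[0] = 1.0          # a fixed boundary condition on the coarse domain

    for step in range(200):
        # Each solver advances its own domain independently
        # (here: a diffusion-like smoothing step)
        coarse[1:-1] = 0.5 * (coarse[:-2] + coarse[2:])
        fine[1:-1] = 0.5 * (fine[:-2] + fine[2:])

        # Exchange at the interface: the fine domain takes its boundary from
        # the coarse field, and feeds its interior state back to the overlap cell
        fine[0] = coarse[-2]
        coarse[-1] = fine[1]

    print(coarse[-1], fine[0])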

The interesting thing for me, though, is that when we talk about multi-scale coupling the first option is usually the most likely one: when approaching the task of coupling molecular dynamics to complex 3-D computational fluid dynamics, does one really want to write a single solver that does full CFD and MD? No, one does not! Clearly there are no hard and fast rules, but more often than not, complex multi-scale problems seem to fit this pattern.

So we know that we are likely coupling two existing solvers together, we know that we can use coupling software like MUI to glue them together, and so we know that we have at least two separate domains to deal with. This is where a problem comes in when dealing with practical engineering applications, and it leads to the question that needs answering (and won't be answered in this blog, because it's an open question):

"If we have two seperate domains at length scales so different it is considered multi-scale, how can we reasonably couple them in any physical sense? Indeed, should we even be realistically trying to achieve this?"

There is a caveat to this statement: the situation where one end of the coupled length-scale spectrum only aims to provide (or receive) a single answer from the other. For example, if we wish to define the rheology of a liquid in a macroscopic CFD simulation using MD, then this could be simplified to saying that we wish to use MD to define a parameter for whatever viscosity formulation the CFD uses. Clearly there is huge complexity in doing this, but it is tractable because the MD simulation is not trying to recreate a physical part of the CFD domain; a typical MD "periodic box" scenario may well be enough to derive a single macroscopic value.
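
A minimal sketch of that kind of one-way, parameter-passing coupling is given below: synthetic "MD" shear data are fitted with Newton's law of viscosity and the resulting single value is what would be handed to the CFD solver. The data and numbers are invented purely for illustration.

    # Sketch of parameter-passing coupling: fit a viscosity from (synthetic)
    # MD shear data and hand the single value to a CFD solver's material model.
    # The "MD results" here are invented purely for illustration.
    import numpy as np

    shear_rates = np.array([1e8, 2e8, 4e8, 8e8])                        # 1/s
    shear_stress = 8.9e-4 * shear_rates + np.random.normal(0, 5e3, 4)   # Pa

    # Newton's law of viscosity: tau = mu * gamma_dot, so mu is the slope
    mu_fit, _ = np.polyfit(shear_rates, shear_stress, 1)

    print(f"viscosity passed to the CFD model: {mu_fit:.3e} Pa s")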

I'm not going to provide an answer to the question here as I think it's an open question but just to highlight what I mean. This group has done some really cool work over the past 5 or so years on coupled simulation of difficult micro and nano scale problems where typical CFD solvers would simply fail because Navier-Stokes doesn't capture the physics correctly, such as flow of water through a carbon nanotube. In these we have augmented the computational domain of an intentionally simple CFD solver with either MD, or direct simulation Monte Carlo (DSMC) sub-domains.

There are plenty of references available through the publications section of this site, but this has meant that simulations of flow through high-aspect-ratio nano- and micro-channels, which would normally require weeks of MD computation, have been tackled in hours or days. However, this is a very specific case and, arguably, isn't multi-scale at all, as both domains are of similar length scales.

In a nutshell: what do we do when we want to simulate a portion of a truly macroscopic domain (i.e. of the order of metres, or even centimetres or millimetres) using a method appropriate to the nano-scale, we don't want to have died of old age by the end of the calculations, and we want to re-use existing solvers? Answers on a postcard please!

Enhanced liquid slippage over gas nanofilms

Our research article has recently been accepted for publication in Physical Review Fluids. Our main results show that shear flow of water over thin gas nanofilms entrapped on a surface produces larger-than-expected local slip, with the Knudsen number of the gas playing a significant role. Figure 1 shows a snapshot of the molecular dynamics simulation which we set up to measure the hydrodynamic slip length ‘b’ of water flowing over nitrogen gas, with slip defined at the reference y = 0. Figure 2 shows our main results: we plot values for the original theoretical gas cushion model (GCM), which does not include any rarefaction effects, our molecular dynamics results (symbols), and the proposed theoretical model from kinetic theory, which we have called the rarefied gas cushion model (r-GCM). Insight from these results could help design future self-cleaning surfaces or drag-reducing/anti-fouling marine coatings.
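
For context, the slip length 'b' in such measurements is usually defined through the Navier slip condition, u_slip = b (du/dy) at the wall, so b follows directly from a fitted near-wall velocity profile. The toy snippet below shows this; the linear "profile" is synthetic and not data from the paper.

    # Navier slip: u_wall = b * (du/dy) at y = 0, so b = u_wall / shear rate.
    # The linear "velocity profile" below is synthetic, not data from the paper.
    import numpy as np

    y = np.linspace(0.0, 5e-9, 20)        # distance from the wall (m)
    u = 2.0 + 1e9 * y                     # toy near-wall velocity profile (m/s)

    dudy, u_wall = np.polyfit(y, u, 1)    # fitted shear rate and wall velocity
    b = u_wall / dudy
    print(f"slip length b = {b*1e9:.2f} nm")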

 

 

Capillary breakup of armored liquid filaments

Here is an article on the capillary breakup of armored liquid filaments, which are liquid columns in which superhydrophobic particles reside on the liquid-air interface rather than in the bulk of the filament. The authors (Zou, Lin and Ji) conducted experiments using a high-speed camera to analyse the effects of the interfacial powder coverage on the filament breakup dynamics for a variety of powder sizes, and how this relates to a control experiment using 'pure liquid', for which there are established power laws governing the minimum filament radius at times before pinch-off.

It was found that the thinning process for the filament can be split into three stages: (i) the armored liquid stage, (ii) the transition stage, and (iii) the liquid stage. The bulk of the work is on the dynamics of the armored liquid stage, in which the filament thins uniformly with an increased effective surface tension owing to the presence of the powder, and so maintains a larger minimum filament radius than its powder-less counterpart. The authors have found their own scaling law that governs the minimum filament radius during this stage and have established a model that approximates the experiments well, based on well-reasoned assumptions about the geometry of the system. When the minimum filament radius approaches the order of the average particle radius, the transition stage begins as the particles cause local deformation of the interface, increasing curvature and accelerating the thinning process. This continues until a time at which thinning can be modelled using the power laws for 'pure liquids' in the final stages before capillary breakup, known as the liquid stage.

Successful ARCHER RAP project grant

Benzi John (PI) and David Emerson (Co-I) have successfully won about 32,000 kAU (~£18,000) of funding from EPSRC to run large-scale simulations on ARCHER. The project, titled "High fidelity non-equilibrium DSMC flow simulations at scale using SPARTA", will run for 12 months, starting from 1 August 2017.

ARCHER is the UK's national supercomputing service, based around a Cray XC30 supercomputer. EPSRC offers access to ARCHER through calls for proposals to the Resource Allocation Panel (RAP), through which users can request significant amounts of computing resource. The main aim of our project is to investigate the potential for carrying out large-scale rarefied gaseous flow simulations using SPARTA. SPARTA is an exascale-capable, open-source code recently developed at Sandia National Laboratories, designed to work efficiently on massively parallel computers. Over the course of this project, we intend to carry out selected large-scale DSMC simulations related to (a) supersonic gas flow dynamics in the low-pressure regions of a mass spectrometer, (b) droplet evaporation, and (c) the aerodynamics of flow past bluff bodies such as a cylinder. We hope that the HPC resources available on ARCHER, together with the code's scaling capabilities, will enable us to carry out the necessary testing, benchmarking and high-fidelity parallel simulations.

Marangoni flow of Binary Mixtures on a Liquid Layer

Here is an interesting article about Marangoni bursting, and here is the link to the YouTube video. In this work, researchers focused on the instability observed when a two-component drop of water and volatile alcohol was deposited on a bath of sunflower oil. The drop temporarily spread and then spontaneously broke up into thousands of tiny droplets. The researchers also demonstrated that Marangoni flows induced by the evaporation of the alcohol played a key role in the overall phenomenon.

Transport of rarefied gas in shales or tight reservoirs: Modeling using macroscopic moment-based equations

The world's ever-increasing energy demand represents a significant challenge for suppliers. For the oil and gas sector, responding to this demand has entailed the exploitation of so-called unconventional resources. Extracting hydrocarbons from these reservoirs may require the use of high-end, expensive technology. The Alberta Energy Regulator (AER) in Alberta, Canada, writes that "unconventional oil and natural gas—shale gas in particular—has been called the future of gas supply in North America. (...) It has tremendous economic potential and we know that the interest in these considerable resources will increase".

Natural gas from shales or tight reservoirs is among the unconventional energy sources being targeted by the industry. According to a report from the U.S. Energy Information Administration published in 2014, natural gas production from shales increased from 4% of total gas production in 2005 to 40% in 2012. This report also indicates that shale gas production is expected to rise to 53% by 2040. Shales commonly consist of mineral (inorganic) regions with micrometer pore sizes surrounding regions of solid organic porous material with pore sizes in the nanometer range. Gas appears either free in the pore space or adsorbed on the walls of the organic matrix. Typical conditions in shales are such that the mean free path of the gas molecules is also in the nanometer range. Therefore, inside the organic porous material, the gas Knudsen number can be in the range of 0.01 to 10, signalling the presence of rarefied effects such as slippage and molecular collisions with the pore walls.
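
As a rough illustration of why those Knudsen numbers arise, the snippet below estimates the mean free path from kinetic theory, lambda = k_B*T / (sqrt(2)*pi*d^2*p), for a methane-like molecule at assumed reservoir conditions, and compares it with a nominal pore size. The molecular diameter, temperature, pressures and pore size are indicative values only.

    # Rough estimate of the Knudsen number Kn = lambda / L_pore in a nanopore,
    # with lambda = k_B * T / (sqrt(2) * pi * d^2 * p) from kinetic theory.
    # The molecular diameter, temperature, pressures and pore size are nominal.
    import math

    k_B = 1.380649e-23    # Boltzmann constant (J/K)
    d = 3.8e-10           # kinetic diameter of a methane-like molecule (m)
    T = 350.0             # temperature (K)
    L_pore = 10e-9        # nominal pore size (m)

    for p in (2.0e7, 1.0e6):   # a high and a low pore pressure (Pa)
        lam = k_B * T / (math.sqrt(2.0) * math.pi * d**2 * p)
        print(f"p = {p:.1e} Pa: mean free path = {lam*1e9:.2f} nm, "
              f"Kn = {lam / L_pore:.2f}")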

For the purpose of project decision-making and reservoir management, bounding the uncertainty associated with reservoir modelling is crucial, especially in times of relatively low prices, such as the present ones. More fundamental investigations model the transport processes at the pore scale using, for instance, the lattice-Boltzmann method or molecular dynamics simulations, in many cases employing a computational domain constructed from detailed images of the actual porous material. From the point of view of reservoir engineering and simulation, however, the most practical approach is the development of macroscopic models for the apparent permeability of the porous media.

With regard to macro-scale modelling, perhaps the approach that has become the most popular is based on the simple superposition of the Knudsen diffusion model for the rarefied effects and the Poiseuille model for viscous flow in conduits. This renders the so-called “Dusty Gas Model”.  Another approach is to model rarefied effects by adding to the Poiseuille-flow term a term accounting for gas-wall interactions by means of the tangential momentum accommodation coefficient (TMAC).
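
One common way this superposition can be written for a single cylindrical pore of radius r is an apparent permeability k_app = r^2/8 + mu*D_K/p, with the Knudsen diffusivity D_K = (2r/3)*sqrt(8RT/(pi*M)). The sketch below evaluates it for indicative numbers only, simply to show that the rarefied term can become comparable to the viscous one as pressure drops; it is not a calibrated shale model.

    # Single-pore illustration of the viscous + Knudsen superposition:
    #   k_app = r^2/8 + mu * D_K / p,  with  D_K = (2r/3) * sqrt(8RT/(pi*M)).
    # The pore radius, gas properties and pressure below are indicative only.
    import math

    r = 5.0e-9       # pore radius (m)
    mu = 1.2e-5      # gas viscosity (Pa s)
    T = 350.0        # temperature (K)
    M = 0.016        # molar mass of methane (kg/mol)
    R_gas = 8.314    # gas constant (J/mol/K)
    p = 5.0e6        # pore pressure (Pa)

    D_K = (2.0 * r / 3.0) * math.sqrt(8.0 * R_gas * T / (math.pi * M))
    k_viscous = r**2 / 8.0
    k_app = k_viscous + mu * D_K / p

    print(f"viscous term = {k_viscous:.2e} m^2, "
          f"Knudsen term = {mu * D_K / p:.2e} m^2, k_app = {k_app:.2e} m^2")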

Even though continuum models for rarefied gas dynamics based on moments, such as Grad's method or the regularized 13-moment (R13) equations, have been applied to study gas transport in channels and circular conduits, it has come as a surprise that almost nothing has been done to apply these results to modelling shale gas transport. We found only the very recent work of Kazemi and Takbiri-Borujeni from 2015, published in the International Journal of Coal Geology and entitled "An analytical model for shale gas permeability".

In this work, they constructed a model for the apparent permeability of shales by using the expression for the flux of rarefied gas in a channel obtained by Taheri, Torrilhon and Struchtrup (2009) from the R13 equations. Kazemi and Takbiri-Borujeni compared predictions from their model with published data of apparent permeability versus pressure from core-plug experiments performed on a Marcellus shale sample. The comparison shows very good agreement.

Considering that they used the model for flow of rarefied gas in a two-dimensional channel, we certainly believe that there is room for improvement within this framework by adopting analytical results from spatial configurations that better resemble the actual pore-to-pore conduits.

Water desalination using a graphene-oxide membrane

Scientists from the University of Manchester have successfully developed a graphene-oxide laminate membrane capable of removing up to 97% of NaCl ions from salt water. Their findings, published in Nature Nanotechnology, demonstrate a method for modifying the interlayer spacing between graphene-oxide sheets for “tunable ion sieving”.

Their research also corresponds well with our own work at the Micro & Nano Flows for Engineering Group. Dr Matthew Borg and Professor Jason Reese have recently published an article in the MRS Bulletin giving an overview of their work on multiscale modelling of water transport through high-aspect-ratio carbon nanotubes. Both articles highlight some important applications of micro- and nanoscale fluid research, namely providing fresh drinking water for water-scarce areas of the world.

Read more here.

Water transport through nanotubes using multiscale simulation

Our latest research article has now appeared in the special issue of the MRS Bulletin, and can be downloaded from here. This article reviews some of the sequential and concurrent multiscale methods we have developed over the past couple of years for dealing with water transport through high-aspect-ratio nanotubes embedded in membranes. Our results demonstrate that (a) multiscale methods can actually be applied to far-reaching engineering problems (rather than just tested on simple canonical problems such as Couette flow), and (b) these methods offer a unique and economical computational solution that is now being used to shed light on typically conflicting experimental results for flows through aligned nanotube membranes.

One of our lucky simulations - water transport through a 1 nm diameter carbon nanotube - has also made it to the front cover of the issue (see above).   

Unconventional applications of evaporation

  • Energy extracted from evaporation can be used for propulsion: link.
  • Evaporation can affect the response of a human eye to thermal disturbances: link.
  • Evaporation can affect the performance of bioreactors: link.

Engaging with the public

It's always intriguing to see how other people view your research. According to the University of Warwick's graphics team, this is what I do:

The GIF shows that when a liquid spreads over a solid it must displace a microscopic air film whose height is comparable to the mean free path in the gas.  Incorporating this physics into a dynamic wetting framework, by solving the Boltzmann equation, is the subject of my recent article in Physical Review Letters.

Along with the Letter, Warwick put out a short Press Release whose success can be tracked using Altmetric, a citation score for press coverage.  I've also just written a short piece for The Conversation.

Writing these articles aimed at the public has been challenging as a lot of rigour/citation/humbleness is removed by the Editors, but I hope it will at least attract some attention to our Group's work in multiscale and multiphysics fluid dynamics.

Shrinking instabilities in toroidal droplets

Here is an interesting article (and video) on the shrinking instability of toroidal droplets. Toroidal droplets are inherently unstable due to surface tension and seek to minimise their surface area for a given volume: i.e. they want to transform into spherical droplets. Using PIV, the authors experimentally determine the internal flow field as the droplets shrink, and observe that the cross-sections deviate significantly from circular during the process, flattening in the inside regions of the torus. By measuring the experimental velocities at the droplet boundary, which have both tangential and radial components, these observations are then accounted for by theoretically solving the Stokes equations using the stream function in toroidal coordinates.

Freezing droplets with superhydrophobic powder

Check out this video showing how liquid droplets, upon reaching some critical velocity, can have their shape 'frozen' when impacting a solid surface coated with a superhydrophobic powder. These experiments were carried out by J. Marston et al. at the King Abdullah University of Science and Technology, who have written multiple articles discussing the results of their experiments.

TOP500

Here is an interesting article on the most prominent trends in the world of high-performance computing over the last year, discussing topics from machine learning to new processors to exascale.
https://www.top500.org/news/hpc-in-2016-hits-and-misses/

And here is the latest TOP500 supercomputer list published in November 2016, for those not already aware of it.
https://www.top500.org/lists/2016/11/

This list ranks the 500 most powerful computer systems in the world; the first list goes back to 1993. TOP500 publishes an updated list of supercomputers twice a year - the first in summer during the International Supercomputing Conference (ISC) in Germany, and the second in November at the Supercomputing Conference (SC).

Liquid droplets for reducing wear

Check out the video demonstrating the use of liquid droplets to avoid the wear and material failure typically associated with systems that rely on solid contacts. Please follow the link below for more information.
http://news.mit.edu/2016/movable-microplatform-floats-sea-droplets-1216

For astronauts needing that extra Buzz

A unique application of capillary flow allows workers aboard the International Space Station (ISS) to sip their coffee as easily as they would back on Earth. Designed by NASA astronaut Don Pettit, the cup makes use of a narrow, angled channel to draw liquid up by capillary action. This wetting effect is also unhampered by gravity on the ISS, allowing for a satisfying and uninterrupted drink.

 

Although it seems like just a bit of fun, the technology is already in use for fluid transport in weightless environments, such as in fuel tanks for rockets, micro-gravity condensing heat exchangers and multiphase fluid control. The unique behaviour of fluids in microgravity is especially relevant to our own research at the Micro & Nano Flows group and hopefully in time we will see many more interesting devices driven by unconventional fluid effects.

 

For those interested, a replica cup is available if you wish to drink like an astronaut, although with the addition of a stable, flat base for when gravity gets you down.

Read more here

Can we improve the peer review system?

This interesting article by Prof Raymond Goldstein (Cambridge) explains some common impediments encountered in peer-reviewed publication, and suggests a possible solution.

To read this article, please click here.

Figure source: http://physicstoday.scitation.org/doi/full/10.1063/PT.3.3378; Nick D. Kim, scienceandink.com

Micro & Nano Flows December 2016 Workshop

I thought I would just put up a quick post about the recent December workshop hosted and run by the Micro & Nano Flows group. This was held in Ripon, North Yorkshire (chosen as a central location for all of the members of the MNF group) on the 12th and 13th of December.

This meeting had multiple purposes and took place over 2 (very) full days! The first day was an EPSRC Creativity@Home day, which was designed to bring together all of the members of the MNF group and to encourage team working/building/thinking. It's easy to dismiss this sort of thing as laughable or useless but in reality companies spend huge amounts because it actually works! The MNF group didn't have a corporate budget but nonetheless a genuinely useful day was arranged by the team from Warwick University.

We started with a team-based exercise looking at how to create a successful bid for academic funding. This was done in small teams of 3 or 4 and spanned three hour-long sessions, punctuated with appropriate lectures from experienced members of the group. The first hour was spent coming up with good ideas, the second fleshing them out, and the third preparing a 5-minute presentation. The general standard was fantastic, with a real range of projects proposed. To make things interesting, two winners were selected and they will go on to write a full proposal in the New Year to bid for actual funding! For those early in their research career, the event provided genuine insight into what makes a good proposal and, more importantly, how a little flair when presenting can make all the difference.

Later in the day we all attended a local events company based at a farm! I was sceptical... but in fact it was a fantastic event. Split into 4 teams, we each did 6 tasks that required varying levels of teamwork but were still fun: everything from racing two tractors along a course to herding sheep around a field (without a dog!). It was genuinely fun and got everybody talking and working together.

The second day was a more traditional conference, named "Multiscale Fluid Dynamics: Simulation, Experiments and Applications", which saw 21 presentations given over 3 sessions. These were mostly from members of the MNF group about their current work, but there were also a few invited talks from relevant outside groups. This is something that the MNF group traditionally does at the end of each year and is always a great way to get an insight into the work done across the group. We were also lucky to be joined by a number of great keynote speakers, including Prof. Yonghao Zhang from the University of Strathclyde, who talked about "Modelling gas transport in shales", and the first of our visiting scientists, Prof. Joël De Coninck from the Université de Mons, who gave a really great talk on "Heat transfer and wettability".

As is traditional, we finished up with a great Christmas meal, for some it was their first experience of such a thing in the UK, hopefully it was a good one!

Events like this are imperative to keeping everybody within the MNF group talking and not locked away in their office and this was probably the most successful that I have attended in my short time as part of the group. Bring on 2018!

Bright future for research on nanoflows in porous media

Current oil prices may be high enough to trigger a re-birth of shale gas production. This also may have a positive impact on research funding for modeling and simulation of gas flow in nanopores.  In shales, the typical pore size is in the order of 10 nanometers. Read this link.

http://www.oilandgaspress.com/u-s-shale-is-now-cash-flow-neutral/

 

My first three months at Warwick

So it's been three months since I arrived in the UK and started my work at Warwick. The first week or so was spent choosing the first project to work on. While the grant I am paid from is, formally at least, for studying drop impact on a solid surface, it makes sense, especially for a newbie like myself, to start with something simpler that still includes the same ingredients: a free-surface liquid flow, a non-equilibrium gas flow and their interactions. So after a few discussions with James and Duncan, I decided to start with collisions between drops, still a remarkably rich problem with some open questions.

Next I had to choose a computational tool to use. As always, you can write your own code, which gives you maximum flexibility but takes time. Or you can use something written by others, which gives you an easy start, but you may well encounter serious obstacles later on. As I was eager to start quickly, I decided that I would rather try the latter possibility and settled on FreeFem++, finite element software written by F. Hecht from Paris. It has a built-in mesh generator with the possibility of anisotropic mesh adaptation, several linear algebra solvers, and you just need to specify your PDE in the weak form -- FreeFem++ generates the corresponding set of finite-element equations, solves it and produces graphical output for you. So a simple problem like droplet oscillation takes just a dozen lines or so in the built-in scripting language.

Of course, once you do something more complicated, it becomes increasingly hard to use, so, unsurprisingly, by now my code is much longer than that and I am likely to decide to switch to something else eventually. But I don't regret starting with FreeFem++ -- it has been a good learning experience.

Being a novice, I also need to learn more about my research area. So I appreciated the opportunity to attend a workshop on drop impact at Imperial College where James gave a talk. There was a nice variety of talks (experimental, theoretical and computational) on both the basic drop impact problem and its complications (e.g., impact on soft, porous and moving surfaces), as well as other related problems (water entry by a solid, dynamic wetting, drop evaporation). It was interesting to meet Jie Li from Cambridge who has recently developed a computational model of drop collision -- hopefully, we will be able to improve and extend his study.

There have been some learning opportunities at Warwick as well. I particularly appreciate the number of interesting seminars here -- I usually attend three or four a week, sometimes more, in maths, physics and engineering. We have also started weekly group meetings. I am formally in charge, although this does not involve much more than sending reminders to everybody a couple of days in advance. Many new people have joined the group recently and so the goal for this term is to get to know each other. With this in mind we take turns giving informal talks (sometimes very informal!) about our past experiences and current projects. The discussions are lively and a great way to learn, exercise our brains and have fun.

It's not all just work here, of course. Our trip to Wales has already been described in this blog by Dave. As for me personally, I have been able to indulge in my favourite pastime of orienteering, taking part in races in the streets and alleys of Oxford and the town of Warwick, sand dunes of Anglesey and Cumbria, and a Lake District fell.

Water in carbon nanotubes

Three months and counting....

It has been over three months since I became part of the Mathematics Institute at the University of Warwick and this Micro Nano Flows for Engineering group -- what a pleasant learning experience it has been.
I had the opportunity to visit Nokia Bell Labs in Dublin (with a legacy of eight Nobel prizes) with Dr. James Sprittles and Prof. Duncan Lockerby, to explore the state-of-the-art physical understanding of liquid-vapour phase transitions. It was a great educational experience to learn how various surface structures, such as nanopores, can be used to manipulate these processes to improve heat transfer performance. Thanks to our host Dr. Ryan Enright at Nokia Bell Labs (and of course Duncan and James), I also got to explore some nice steakhouses, restaurants and pubs in Dublin under the stars.
The visit from Prof. David Emerson of Daresbury Laboratory was inspiring.
Last week, it was our pleasure to host Prof. Manuel Torrilhon from RWTH Aachen in Germany for two days (16-17 November) at the University of Warwick. On the 16th, after some stimulating discussions and a brainstorming session with our group, Prof. Torrilhon shared his point of view on "Modelling of Non-Equilibrium Gas Flows Based on Moment Equations". Below is a picture of Manuel working on fundamental solutions:

SuperComputing 2016 Day 4 (Thursday)

Well, this is going to be my last entry from SuperComputing 2016. There are a number of technical sessions happening tomorrow morning but this evening officially marks the end of the conference itself and I'm flying home early in the afternoon! 

As I went through the exhibitors' hall a little while ago to find a quiet spot to write this up, it was odd: everything that took nearly 2 days to set up on Sunday and Monday was nearly gone, and even the carpet that somehow got laid across the entire 55,000 sq ft hall in under 24 hours was mostly rolled back up again. Amazing really.

Anyway, today there were a number of really good technical sessions. This morning saw Preeti Malakar of Argonne National Lab talking about their software library/layer called FOCAL. Effectively, they are interested in optimising the transfer of data in situations where a simulation requires post-processing (or analysis, as they called it) to glean useful information. Their examples were mainly from molecular dynamics performed with LAMMPS, but their work applies to anything, the goal being to find the optimal frequency at which to transfer data for analysis when applications reside in different parts of a distributed system. They showed some great speed-ups; the general approach and download link for FOCAL can be seen in the following slide:

Later I attended the second session of the prestigious ACM Gordon Bell finalist papers. This session was looking at climate modelling at extreme HPC scales. One of the talks came from Chao Yang of the Chinese Academy of Sciences and showed their work on re-designing a fully implicit solver for non-hydrostatic atmospheric dynamics to work across over 10 million cores on the world's fastest supercomputer, the Sunway TaihuLight. This is a radically different type of computer which has many "small" cores (a little like the IBM BlueGene/Q, but on a much bigger scale).

They have produced both an explicit and a fully implicit version of their solver, and while the explicit solver scales amazingly well on the hardware, it is the fact that the implicit code has been made to scale nearly as well, to the level of over 10M cores, that is truly outstanding. I still haven't quite digested what they have done technically, but it is described in the following slide (for the record... they won the prize!):

It's been a very interesting week. If I had one comment, it would be that in many cases I felt the descriptions of scientific applications were a little thin on the ground, often covered in a single slide. However, as the focus here is the computation, I think I can probably let that slide! I have been amazed at the scale of the conference and have been interested to see the current directions that the hardware and software manufacturers are heading in. The term cognitive computing appears to be here to stay...

I thought I'd finish where most who attend seem to start, with a (slightly blurry) ubiquitous shot of the SC'16 logo on the Salt Palace Convention Center's main tower:

 

SuperComputing 2016 Day 3 (Wednesday)

Over half way through, these are some day 3 highlights! 

I spent much of today rushing between technical presentations as well as looking at the many, many posters dotted around here there and everywhere. I also took the time to really get stuck into the exhibitor hall.

While looking around I came across the Japan Aerospace Exploration Agency's stand. They had lots of nice CFD work displayed, but what really caught my eye was the utterly cool (geeky, I know) little 3D-printed display models of CFD results. What do I mean by this? Well, there is an example in the image below, but effectively they have produced tiny versions of their best CFD work (such as flow past a rocket), including showing the flow data as streamlines and colours as you would in a visualisation package. I've never seen anything like it and thought it was really cool to see virtual results turned back into reality (albeit tiny tiny reality).

I also visited the finalist session this morning for the annual visualisation competition held at SuperComputing; these were the final 4 of about 20, with the winner being announced tomorrow following their presentations.

The one that really stood out for me was a collaboration between NASA and a number of universities based in the USA, looking at the likelihood of an asteroid impact generating a tsunami of note. Clearly there was much science behind getting the physics right (done, oddly, using an in-house Eulerian code that employs mesh refinement to capture the free surface... surely a Lagrangian approach might have been better?), but the visualisations were key to extracting useful information from the data itself in this case, especially when it came to understanding the shock waves generated in the atmosphere. There's an image below of one of the slides from their presentation, but the top and bottom of it is: if an asteroid bigger than 250m in diameter hits the ocean a few miles out from a populated area... there could be trouble!

SuperComputing 2016 Day 2 (Tuesday)

So here's day 2! Today was the start of the technical sessions rather than just the vendor exhibitions, there are a huge number of streams all running concurrently, so unfortunately as there is only one of me I had to pick and choose carefully where to be (no point in selecting something that is a 15 minute walk away if I only have 5 minutes between things...).

The day started with an interesting keynote from Katharine Frase, formerly of IBM research, talking about how cognitive computing can be used to accelerate our lives. Her talk was understandably thin on technicalities but gave a good overview of how cognitive computing approaches might be used to help some of the most important aspects of our lives such as healthcare or education. The stage she presented on was really impressive too! 

The rest of the day was spent either manning the STFC exhibition or at technical sessions. Three of the most interesting were looking at various aspects of how to accelerate molecular dynamics simulations. Mostly this involved modifying LAMMPS, but the overriding message is that memory structure is first and foremost key (something LAMMPS already does well), followed by good use of modern SIMD programming practices to enhance vectorisation.

All told a very interesting day, more of the same tomorrow!

SuperComputing 2016 Day 1 (Monday)

Well, as promised here is the first of my mini blogs from the SuperComputing conference in Salt Lake City, Utah in the USA.

I am attending the conference both as an interested member of the MicroNanoFlows group and also in an official capacity as a representative of STFC. Today has mainly been about the exhibitors getting their stands ready, there is an eclectic mix ranging from huge corporate IT companies like IBM, NVIDIA and Dell through to European science institutions like Barcelona Supercomputing Centre, as well as of course, STFC! 

To say that I can't quite believe the scale of this conference is an understatement, for many companies this is a key event in the year and they spare no expense. There are something like 5000 individual exhibits in a 55,000 square foot space... in the last 24 hours all of the booths have been set up (some are incredibly elaborate) and the whole place has been carpeted, the big companies even brought their own really soft underlay! 

The technical program kicks off tomorrow with a good few HPC-oriented molecular dynamics talks, so I'll be reporting back with more details about those. In the meantime, however, here are two shots of the exhibition hall being set up this morning, taken from STFC's stand location; what you can see in these is about half the total space!

Review of the 1st Workshop on Advances in CFD and MD modelling of Interface Dynamics in Capillary Two-Phase Flows

It has been over a month since I began studying towards a Ph.D. at the School of Engineering, University of Warwick. However, I have not yet had the chance to share an experience from the very beginning of my Ph.D. During my first week, as part of the training provided by the department, together with my colleague Mr. Chengxi Zhao, I had the opportunity to attend a workshop on advances in computational modelling of interface dynamics in capillary two-phase flows: http://ltcm.epfl.ch/cms/page-127462.html.

The workshop was held at École Polytechnique Fédérale de Lausanne in Switzerland between 03.10.16 and 07.10.16. It was aimed at Ph.D. students and early-career researchers involved in the experimental and numerical investigation of two-phase flows. The lectures and practical sessions associated with the workshop provided an introduction to several active areas of research in the computational modelling of two-phase flows, for example interface-capturing methods and the arbitrary Lagrangian-Eulerian finite element method.

Overall, the workshop was a very pleasant and insightful educational experience.

EPFL, Lausanne, Switzerland, October 2016 (courtesy of Chengxi Zhao)

EPFL, Lausanne, Switzerland, October 2016 (courtesy of Chengxi Zhao)

Lausanne, Switzerland, October 2016 (courtesy of Chengxi Zhao)

 

SuperComputing 2016!

For those that haven't heard of the HPC conference known as SuperComputing (http://sc16.supercomputing.org/), the 2016 installment is once again being held in Salt Lake City, Utah, USA, between the 13th and the 18th of November. It is undoubtedly one of the biggest high-performance computing conferences in the world, and while lots of the content is as you would expect (our chip is bigger and faster than our chip last year, and of course it's always bigger and faster than their chip), there are also many people who use SuperComputing to show off their latest and greatest applications.

The 2016 conference actually has a large number of Molecular Dynamics presentations, therefore as the resident MicroNanoFlows computer geek I've been nominated to go and have a look!

I'll be there all week and during that time will be micro-blogging on what I have seen; you can be guaranteed that much of it will be cutting edge or at the very least an exciting new announcement. I'll look forward to reporting on exciting new work like "Increasing Molecular Dynamics Simulation Rates with an 8-Fold Increase in Electrical Power Efficiency", "Enhanced MPSM3 for Applications to Quantum Biological Simulations" and "Modeling Dilute Solutions Using First-Principles Molecular Dynamics: Computing More than a Million Atoms with Over a Million Cores", to name but a few.

My next posts will be from SC'16!

Funding for Early Career Researchers

One day, the snug protective blanket of the Micro & Nano Flows group will be removed and you will have to find a job.  At this point you may look at your CV and panic that it is rather sparse (let's be honest, no one cares too much about your grade 3 on flute or your orange belt in karate).  How to avoid this scenario?

Despite the well-funded nature of our group, it is worth considering applying for 'small pots of money' that may (a) be useful to fund additional activities and (b) would demonstrate to any potential employer that you are highly motivated and can win grants.  For example, this could fund:

  • travel - e.g. to setup a new collaboration by making an extended trip abroad
  • workshop organisation - e.g. to engage with people outside your main discipline
  • computing time - to run simulations on HPC facilities

Typically such grants would be for a few thousand pounds and the application procedure would be relatively straightforward. 

What I thought would be useful is to set up a section, Funding for Early Career Researchers, under the Opportunities tab, where people can contribute their experience of winning small grants (i.e. those which PhD students and PDRAs can apply for) - hope it helps.


 

What are antibubbles?

 

Check out this interesting video on the newly observed "antibubbles". A 'normal' bubble is a thin-film spherical shell of liquid encapsulating a gas sphere, while an antibubble is a thin-film spherical shell of gas encapsulating a liquid.

 

Warwick Fluids Group hiking trip

Last weekend, in the name of team bonding, the Warwick contingent of our group joined the rest of the Warwick Fluid Dynamics Research Centre for our annual hiking trip, providing a good opportunity for some of our newer members to get to know the other staff in the department. The destination was Llanberis in North Wales, with the trail being a disused slate mine, which is still home to a lot of the old mining equipment along with a series of caves and tunnels through the mountain. However, before any walking could take place, tradition had to be adhered to with no less than 6 people going for a swim in what I'm assured was "actually quite warm" water in a nearby lake (pictured). At 10pm. In October. In Wales. Below are some pictures of the hike.

At the end of the at-times-treacherous hike we had all survived, and some of the more hipster people in our group even managed to return home with some rusted junk they could convert into a household item, so all in all a successful trip.

CaKe - Cut cell algorithm for Kinetic equations

Check out CaKe - a solver for the simulation of rarefied gas flows governed by the BGK equation. It solves unsteady problems with moving boundaries, and its solutions compare well to those that our DSMC solver provides.

High latency alternative to SSH

If you need to use SSH on an intermittent connection then check out MOSH. It's much more reliable on those connections that drop in and out. You can also roam using MOSH, meaning that your session will persist across different connections. It kept me sane while I was on holiday.

The 5th MNF2016 conference in Milan

The 5th MNF2016 conference is being held at the Politecnico di Milano (POLIMI) in the wonderful city of Milan. The university was established in 1863 so is over 150 years old and has a long and impressive history. It is also ranked #1 in Italy and highly ranked internationally so a very fitting place for the meeting. Early registration took place on Sunday night in the Aula Magna, a very grand and impressive building.

The conference started on Monday, but attendees trying to get to the main auditorium were confronted by the main entrance being cordoned off. Why? Well, it was closed because Milan was hosting a round of the Italian MasterChef series. Hopes were high for a grand lunchtime meal, but these were quickly dashed by the organisers! Clearly an opportunity missed by MasterChef, with all of those discerning international palates waiting...

Today was quite busy for the Micro and Nano Flows for Engineering group - we were chairing one of the opening sessions with Dave Stephenson speaking in the afternoon. Tuesday will be equally busy with talks by David Emerson in the morning and Srinivasa Ramisetti in the afternoon. This is complemented by Dr Lei Wu, a colleague from Strathclyde also talking in the morning session.

rsync for data backup and synchronization

The rsync utility is a handy and powerful tool that can be used to back up and synchronize files and directories between different locations in an effective way. Although there are other file transfer tools (like scp) that can carry out similar functions, the main advantage of rsync is that it is comparatively faster and consumes less bandwidth. This is especially true if one needs to transfer a large number of files or very large files.

rsync is also a great way to restart an interrupted or failed data transfer at very little cost, as it can pick up part-way through a large file rather than starting from scratch. It can also act as a great synchronization tool if folders need to be kept in sync at different locations. When used with the -a ('archive') flag, it preserves most file attributes, including time stamps, permissions and symbolic links.

A typical example of using rsync to copy files from a remote system to a local one is given below:

              rsync --progress -avz user@remote.server:/folder  /local/folder

Besides these, there are several other options, like --delete, --exclude and --max-size, that can be used in conjunction with rsync to tailor it to your requirements.

Droplets used in direct printing

Here I introduce two papers on direct printing involving droplets.

The first, published in Nature Communications (http://www.nature.com/articles/ncomms1891), reported direct printing of nanostructures by a method involving nanoscale electrohydrodynamic inkjet printing. A combination of nanoscopic placement precision, soft-landing fluid dynamics, rapid solvent vaporization, and subsequent self-assembly of the ink's colloidal content leads to the formation of nanostructures with base diameters equal to that of a single ejected droplet.

The second, published in Science (http://science.sciencemag.org/content/340/6128/48.full), reported how to build liquid objects with 3-D printers. To create liquid scaffolds, the researchers at the University of Oxford custom-built a printer to squirt tiny lipid-coated water droplets from its nozzles and print them onto a platform submerged in an oil bath. Because of their lipid coatings, the tiny droplets formed a very thin bilayer interface instead of fusing to form a larger droplet. The researchers have been able to produce 3-D patterned networks of tens of thousands of connected droplets.

Mounting remote file systems using SSHFS

Use SSHFS to conveniently mount a remote file system on your local machine, all over ssh. You'll be able to perform any operation on the mounted files as if they were stored locally.

Once you've installed SSHFS (available on the Ubuntu repositories or through osxfuse on OS X) create a mount point (I've created a directory called tinis for this purpose):

mkdir -p ~/mnt/tinis

Now go ahead and mount your remote file system:

sshfs -o allow_other username@xxx.xxx.xxx.xxx:/ ~/mnt/tinis

The allow_other option allows non-root users to have read/write access. You'll now find all of your remote files in ~/mnt/tinis. Copy files to ~/mnt/tinis and they'll be uploaded in the background. Once you're done, unmount using:

umount ~/mnt/tinis
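On Linux, if plain umount refuses because the mount belongs to your user rather than root, the FUSE helper can be used instead:

fusermount -u ~/mnt/tinis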

Your large simulations will be easier to manage (bonus: try monitoring ongoing simulations locally with gnuplot).
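For example, assuming the simulation writes a two-column log such as energy.dat in a run directory on the mounted file system (both names are hypothetical here), a quick local plot could be produced with:

cd ~/mnt/tinis/run && gnuplot -persist -e "plot 'energy.dat' using 1:2 with lines"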

Micro & Nano Flows Group Industrial Partner Meeting in Warwick

On the 30th of June, the Micro & Nano Flows group held their first meeting with industrial partners at the Arden conference centre in Warwick. This SIC review meeting marked the first of many to come as part of the group's recent five-year £3.4m EPSRC grant (EP/N016602/1), and was designed to allow the industrial partners who had pledged interest in the grant before it started to begin defining their problems and working out the collaborations to be formed with the MNF group.

The day saw presentations from many members of the MNF group over three sessions, covering where we are now in terms of our work on micro & nano flows for engineering applications, where we are going scientifically, and the current state of our software and tools, as well as our vision for the future, such as how we are going to couple the group's efforts together in a unified way.

People from many areas of industry attended, with representation from AkzoNobel, the European Space Agency (ESA), Jaguar Land Rover and Nokia Bell to name a few, each with their own problems at the micro & nano scale.

The idea is that these meetings will be held fairly regularly over each year of the grant. This first meeting saw quite a bit of information presented by the MNF group, but it still involved a good amount of lively debate from the partners present and really set the tone for an exciting and dynamic set of research partnerships as the next five years pan out. The meetings that follow will take on more of a workshop feel, with those from industry encouraged to bring their biggest research challenges with them so we can start to think around the problems as a group and come up with tangible research goals to solve them.

The event was thoroughly enjoyable (helped along by the meeting finishing with a superb conference dinner served by the Arden conference facility, which began with a tall glass of a certain fruity alcoholic beverage* best enjoyed with tennis while sat on a lawn in the sun), setting a positive and productive tone for the meetings to come.

* Answers as to the name of said drink on a postcard please!

Universal Coupling (plus the EMiT 2016 EMergIng Technology conference)

Much of the research undertaken within the Micro & nano flows group relies on software to compute new scientific results. There are plenty of examples of this within the history of this blog but one area of increasing importance is that of coupling codes together to solve multi-scale or multi-physics problems with more than one piece of software.

I have talked about coupling in the past, but this time I want to briefly describe the concept of universal coupling. The idea is gaining traction within research communities around the world as well as with major software vendors. In a nutshell, it is the idea of providing a universal interaction layer, or glue, that can stick together any type of scientific software to make up a larger whole capable of solving more complex problems than any of the individual components can solve on their own.

In the past I mentioned a number of software frameworks for solving multi-scale/multi-physics problems; one example was the MUSCLE2 library (link), which came from the European MAPPER project, the same consortium also being behind the H2020-funded COMPAT project. The interesting thing about large solutions like these, though, is that their use and integration is inherently difficult because of the scale of what they are trying to achieve.

A number of solutions have become apparent that aim to solve the problem of universal coupling in a less intrusive way. In the past I mentioned EDF's PLE wrapper, which comes as part of their Code_Saturne CFD software and uses the concept of data transport at a set of points to transfer data between solutions. The basic premise is that, regardless of the form of a solver (i.e. whether or not it is mesh based, and whether or not it is a continuum solver), data can always be sampled at specific points, and sampled data can be imparted on another solution from those points. From a software engineering perspective the challenge is not too great; of course, like anything, doing it well is always hard, but precedents for this sort of communication framework are well established. The key challenge is to ensure that any loss of simulation fidelity at the point of coupling is either addressed or at least managed.

Primarily the key questions are:

1) How do I sample my solution at a specific point while maintaining the level of accuracy I need (i.e. is it acceptable to interpolate linearly from surrounding cells or other discrete entities, or is something else required)? A minimal sketch of such point sampling is given after this list.

2) How do I consume information stored at a specific point within my own solution (i.e. I know an external force exists in my simulation domain at point x,y,z because a coupled simulation has told me so, but I have no discrete location within my solution that exactly matches this point; is it acceptable to interpolate a new value from the coupled data and, if so, using what method; and if not, how do I overcome this)?
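To make question 1 concrete, the fragment below sketches one of the simplest possible choices: sampling a cell-centred field at an arbitrary coupling point by inverse-distance weighting of nearby cell centres. It is only a sketch under the assumption that a solver can expose the neighbouring cell centres and values as flat arrays (the function and argument names are hypothetical); a real code would use its own data structures, a proper neighbour search and, where required, a higher-order or conservative scheme.

#include <array>
#include <cmath>
#include <vector>

using Point = std::array<double, 3>;

// Inverse-distance-weighted sample of a cell-centred field at point p.
// 'centres' and 'values' are assumed to hold the centres and values of the
// cells near the coupling point; at least one entry is required.
double sampleAtPoint(const Point& p,
                     const std::vector<Point>& centres,
                     const std::vector<double>& values)
{
    double weightedSum = 0.0;
    double weightTotal = 0.0;

    for (std::size_t i = 0; i < centres.size(); ++i)
    {
        const double dx = centres[i][0] - p[0];
        const double dy = centres[i][1] - p[1];
        const double dz = centres[i][2] - p[2];
        const double d  = std::sqrt(dx*dx + dy*dy + dz*dz);

        // If the coupling point coincides with a cell centre, use that value.
        if (d < 1e-12)
        {
            return values[i];
        }

        const double w = 1.0/d;     // inverse-distance weight
        weightedSum += w*values[i];
        weightTotal += w;
    }

    return weightedSum/weightTotal;
}

Whether such a low-order scheme is good enough is exactly the fidelity question raised above; the answer depends entirely on the solvers being coupled.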

Generic solutions like PLE take the stance that they provide the coupling mechanism but it is up to the individual software developers using it to define how data is imparted and consumed from the points. 

A new solution that has recently seen take-up within our group starts to bridge the likes of MUSCLE2 and PLE: it works in the simple, point-based manner of PLE, but has been designed so that developers can easily add their own data storage and impartment methods, allowing the library to grow into a useful code base for many different method types. Originally developed within the Applied Mathematics division at Brown University in the USA, it is called the Multiscale Universal Interface (MUI) and is available to download from GitHub. In some ways, what MUI offers is fairly obvious when you take a step back; however, its key strength is that it has been engineered to be both extensible and as lightweight as possible, being a header-only C++ library (which currently provides wrappers for C and Fortran as well). It makes use of MPI for its communications, but does so in a way that won't interfere with existing MPI communications, so multiple MPI applications can use MUI to interact.

The library is currently being tested within our group and, should it prove a good way forward, we will aim to expand its current capabilities collaboratively with the original research team at Brown, with the results making their way back into the software's repository.

In other news, a snippet of the Micro & Nano Flows group's work was recently on display at the 2016 EMergIng Technology (EMiT) conference at the Barcelona Supercomputing Centre in Spain. This conference provides a platform for those using or developing the latest emerging trends in computing, be that software or hardware. We showed off some of the group's cutting-edge work on the IMM method (coupling MD and DSMC) as well as some GPU porting work for our MD code.

List of research articles reporting slip lengths

Recently I have been reading a lot of research articles looking for slip lengths, for flow over hydrophobic/hydrophilic surfaces, reported from various experiments and numerical simulations, in order to compare against the slip lengths we extract from our own simulations. There is a huge number of journal articles and reviews on slip length measurements. Below is a list of links to articles in which slip lengths reported in different papers are collected and presented in a convenient tabular form that is easy to refer to.

1. http://pubs.acs.org/doi/full/10.1021/la5021143

2. http://scitation.aip.org/content/aip/journal/jcp/138/9/10.1063/1.4793396

3. http://link.springer.com/referenceworkentry/10.1007%2F978-3-540-30299-5_19

Workshop on molecular modelling of interfacial dynamics

On Thursday 6th May, researchers from the Universities of Warwick and Edinburgh were joined by Prof. Terry Blake (Emeritus Professor at the University of Mons) for a workshop on the "Molecular Modelling of Interfacial Dynamics". The workshop, hosted by Dr James Sprittles at the University of Warwick, provided a platform for discussing the prominent challenges involved in the simulation of micro-droplets and surrounding small-scale phenomena. Understanding the governing physics at fluid interfaces on the molecular level underpins a number of emerging technologies. For example, controlling the break-up and coalescence of droplets is crucial to the operation of 3D printers; understanding the wetting characteristics of droplets is important for producing uniform coating films which prohibit air entrainment; regulating the evaporation of solute-containing liquid droplets can reduce the damage to surfaces due to weathering; and understanding the fluid-gas interface near micro-structures such as carbon nanotubes is key for developing drag-reducing surfaces.

There were seven presentations in total, covering a range of topics from the fundamental physics of droplet break-up, impact, and wetting, to the use of multiscale methods and machine learning techniques to enhance molecular modelling capabilities. See below for the complete list of presentations, some of which were exquisitely captured by our official photographer (also James).

- A multiscale method for non-equilibrium gas flow simulation in high-aspect-ratio geometries (Duncan Lockerby)
- Accelerating a multiscale continuum-particle fluid dynamics model with on-the-fly machine learning (Dave Stephenson)
- Water flow through and over carbon nanotubes: applications to drag-reducing surfaces and water filtration membranes (Matthew Borg)
- Droplets evaporation and spreading: Molecular dynamics and sequential hybrid simulation (Jun Zhang)
- Dynamic wetting, forced wetting and hydrodynamic assist (Terry Blake)
- The dynamics of rarefied gases between colliding bodies (Alex Patronis)
- How liquid drops form (James Sprittles)

Also, there were biscuits. This is important.

Fourth OpenFOAM UK&I User Meeting

The fourth OpenFOAM UK&I user meeting was held at the University of Exeter last week, on the 18th and 19th of April. The next event is expected to be held at the University of Warwick, but details are still being confirmed.

I have reported on these events on this blog before, but for anybody using, or interested in using, OpenFOAM for their research they are valuable (and free) events well worth making the effort to attend. The intention is that the event travels all over the UK & Ireland (as the name suggests!); it has now been as far south as Exeter and as far north as Warrington within England, but it needs to spread its wings further and make it to Scotland, Wales or Ireland. To get involved in organising one of these events, please contact either the last organising committee (http://ukri-openfoam.ex.ac.uk/) or the authors of the foam-extend project (http://www.extend-project.de/), or come along to the next one and volunteer!

The first day saw a number of training events around OpenFOAM using the computing facilities at Exeter, while the second day featured invited talks from the OpenFOAM community, showing a diverse set of work ranging from fluid-structure interaction with OpenFOAM through to its use in designing 3D-printed heat exchangers and in refining the design of ovens used to cook pastry-based products!

There was lively discussion throughout the day, culminating in a debate about the current HPC performance of OpenFOAM: a) does it need to be improved, b) what needs to be improved, and c) how do we do it as a community? The general consensus was that performance is below par and that many people reduce the scope of their problems to fit OpenFOAM rather than the other way around, which can't be right! Performance is clearly something the community needs to tackle, and a number of attendees went away intending to make this a priority within their organisations... so watch this space!

The Breakup of Liquid Volumes (how drops are formed)


The breakup of liquid volumes is a routinely observed phenomenon - e.g. when a jet of water emanating from a tap breaks up into droplets.  Recently, this process has attracted significant attention due to its importance for the functioning of a range of microfluidic technologies where one would like to be able to control the generation of uniform sized droplets which then become building blocks, as in 3D printers, or modes of transport for reagents, as in lab-on-a-chip devices. 

The challenge of capturing breakup, experimentally and computationally, is the strongly multiscale nature of this phenomenon.  Whilst the global dynamics may be on the scale of millimetres, as the breakup proceeds ever smaller length and time scales are encountered. Within the realm of continuum mechanics, which has been shown to retain accuracy right down to the nanoscale (Burton et al 04), one may have to capture 5-6 orders of magnitude in space and time.  This is completely intractable for CFD software.

Ideally, one could use computations down to a certain resolution and then rely on similarity solutions (relatively simple expressions which are accurate close to breakup) to finish the job. However, to do so one must first establish the limits of applicability of these solutions (see Eggers & Villermaux 08), and this is what our recent article, Capillary Breakup of a Liquid Bridge: Identifying Regimes & Transitions (accepted for publication in the Journal of Fluid Mechanics), has done for the first time.

To do so, we have developed a finite element code which allows us to simultaneously resolve 4-5 orders of magnitude in space in order to capture the final stages of breakup.  This has been achieved in the liquid bridge geometry shown above.

The breakup event is, in the simplest case, characterised by an Ohnesorge number (a kind of dimensionless viscosity) whose value can have a dramatic effect on the dynamics. For example, at intermediate Oh (video on left) the breakup occurs at two points, so that a small drop is formed in the middle (a so-called 'satellite drop', which technologically is usually bad news), whereas at higher Oh (right) the breakup occurs in the centre. Mapping our results across all Oh, tracking the minimum radius of breakup, we have been able to establish where the similarity solutions are accurate and, as a by-product, have discovered a number of interesting, previously unreported features of the breakup which deserve further attention.


An obvious question to ask is what happens to the breakup when the thread reaches molecular scales?  The simple answer is, we don't know! Despite some progress in Moseler & Landman 00, there has been relatively little achieved in this direction.  Similarly, the 'reverse' process of coalescence has only recently received any attention  (e.g. in Pothier & Lewis 12).  This is somewhat surprising, as the related problem of dynamic wetting has received a huge amount of attention from the molecular dynamics community (e.g. DeConinck & Blake 08).  Consequently, a clear opportunity exists for us to exploit molecular simulation techniques to better understand breakup/coalescence phenomena at the nanoscale.

Hydrodynamic Wall

The "hydrodynamic position" of the wall `Z_H` is defined as the position at which the boundary condition on the macroscopic velocity field has to be applied.

`(\frac{\partial v_{x,y}(\mathbf{r},t)}{\partial z})_{z=Z_H}=\frac{1}{L_s}v_{x,y}(\mathbf{r},t)|_{z=Z_H}`

where `v_{x,y}` is the slip velocity and `L_s` is the slip length. The hydrodynamic position `Z_H` fluctuates, as it is defined by the molecular trajectories resulting from collisions with the wall.

A fluid in the dense state undergoes more frequent collisions with the wall than in a rarefied state. As a result, the fluctuating "hydrodynamic wall" observed over a small time interval is density dependent for a given fluid-wall interaction.

Click on the images to view how the hydrodynamic wall looks for the rarefied and dense cases:

  • Rarefied case
  • Dense case

Software Coupling: Can software make friends?

A quick note to begin with: the next OpenFOAM UK & Ireland user meeting will be held on the 18th and 19th of April 2016 at the University of Exeter. This is a free event but requires registration, so if you are interested please head over to http://ukri-openfoam.ex.ac.uk/ where you can register. This will be the fourth time the event has run, the third having been held at Daresbury Laboratory in Warrington, where nearly 100 people came together.

All told these events are proving more and more useful, so anybody either using or developing OpenFOAM (or interested in either) should come along!

On to the topic of this post: coupling. Of course I don't mean speed-dating or anything like that; I mean the act of getting two or more pieces of software to work harmoniously on a single problem. This is not a new concept; indeed, it has probably been around as long as scientific software has existed, and as a concept it is even deeply embedded in fundamental computer science ideas like object orientation. However, it is receiving more and more attention of late, perhaps due to the ever-increasing capacity for computation becoming available and a realisation that creating a "monolithic" solution to solve all physics problems is simply unrealistic. One issue at the moment, though, is a lack of standardisation.

There are lots of projects out there: some huge, some very small, some aiming at generic solutions, some offering very specific capabilities to make one piece of software talk to another. One common trait, though, is that terminology is still used interchangeably and the best approach for most situations has yet to be defined.

I don't want to harp on too much in the space of a single blog post; however, I thought it might be nice to at least define what this author thinks is meant by some of the most common terms used alongside "coupling". It would be interesting to see whether or not there is agreement on their general applicability:

  1. Loose coupling: A system composed of multiple discrete components, where each may be able to transfer data to and/or from another, but none is reliant on any other for its continued operation.
  2. Tight coupling: As per loose-coupling but where at least one component in the system is reliant on another to operate.
  3. Monolithic coupling: A system in which two or more discrete components are combined to the point where they can no longer be distinguished from each other, effectively becoming a single entity.

In my experience, when people say "code coupling" they are most commonly speaking about the second type; however, it would be nice for these phrases to become formalised so that no explanation is needed (some may argue that this has already happened, but a recent review exercise undertaken by the author has led him to believe otherwise).

What if you have your own piece of code and want to make it work with something else, but don't know where to start? You could create your own interface, but it may be more sensible to first see whether something that already exists fits the bill. As a starting point, here are a few notable projects:

  1. MpCCI (Fraunhofer SCAI; commercial; http://www.mpcci.de/mpcci-software.html): This commercial solution can be found embedded into many existing and well-known software packages and even has an interface for OpenFOAM. It provides a library to connect into new software, however it should be noted that for it to work, you will need to purchase a license to operate the server which acts to coordinate all coupled applications.
  2. MUSCLE2 (MAPPER Project; Open-source; http://www.mapper-project.eu/web/guest/muscle): A result of the European funded MAPPER project, the MUSCLE2 library and suite is an in-depth solution to code coupling and management. It is perhaps a little too heavy-weight for many situations, but for those serious about coupling, it's well worth a look. The MAPPER project has now evolved into the new EU COMPAT project, with lots of member institutions (http://www.compat-project.eu/), which is focussing more specifically on the multi-scale aspect of coupling.
  3. CouPE (ANL; Open-source; http://sigma.mcs.anl.gov/coupe-coupled-physics-environment/): Part of ANL's SIGMA tool-kit, this takes a similar approach to MUSCLE2 and suffers the same problems in terms of the learning curve and overheads involved with framework set-up. However, it is a very complete solution with lots of good publications associated with it.

There are many other examples, so it is always worth having a look to see what exists before assuming you need to reinvent the wheel!

Code coupling is a hot topic but one that now needs its respective communities to begin to come together. There are too many competing efforts out there for something that, really, can only be done in a few ways for nearly all cases. Ultimately it is reasonable to say that the general need, when scientific code coupling is considered, can be boiled down to "one piece of software needing to communicate with another and impart information about its physical state at a specified point, at a known point in time."
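In code terms, that general need can be boiled down to exchanging records as simple as the entirely hypothetical one sketched below; everything else, the transport, synchronisation and interpolation, is detail layered on top.

#include <string>

// A minimal, hypothetical message for point-based code coupling:
// a named physical quantity, sampled at a spatial point, at a known time.
struct PointSample
{
    std::string quantity;   // e.g. "velocity-x" or "force-z"
    double x, y, z;         // where the sample was taken
    double time;            // when the sample was taken
    double value;           // the sampled value
};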

OpenFOAM at Warwick

I wanted to make everyone aware of the OpenFOAM user group we've set up at Warwick. We'll be posting code snippets and useful scripts to our repository. We'll also be hosting some OpenFOAM coding days in July, and will try to run both beginner and advanced sessions. Let us know if you'd like to attend or have any topics to propose as objectives.

Pancake bouncing droplets on micropatterned surfaces

In this video you can see an experiment in which water-filled balloons are dropped on a bed of nails to explain the unusual 'pancake' shape formed by droplets after impact on superhydrophobic micropatterned surfaces.

Browser based visualization of molecules


Web browsers have come a long way since the first Nexus browser, thanks in part to WebGL. WebGL applications separate polygon generation from the actual rendering: polygon generation is done only once by the CPU, while rendering is handled by the dedicated graphics processor. GLmol is one such tool (released under LGPL3) for visualizing molecules using Three.js, a cross-browser JavaScript library/API. One can embed multiple instances of GLmol in a page and use JavaScript to customize the molecular representation. Visualization of molecules can thus take advantage of the fast in-browser 3D graphics capabilities available through WebGL, without any help from third-party proprietary browser plugins. To know more, visit GLmol - Molecular Viewer on WebGL/JavaScript.

Molecular dynamics simulation of water flow over a CNT surface with trapped air

Grant Success - Skating on Thin Nanofilms: How Liquid Drops Impact Solids

Duncan & James have won a Research Project Grant from the Leverhulme Trust for their proposal 'Skating on Thin Nanofilms: How Liquid Drops Impact Solids', which will fund a 3 year postdoctoral position in the Mathematics Institute at Warwick - see advert: http://www.jobs.ac.uk/job/AUB367/research-fellow-in-interfacial-flows-77087-125/ .

The project will draw together theories from the previously disparate and segregated fields of capillary flows and non-equilibrium gas dynamics in order to open up an entirely new direction of research.  The initial focus will be on the impact of liquid drops on solids, routinely observed when a tap drips noisily; raindrops pound the windscreen; or a coffee drop spills on a crucial document! However, is there more to impacting liquid drops than these everyday annoyances?

Actually, much more. Beneath a mundane façade these flows hide a still-not-fully-understood competition of complex and diverse physical mechanisms that determine macroscopic dynamics. They are also integral to numerous technological, environmental and biological applications of fluids, e.g. acting as the building blocks of 3D printed objects or as carriers of pesticides in crop spraying.

Despite substantial scientific interest in drops impacting solids, no existing theories are able to explain a raft of experimental observations. Specifically, compelling evidence, revealed in Xu et al 2005, indicates that the gas surrounding the liquid drop (a) has a critical role in determining the dynamics of the drop during impact and (b) can suppress drop splashing when its pressure is reduced. This is shown in the experimental videos below:  drops splashing at atmospheric pressure (left hand video) can be suppressed at reduced pressures (right).  Whilst (a) surprised many in the field, due to the negligible magnitude of the gas-to-liquid density ratio and viscosity ratio, it was (b) which was completely unexpected - and is in direct conflict with all existing theories.


The discoveries in Xu et al 05 stimulated an explosion of experimental work aimed at revealing the curious details of the gas’s role. New experimental methods, based on interferometry (Driscoll & Nagel 2011) and x-ray imaging (San Lee et al 2012), were developed to capture the spatio-temporal evolution of the gas trapped between the drop and the solid surface. Most remarkably, results obtained within the last year (Kolinski et al 2014; de Ruiter et al 2015) demonstrate that, under particular conditions, drops impacting smooth surfaces can skate indefinitely on a gas nanofilm (of height h~10nm) and subsequently rebound, even from wettable surfaces!

These experimental analyses have revealed the incredible multiscale nature of a seemingly innocuous problem: mm-sized liquid drops are controlled by the dynamics of microscopic gas nanofilms that are 10,000 times smaller!  There are now a myriad of open questions, the most striking being: what is the mechanism by which gas nanofilms affect splashing?

Purely experimental approaches have been unable to unequivocally establish what mechanisms cause splashing so that theoretical approaches are now required to provide a deeper insight into the underlying physics. Advances in our theoretical understanding have, however, not matched progress in experimental analysis.  Models based on a Navier-Stokes (NS) description of the gas and liquid predict that drops will always skate indefinitely over gas films so that liquid-solid contact never occurs; a result in conflict with experiments showing that skating only occurs under particular conditions, with well-defined transitions to contact-induced wetting.

A major shortcoming of classical modelling in this context is its inability to describe evolving gas films of thickness h comparable to the mean free path in the gas l (70nm for air at atmospheric pressure), so that the Knudsen number Kn=l/h is not small and the NS-formulation becomes inaccurate. To properly understand these flows an entirely new modelling framework is required for the gas phase using models based on the Boltzmann equation (BE) of kinetic theory. It is the aim of this project to provide a framework for this new class of free-surface BE-NS flows.

Our main focus in this project will be on the impact of drops on solids; however, estimates of gas film dimensions in numerous other flow configurations suggest violations of the NS-formulation are far more common than expected. These situations include the collisions of liquid drops, where bouncing-coalescing transitions depend on the ambient gas pressure (Qian & Law 1997); the formation of tip-singularities in free-surfaces (du Pont & Eggers 2006); the stability of nanobubbles on solids (Seddon et al 2011); the pinch-off of gas bubbles (Dollet et al 2008); the impact of projectiles on liquid surfaces (Truscott et al 2014); the initial stages of drop coalescence (Sprittles & Shikhmurzaev 2014); the creation of anti-bubbles from cylindrical air films (Beilharz et al 2015); and air entrainment in coating flows (Sprittles 2015).  Our new BE-NS framework will also apply to these flows. 

Spontaneous droplet trampolining on superhydrophobic surface

An interesting recent paper published in Nature reported spontaneous droplet trampolining on a rigid superhydrophobic surface. The details can be read at the link below:

http://www.nature.com/nature/journal/v527/n7576/abs/nature15738.html

More news coverage of the impact of this research can be found at the links below:

http://www.sciencedaily.com/releases/2015/11/151104133224.htm

http://www.gizmodo.com.au/2015/11/these-trampolining-water-droplets-seem-to-defy-physics/

http://www.rsc.org/chemistryworld/2015/11/superhydrophobic-surface-droplet-bounce

OpenFOAM User Meetings

The third UK & Ireland OpenFOAM user meeting was held at Daresbury Laboratory last week, between the 2nd and the 4th of November. The event was co-hosted by the Engineering and Environment group and the Hartree Centre from the Science & Technology Facilities Council. Prior events have been held at the Centre for Modelling & Simulation in Bristol and the University of Leeds. The initiative was spearheaded by Prof. Hrvoje Jasak from the University of Zagreb, who is also a principal author of the foam-extend project. It is his intention, and now the intention of the past organisers of this event, that it should be a regular (at least bi-yearly) event held at institutions around the UK, allowing for an informal gathering of OpenFOAM's user and developer base to help stimulate a cohesive community around the software. Future events will likely be advertised first via the CFD-Online community, so keep your eyes peeled for where it goes next!

The event saw around 50 OpenFOAM users from various backgrounds come together for the first two days to explore the use of OpenFOAM on some of STFC's large computing resources, while the third day saw nearly 100 users from both industry and academia share their work and engage with each other on the future direction of OpenFOAM and the community built around it.

The event also saw Dr Ajit K. Mahendra present work on the MicroNanoFlows group's Molecular Dynamics solver.

One key element of these events is the ability for the group to hold forums and discussions around current software problems, be they personal simulation issues or more general problems with either the code or the organisational structure around it. This event was no different, with a chaired, hour-long discussion rounding off the day and proving insightful. The key points taken away were:

  • Currently the OpenFOAM community is a little too fragmented (partly due to the existence of both the standard OpenFOAM and foam-extend projects).
  • Many people don't know how best to feed back developments, bug reports etc., and more importantly who to feed them back to. (Prof. Jasak from foam-extend was present and argued that he is one option for this at the moment, via the foam-extend project.)
  • OpenFOAM is not currently in heavy use across all of industry, especially at the "higher" end (i.e. large-scale users), potentially due to a general lack of accountability on the part of the OpenFOAM community. This is a challenge for open-source software in general, but an interesting one nonetheless.
  • There was consensus that OpenFOAM needs more structured testing to improve general confidence in its abilities compared with commercial solvers. This is something currently being tackled by the foam-extend project, which is supplied with an extensive "test harness", but the developers are looking for as many demanding test cases as possible to incorporate into the harness.
  • There were questions as to why there has been limited uptake of OpenFOAM as a teaching tool within universities and schools, where (admittedly heavily discounted) commercial solutions are often preferred by tutors. A number of reasons were cited, but primarily this boiled down to ease of access to the software (not all students know how to compile from source, not all are comfortable with Linux, and many computing resources offer Windows only) and a lack of teaching resources like books. It was pointed out, however, that the latest version of foam-extend, 3.2, can now be natively compiled under Windows and Mac using MinGW, and that a number of good books have started to surface recently, for example The OpenFOAM Technology Primer.
  • Finally, everybody agreed that OpenFOAM was a fine example of an open-source software initiative and a very impressive achievement that could only get better as long as the community grows and harmonises via events like the user workshop.

Using the IMM to study time-dependent thermal transpiration

This demonstrates that the IMM can be used to accurately study time-dependent thermal transpiration (similar to what was presented in our 2015 paper). The data of Rojas-Cárdenas et al. have been used to validate the IMM. Their experimental setup consists of a borosilicate channel of circular cross-section connecting two reservoirs, one of which is heated while the other is held at ambient temperature (subscripts c and h denote the unheated and heated reservoirs, respectively). The pressure of the gas in the reservoirs is measured using capacitance diaphragm pressure gauges. The multiscale representation of this configuration consists of three coupled models: the reservoir model (macro, k = 3); the channel model (meso, k = 2); and the gas-kinetic model (micro, k = 1):

Note that the valve and pressure gauges shown above are part of the experimental setup only. The system is used to study thermal transpiration as a time-dependent phenomenon: from an initial stationary state in which only a thermal-transpiration flow is allowed to develop (pc ≈ ph ≈ const.), to a final stationary state in which the net mass flow rate in the channel is zero. This final state is reached by allowing pc and ph to evolve (by closing the valve), resulting in the generation of a pressure-driven flow that opposes the thermally-driven flow. A full molecular treatment is computationally intractable, reinforcing the need for the IMM.

The IMM manages to capture the transient response of the pressure in each reservoir after the system is instantaneously closed; as shown in our 2015 paper, there is excellent agreement between the experimental measurements and the multiscale solutions. Currently, the IMM running on a hexa-core CPU requires a little under an hour to provide these solutions, so we have an efficient and accurate method with which to study thermal transpiration and to inform the design of devices that make use of this phenomenon, e.g. Knudsen compressors.

Creating OpenFOAM meshes with Gmsh

Following on from Alex's post, I'd like to demonstrate how Gmsh can be used to create a mesh for OpenFOAM, using a 2D bifurcating network as a simple example. Meshes can be created interactively using a GUI or by writing a .geo file using Gmsh's own scripting language, which will often be more convenient. For easy modification of the geometry, it is useful to start with a definition of the relevant parameters:

//--------------------------------------------------------------------
// Geometry parameters - inputs
//--------------------------------------------------------------------
hp  =   1;              // parent channel height
hd1 = 0.5;              // 1st daughter channel height
hd2 = 0.5;              // 2nd daughter channel height
L   =  2;               // channel lengths
L2  =   1;              // Daughter channel contraction lengths
nCells =  15;           // number of cells in transverse direction
xCells = 30;            // number of cells in streamwise direction
//--------------------------------------------------------------------
// Geometry parameters - calculated
//--------------------------------------------------------------------
midp = hp*Sqrt(3)/2;
lp = hp/nCells;
ld1 = hd1/nCells;
ld2 = hd2/nCells;
dm = 2*midp/hp;
dx = (L2^2/(1+dm^2))^(1/2);
dy = dm*dx;
dx1 = 0.5*((hp-hd1)^2/(1+(1/dm)^2))^(1/2);
dy1 = dx1/dm;
dx2 = 0.5*((hp-hd2)^2/(1+(1/dm)^2))^(1/2);
dy2 = dx2/dm;

Next, we specify the grid points that define the geometry, based on the parameters above. The expression inside the parentheses is the point's ID number; the first three columns inside the braces are the x, y, z coordinates, and the 4th column denotes the prescribed mesh element size near that point.

//--------------------------------------------------------------------
// Points
//--------------------------------------------------------------------
// Junction - triangle
Point(1) = {0, hp/2, 0, lp};
Point(2) = {0, -hp/2, 0, lp};
Point(3) = {midp, 0, 0, lp};
// Junction - contractions
Point(4) = {dx+dx1, hp/2+dy-dy1, 0, ld1};
Point(5) = {midp+dx-dx1, dy+dy1, 0, ld1};
Point(6) = {dx+dx2, -(hp/2+dy-dy2), 0, ld2};
Point(7) = {midp+dx-dx2, -(dy+dy2), 0, ld2};

Between points, the mesh element size will interpolate, so grading could be introduced by placing points at the centre of the channels. The channels can be created by an alternative method, so we'll only define points for the junction/contraction part of the geometry. Points are joined together by lines, with the numbers in braces specifying the two points you wish to connect:

//--------------------------------------------------------------------
// Lines
//--------------------------------------------------------------------
// Junction - triangle
Line(1) = {1, 2};
Line(2) = {2, 3};
Line(3) = {3, 1};
// Junction - contractions
Line(4) = {1, 4};
Line(5) = {4, 5};
Line(6) = {5, 3};
Line(7) = {3, 7};
Line(8) = {7, 6};
Line(9) = {6, 2};

Arcs and splines can also be easily created by specifying a centre of rotation and a number of control points, respectively. Plane surfaces are created using a line loop. For the line loop, the numbers in braces specify, in order, the lines which constitute the perimeter of the surface. The line loop must be closed (i.e. it ends where it begins) and consistently oriented (a negative line ID indicates that the line is traversed in the opposite direction).

//--------------------------------------------------------------------
// Surfaces
//--------------------------------------------------------------------
Line Loop(1) = {1, 2, 3};
Plane Surface(1) = {1};
Line Loop(2) = {4, 5, 6, 3};
Plane Surface(2) = {2};
Line Loop(3) = {7, 8, 9, 2};
Plane Surface(3) = {3};

Strictly speaking, lines 2 and 3 aren't needed, and surfaces 1, 2, and 3 could be combined into a single surface. However, by default, Gmsh uses an unstructured mesh and it is often a good idea to partition the geometry to constrain the mesh generation. As the channel sections are straight, it is convenient to use a structured mesh for them; for this, we use the extrude command to translate a line:

//--------------------------------------------------------------------
// Channels
//--------------------------------------------------------------------
pS[] = Extrude {-L, 0, 0} {
  Line{1};
  Layers{xCells};
  Recombine;
};
d1S[] = Extrude {dx*L/L2, dy*L/L2, 0} {
  Line{5};
  Layers{xCells};
  Recombine;
};

d2S[] = Extrude {dx*L/L2, -dy*L/L2, 0} {
  Line{8};
  Layers{xCells};
  Recombine;
};

The extrude command automatically creates all the necessary points, lines, and surfaces between the chosen line(s) and their translated counterparts. The first set of braces gives the x, y, z components of the extrusion; the second set of braces specifies a) the line(s) you wish to extrude, b) how many cells you wish to divide the extruded surface into, and c) an optional command to recombine the triangular cells into rectangles. The variables pS, d1S, and d2S contain the IDs of the newly extruded line (in e.g. pS[0]), the surface it creates (in e.g. pS[1]), and the joining lines. OpenFOAM requires meshes to be 3D, so we next have to extrude the entire surface by one layer of arbitrary thickness (in the z-direction). This is achieved using the same extrude command, but specifying surfaces as the argument rather than lines:

//--------------------------------------------------------------------
// Unit depth
//--------------------------------------------------------------------
zV[] = Extrude {0, 0, -1} {
  Surface{1,2,3,pS[1],d1S[1],d2S[1]};
  Layers{1};
  Recombine;
};

Similarly to the line extrusions, the variable zV contains all the extruded surfaces and volumes. Finally, we create "physical surfaces" that OpenFOAM will recognise as patches for boundary conditions:

//--------------------------------------------------------------------
// Physical surfaces
//--------------------------------------------------------------------
Physical Surface("inlet") = {zV[21]};
Physical Surface("outlet1") = {zV[27]};
Physical Surface("outlet2") = {zV[33]};
Physical Surface("walls") = {zV[7],zV[9],zV[13],zV[15],zV[20],zV[22],zV[26],zV[28],zV[32],zV[34]};
Physical Surface("topAndBottom") = {1,2,3,pS[1],d1S[1],d2S[1],zV[0],zV[5],zV[11],zV[17],zV[23],zV[29]};
Physical Volume("internalMesh") = {zV[1],zV[6],zV[12],zV[18],zV[24],zV[30]};

Note that all physical surfaces will appear in the constant/polyMesh/boundary file as type "patch", so after conversion you will need to change "topAndBottom" to type "empty" (an example entry is given further below). Naming this script "bifurcation2d.geo", we can generate the mesh using the following command:

gmsh bifurcation2d.geo -3

This will create the file "bifurcation2d.msh". If the mesh is of poor quality, the flag -optimize is useful for optimising the mesh element quality, and the flags -clmin float and -clmax float are useful for constraining the minimum and maximum element sizes, respectively. If the .msh file is in the same directory as the OpenFOAM case, then to convert it into an OpenFOAM mesh, run

gmshToFoam bifurcation2d.msh

If the .msh file is not in the same directory as the OpenFOAM case, the flag -case DIR needs to be used; an example is given after this paragraph. Below is the resulting mesh in ParaView, together with the solution for the steady-state velocity profile using the icoFoam solver.
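For instance, if the case lived in a (hypothetical) directory called bifurcationCase next to the .msh file, the conversion would be run as:

gmshToFoam bifurcation2d.msh -case bifurcationCase

Once converted, the "topAndBottom" patch is switched to type "empty" by editing its entry in constant/polyMesh/boundary so that it looks something like the sketch below; nFaces and startFace are mesh-specific and should be left exactly as gmshToFoam wrote them.

    topAndBottom
    {
        type            empty;
        nFaces          ...;    // as written by gmshToFoam
        startFace       ...;    // as written by gmshToFoam
    }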

We can easily change the geometry of the network by altering the inputs. For example, changing the size of the daughter channels to hd1=0.7 and hd2=0.3 creates the following mesh and velocity profile solution:

Viscoelasticity and frequency dependent viscosity

Hydrodynamics is characterized by the region of low `(k,\omega)`, defined by the wavenumber `k` and frequency `\omega`. Low `k` corresponds to a wavelength `2\pi\text{/}k` much bigger than the intermolecular distance, and low `\omega` corresponds to a frequency much smaller than the reciprocal of the collision time. For example, consider a viscous fluid to which an instantaneous force is applied: at very small time scales it gives an elastic response, as the stress is proportional to the strain `\gamma` rather than the rate of strain `\dot{\gamma}`, given as

`\sigma=-E \gamma`

where `E` is the elastic modulus. At much larger time scales, the shear stress `\sigma` is proportional to the rate of strain `\dot{\gamma}`

`\sigma=-\eta \dot{\gamma}`

where `\eta` is the viscosity. Historically, it was Poisson (1829-1831) who first suggested an elastic response of liquids to sudden disturbances, while Maxwell (1867-1873) extended the idea mathematically. The Maxwell relaxation time, given by the ratio `\tau_{M}=\eta\text{/}E`, defines the time scale below which the liquid behaves like an elastic solid; for `t \text{>>} \tau_{M}` it behaves like a viscous fluid. Viscoelasticity is thus described by

`\frac{\sigma}{\eta} + \frac{\dot{\sigma}}{E} = \dot{\gamma}`

where `\dot{\sigma}` is the time derivative of the stress. This viscoelastic behaviour can be described by a frequency-dependent viscosity `\eta(\omega)` instead of the usual viscosity employed in linearised hydrodynamics. Molecular dynamics is one method to investigate hydrodynamics and transport properties in the region of finite `(k,\omega)`. Using equilibrium molecular dynamics we can evaluate the transport coefficients (similar to the Green-Kubo relation in the zero-frequency limit) as

`\eta(\omega) = B_T \int_0^\infty dt \text{ e}^{i\omega t} \langle J_{\eta}(0) J_{\eta}(t)\rangle_{ce}`

where `\eta(\omega)` is the frequency-dependent viscosity, `B_T` is the thermodynamic constant, `J_{\eta}` is the associated thermodynamic current and `\langle \cdot \rangle_{ce}` denotes a canonical-ensemble average. The viscosity transport function can be rewritten as `\eta(\omega) = \eta^R(\omega) - i \eta^I(\omega)`, consisting of the dissipative real part `\eta^R(\omega)` and the non-dissipative imaginary part `\eta^I(\omega)`, given as

`\eta^R(\omega)=B_T \int_0^\infty dt \text{ cos}(\omega t) \langle J_{\eta}(0) J_{\eta}(t)\rangle_{ce}` and `\eta^I(\omega)=B_T \int_0^\infty dt \text{ sin}(\omega t) \langle J_{\eta}(0) J_{\eta}(t)\rangle_{ce}`
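As a rough illustration of how these one-sided transforms could be evaluated from simulation output, the sketch below integrates a sampled current autocorrelation function with the trapezoidal rule. The array acf, the timestep dt and the prefactor BT are hypothetical inputs that would come from the molecular dynamics run; this is only a sketch of the quadrature, not of the full Green-Kubo machinery.

#include <cmath>
#include <vector>

// Real (dissipative) and imaginary (non-dissipative) parts of eta(omega),
// evaluated from a sampled autocorrelation acf[n] ~ <J(0) J(n*dt)> by
// applying the trapezoidal rule to the cosine and sine transforms above.
struct EtaOmega { double real; double imag; };

EtaOmega etaOfOmega(const std::vector<double>& acf,
                    double dt, double BT, double omega)
{
    double re = 0.0;
    double im = 0.0;
    for (std::size_t n = 0; n + 1 < acf.size(); ++n)
    {
        const double t0 = n*dt;
        const double t1 = (n + 1)*dt;
        // trapezoidal contribution of each integrand over [t0, t1]
        re += 0.5*dt*(std::cos(omega*t0)*acf[n] + std::cos(omega*t1)*acf[n + 1]);
        im += 0.5*dt*(std::sin(omega*t0)*acf[n] + std::sin(omega*t1)*acf[n + 1]);
    }
    return {BT*re, BT*im};
}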

Consider a simple model in which the correlation function `\langle J_{\eta}(0) J_{\eta}(t) \rangle` undergoes exponential relaxation with relaxation time `\tau_{\eta}`, i.e. it decays as `\text{e}^{-t/\tau_{\eta}}`; for such a case we can derive the frequency-dependent viscosity as

`\eta(\omega) = B_T \frac{\langle [J_{\eta}(0)]^2\rangle}{i \omega + \tau_{\eta}^{-1}}`

Assuming that the relaxation time `\tau_{\eta}` can be evaluated at zero frequency, such that `\tau_{\eta}=B_T^{-1}\eta\text{/}\langle [J_{\eta}(0)]^2\rangle`, we can use `\tau_{\eta}` to approximate the frequency-dependent viscosity as

`\eta(\omega) = \eta\text{/}(1+i\omega\tau_{\eta})`

When the time scales are much smaller than `\tau_{\eta}` we recover elastic behaviour, and when they are much larger than `\tau_{\eta}` we recover viscous behaviour, similar to Maxwell's treatment of viscoelasticity.

 

Some useful books on droplets research

Since my research turned to droplets in March 2014, I have come across some very good books in this field. I would like to introduce them to you below:

1. Hans-Jurgen Butt, Karlheinz Graf, Michael Kappl, Physics and Chemistry of Interfaces (Wiley, 2013).

(Note: This book focuses on the essential concepts and intuitive understanding of interfaces. I learned some basic concepts and fundamental theories of droplets from this book. I highly recommend it for beginners.)

2. David Brutin, Droplet Wetting and Evaporation: From Pure to Complex Fluids (Elsevier, 2015).

(Note: This book covers most aspects of droplet wetting and evaporation. I think it is a very helpful resource for anyone wanting to learn about this field.)

3. Yulii Damir Shikhmurzaev, Capillary Flows with Forming Interfaces (Taylor & Francis, 2007).

(Note: This book mainly considers how to resolve 'paradoxical flows' by including the missing physics of interface formation in the mathematical models. It is useful for continuum simulation methods, particularly for boundary conditions.)

4. Z. Q. Lin, Evaporative Self-Assembly of Ordered Complex Structures (World Scientific, 2012).

(Note: An extremely simple route to highly-ordered, complex structures is the evaporative self-assembly of nonvolatile solutes (e.g., polymers, nanoparticles, carbon nanotubes, and DNA) from a sessile droplet on a solid substrate. This book provides a wide spectrum of recent experimental and theoretical advances in evaporative self-assembly techniques.)

Conferences for 2016

Please find below a list of conferences in 2016 that are of interest to our group. The list is not complete and may omit a few other conferences whose details were unavailable at the time of compilation.

Parallelism... to infinity, and beyond!

I'd like to take this opportunity to pose a few questions, along with my own thoughts, regarding the coming state of parallel computing and what it means not just for Molecular Dynamics codes like the one the MicroNanoFlow group has implemented within OpenFOAM (available publicly soon), but for the way scientists using computational resources are going to need to think in order to keep up as we switch to the age of massively parallel design.

The current High Performance Computing (HPC) landscape for the academic researcher is always changing, but up until recently it is fair to say the scale of the changes has been relatively small. Typically, supercomputing resources get bigger, providing more cores that are faster (or more efficient) and offering bigger/faster pools of memory. There have also always been new and interesting "accelerator" add-ins that benefit specific computational tasks; good examples are NVIDIA's line of Tesla GPU processors or, more recently, Intel's Xeon Phi add-in cards. Historically, though, the interesting thing with accelerator add-ins (I'm looking at you, separate hardware floating-point unit...) is that when they become widely adopted and deemed useful enough, the big CPU architectures tend to absorb them or adopt the concept somehow.

To concentrate on GPUs for a moment, for those unfamiliar with their history: the idea of a dedicated graphics processing chip is not new, it has been around for a long time, and ultimately the job of such a device is quite simple, namely to process simple sets of data as quickly as possible and translate them into pixels that can be viewed on a screen. The best way to do this quickly is in as parallel a manner as possible, storing the data in a pool of memory that is as fast as possible with as big a data bus between it and the processor as possible. This basic premise led to the evolution of GPUs that effectively have hundreds of individual processors, each in itself of low computational capability, but as a whole (when combined with expensive but fast memory and fat data buses) offering impressive computing potential in a small package.

Originally, the only way to "program" these processors was via the graphics libraries they were designed to handle (OpenGL, DirectX etc.); however, when a few clever people at NVIDIA HQ saw an alternative way forward and released the CUDA suite, these GPUs became an accessible tool for many more people, not least scientists solving interesting problems. Many codes still have not adopted GPU acceleration, either because their problem is not suited to the architecture or because the effort required wasn't deemed worthwhile by the developers. However, as things continue to progress, it looks increasingly likely that traditional methods of parallelising problems will have to be reconsidered and the often more time-consuming approaches such as OpenMP given more thought.

Since CUDA's release in around 2007 the state of the GPU market has evolved rapidly, as have CUDA and other GPU programming methods. We are now at the stage where top-end GPU devices such as the Tesla K80 offer a potential maximum of well over 4000 "cores" and 12GB of extremely fast memory in a single package consuming less than 300W. Of course this has sparked competition, and the likes of Intel have seen the potential of this different way of looking at processor design, which brings us to the Xeon Phi range.

These cards currently offer a smaller number of "fatter" cores than a GPU (currently this stands at 60, each of which can handle up to 4 threads), but on paper both have similar processing potential. A number of new programming paradigms are being fuelled by these architectures, with traditional methods such as OpenMP being forced to evolve rapidly at the hand of Intel so they become more suitable for the likes of the Phi, while other efforts such as PGI/NVIDIA's OpenACC are quickly turning into useful standards with high-quality associated libraries and tools. OpenACC is effectively an OpenMP-like alternative that aims to target multiple platform types, from multi-core CPUs through to GPUs. Also of interest are the IBM Power8 and upcoming Power9 architectures.

The interesting thing in all of this is that there is a clear trend away from the traditional design of a small number of very fat CPU cores in a single package (i.e. the current Intel Xeon and AMD offerings) towards finer-grained parallelism within the same package. The clearest case in point is that the next version of Intel's Xeon Phi will be a full system-on-chip design named Knights Landing (https://software.intel.com/sites/default/files/managed/e9/b5/Knights-Corner-is-your-path-to-Knights-Landing.pdf) instead of an add-in card. Effectively, instead of a machine having a typical Intel Xeon x86 CPU with 8 or so very powerful cores, it will have access to 60-100 cores that are individually much weaker but, when combined, more powerful.

Software that has previously been written to work across multiple shared-memory processors using something like OpenMP will automatically work on this new style of Intel CPU, but it is likely many codes will see a decrease in performance unless time is then spent refining them to take advantage of vectorised operations and ensuring they actually scale in parallel terms. Codes which use only MPI for their parallelisation may struggle even more...

So what does this mean for future software designs? First and foremost, it means all future scientific software should be designed from the ground up to exploit parallel execution. No ifs, no buts: serial processing is no longer an option. That aside, there is an ever-growing number of ways of thinking serially when designing an algorithm and letting other software handle the transition. These range from automatic methods (such as the auto-vectorisation employed by compilers) through to domain-specific languages that hide all of the complexity of producing code optimised for any one hardware type (though somebody still has to do the work at some point...). For most, though, the best methods are the ones somewhere in the middle, the likes of OpenACC or OpenMP.

Using these, we can still program in normal languages but take advantage of future architectures by way of simple "pragmas", or directives, placed in the code; we can also use them to help accelerate existing codes on the new architectures. The nice thing is that, as long as everything we do is standards compliant, then as the underlying libraries upgrade and evolve to provide better performance or compatibility with new architectures, so too will the software built upon them. The most important factor in these designs is often an understanding of how to lay out memory structures suitable for parallelisation; this is especially important for methods like OpenMP or OpenACC, where a memory layout suited only to serial execution can even lead to a slow-down once parallel execution is attempted. Not all problems have an obvious solution to this, and in those cases sticking with multi-process parallelisation methods like MPI may still be the better way forward.
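As a trivial illustration of the directive-based style, the loop below is marked up for OpenMP; the same pattern, with different directives, is what OpenACC offers. The arrays and the loop body are of course only placeholders.

#include <vector>

int main()
{
    const long n = 1000000;
    std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);

    // A single directive asks the runtime to share the iterations between
    // threads; the loop body is written exactly as it would be for serial code.
    #pragma omp parallel for
    for (long i = 0; i < n; ++i)
    {
        c[i] = a[i] + 2.0*b[i];
    }

    return 0;
}

Compiled without OpenMP support, the pragma is simply ignored, which is part of the appeal: the serial code path is preserved.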

In summary, then, the message is becoming increasingly clear. When designing software there will always be a place for distributed computing, because the amount of processing you can physically locate in an enclosed space will always be limited, so methods like MPI are unlikely to go anywhere (though they may be slowly absorbed into the underlying fabric of compilers and other parallelisation tools, so that we no longer need to worry explicitly about where data lives). However, the level of parallelism we are exposed to within each shared-memory resource is increasing rapidly. Relying on one form of parallelisation such as MPI is becoming less and less sufficient, so mixed parallelisation strategies that combine techniques such as OpenMP or OpenACC with distributed (i.e. MPI-based) methods are likely the best approach for a future-proof code. That way we can make good use of fine-grained parallelism (be that via the host processor or an accelerator such as a GPU) while still being able to distribute the problem between discrete pools of memory.

Implementation of volume averaging in OpenFOAM

Nano-confined flows in the presence of a wall form an inhomogeneous system. The potential part of the pressure tensor, `\mathbf{P}^{U}(\mathbf{r})`, for such an inhomogeneous system, based on the Irving and Kirkwood formulation, is

`\mathbf{P}^U(\mathbf{r}) = -\frac{1}{2} \langle \sum_{i}^{N} \sum_{i\ne j}^{N} \frac{\mathbf{r}_{ij}\otimes\mathbf{r}_{ij}}{r_{ij}} \frac{d U(r_{ij})}{d r_{ij}} D(\mathbf{r},\mathbf{r}_{ij}) \rangle`

where `\mathbf{r}_{ij}=\mathbf{r}_j - \mathbf{r}_i` and `r_{ij} = |\mathbf{r}_{ij}|` and `D(\mathbf{r},\mathbf{r}_{ij})` is the Dirac delta functional given as

`D(\mathbf{r},\mathbf{r}_{ij})= \int_0^1ds \delta(\mathbf{r}_i - \mathbf{r} +s \mathbf{r}_{ij})`

The Dirac delta function `\delta(\mathbf{r}_i - \mathbf{r})` gives the probability per unit volume of finding the i-th molecule at `\mathbf{r}` at time t. The variable `s\mathbf{r}_{ij}` `(0\le s\le1)` parameterises a location along the segment between the endpoints `\mathbf{r}_i` and `\mathbf{r}_j`. The potential part thus arises through the molecular interaction between two particles and is therefore spatially non-local over domains of the order of the range of the intermolecular forces. For an infinitesimal area, any interacting pair whose line of centres passes through the area contributes to the pressure tensor. The volume averaging method is one way to compute this spatially non-local potential part of the pressure tensor by localizing the stress in an arbitrarily shaped volume.

Implementation of volume averaging in OpenFOAM is quite easy thanks to its large base library. For example, tensor and field operations are used to construct the potential part of the pressure tensor through the overloaded operator "*", which represents the dyadic operator `\otimes`. The molecules are represented by "atomisticMolecule" objects, and the contribution to the potential part of the pressure tensor from each pair (I,J) of molecules is given by mol_rf = fIJ*rIJ, where fIJ and rIJ are the force and separation vector for the pair. To cut down the computational cost, the pressure tensor calculation is looped over the molecules, updating each bin with the contribution made by the corresponding interacting pair. Below is a code snippet of the volume averaging method for slab bins, as shown in the figure.

  
// loop over all molecule pairs (I, J) with J > I
for (int i = 0; i < Total_mols; i++)
{
    molI = Occupancy[i];

    for (int j = i + 1; j < Total_mols; j++)
    {
        molJ = Occupancy[j];

        // pair force and separation vector
        evaluate_force(molI, molJ, fIJ);
        vector rIJ = molI->position() - molJ->position();

        // potential part as the dyadic product of fIJ and rIJ
        // (the overloaded "*" operator returns a tensor)
        mol_rf = fIJ*rIJ;

        // extent of the segment rIJ normal to the slab bins
        scalar mol_delz = molJ->position().z() - molI->position().z();

        // indices of the bins containing the two endpoints, and the lengths
        // of the segment intercepted by those partially covered bins
        get_bin_molIJ(k_top, k_bot, bm_topz, bm_botz);

        // bins fully crossed by the segment take a share proportional
        // to the bin width bin_delz
        for (int k = k_top + 1; k < k_bot; k++)
        {
            bin_rf[k] += mol_rf*bin_delz/mol_delz;
        }

        // end bins take the partial intercepts bm_topz and bm_botz
        bin_rf[k_top] += mol_rf*bm_topz/mol_delz;
        bin_rf[k_bot] += mol_rf*bm_botz/mol_delz;
    }
}

// population of molecules in each bin (needed once per sampling step)
get_bin_pop(bin_pop);

// bin averaging over all slab bins
// (Total_bins: number of slab bins, assumed defined alongside bin_rf and bin_pop)
for (int k = 0; k < Total_bins; k++)
{
    bin_rf[k] = bin_rf[k]/bin_pop[k];
}

The potential part of the pressure tensor contribution to the kth bin is accumulated in the tensor field bin_rf, and bm_topz, bm_botz are the lengths of the segment intercepted by the partially covered first and last bins, as shown in the figure.

 

OpenFOAM in your browser

Check out SimScale, a very cool web-based platform that uses OpenFOAM as its backend. The free service for academics provides 32 cores, 50 GB of storage, and 1000 core hours per month... pretty good for £0! At the moment it doesn't seem to support mdFoam or dsmcFoam, but it can handle traditional CFD. There are quite a few simulations shared by the SimScale community to have a play with.

A similar sort of platform would massively extend the outreach of our codes.

Gmsh - easier mesh creation

Recently I came across Gmsh, a light and user-friendly tool that can be used to create meshes for OpenFOAM (and others). There's plenty of documentation outlining how to use Gmsh, so I'm only going to give you the two commands you need to populate the polyMesh directory using your Gmsh example.geo file. Begin with

gmsh example.geo -3 -o example.msh

followed by (in the directory where exampleCase is located)

gmshToFoam example.msh -case exampleCase

That is all! Everything in exampleCase/constant/polyMesh has been created. No need to run blockMesh, snappyHexMesh etc.

NanoCap: Open source software for generating capped CNTs

I would like to share briefly my experience of using NanoCap, an open source software tool, to generate capped (closed-ended) carbon nanotubes (CNTs), which I am currently using in my research. Generating open-ended CNTs is much easier than capped ones, as the former have a generic equation for generating the XYZ coordinates. For example, we have tools in mdFoam to create CNTs, but there is no functionality in mdFoam to generate capped CNTs, except by trying to combine open tubes with bucky-balls. So I have implemented functionality in mdFoam that can now read in capped CNTs generated first using NanoCap.
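For the open-ended case, the "generic equation" is essentially the map that rolls a flat graphene sheet onto a cylinder. A minimal sketch of that idea (not the mdFoam or NanoCap implementation; the 2D graphene coordinates are assumed to have been generated already) is:

#include <cmath>

const double pi = 3.14159265358979323846;

// Radius of an (n, m) nanotube from the graphene lattice constant a (~0.246 nm)
double tubeRadius(int n, int m, double a)
{
    return a*std::sqrt(double(n*n + n*m + m*m))/(2.0*pi);
}

// Roll a flat graphene coordinate (x along the chiral vector, y along the
// tube axis) onto a cylinder of radius R to obtain the 3D atom position
void rollOntoCylinder(double x, double y, double R,
                      double& X, double& Y, double& Z)
{
    const double theta = x/R;   // arc length -> angle around the tube
    X = R*std::cos(theta);
    Y = R*std::sin(theta);
    Z = y;
}

Capping the ends requires placing pentagons as well as hexagons, which is why a dedicated tool such as NanoCap is so useful.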

In addition to capped/uncapped CNTs, NanoCap can also generate various fullerene configurations. NanoCap can be downloaded from http://nanocap.sourceforge.net. I have used the Python interface to NanoCap to create capped CNTs of different radii, as shown in the figure below. NanoCap can easily create capped CNTs up to 2 nm in radius.

Interesting Images

As a keen photographer, I always like to see images of research that can capture the attention of an audience. I thought I would share a few striking images that I've come across that relate to micro flows...

The first image above was produced by researchers at MIT and the Weizmann Institute of Science (WIS). They found that the movements of hair-like cilia on corals actually help draw in nutrients to the coral and drive away waste by creating vortical flows at the surface. More information on this work (and an interesting video) can be found in the article 'Nature's Tiny Engineers': http://news.mit.edu/2014/corals-engineers-0901

We know that the manipulation of drops could be useful in a variety of applications, for example in the pharmaceutical or food industries. The second image comes from a research poster produced by Weyer et al. of the University of Liège. The poster describes a simple method for building a complex drop using a crosswise array of fibres: essentially, a large drop of oil runs down the vertical fibre and encapsulates individual dyed water drops at each intersection. The poster itself can be found at: http://gfm.aps.org/meetings/dfd-2014/5416e85769702d585cc80100

The third and final image shows a prototype of a ferrofluid ion thruster. Researchers from Michigan Technological University have proposed the use of ferrofluids (which consist of a carrier liquid and ferrous nanoparticles, and so are magnetically sensitive) in microspray thrusters that could be used to propel nanosatellites. A magnetic field is applied to a ring of ferrofluid on the thruster surface to create the ferrofluid's characteristic spikes; thrust is then created by the jets of ferrofluid that spray out from the tip of each spike when an electric force is applied. Compared with typical microspray thrusters that spray jets of liquid through hollow needle-like structures, the ferrofluid thruster is considerably more robust. More information can be found at: http://www.mtu.edu/news/stories/2013/august/cancer-treatment-ion-thruster-newest-little-idea-for-nanosat-micro-rockets.html
 

Presentation at ICNMM 2015

This is a video of the talk I gave at the 13th International Conference on Nanochannels, Microchannels, and Minichannels (ICNMM2015), San Francisco, (7th July 2015). 

 

A scientific meeting on ‘Nanostructured carbon membranes’ held at the Royal Society

On 27 and 28 April 2015, a scientific meeting, ‘Nanostructured carbon membranes for breakthrough filtration applications: advancing the science, engineering and design’, was held at the Royal Society at Chicheley Hall, home of the Kavli Royal Society International Centre, Buckinghamshire. Prof. Jason Reese, Dr Duncan Lockerby, and Prof. David Emerson were three of the organizers.

Nanostructured carbon membranes offer outstanding potential for efficient desalination and wastewater treatment that can help address the world’s water scarcity problems. This meeting brought together the top researchers in the world in the area of carbon membranes, with the goal of addressing outstanding challenges in commercialising this revolutionary technology. In total, 15 talks were given, and the topics ranged from fundamental understanding of the nanofluidics, to design and manufacture of actual membranes.

The titles of the talks and the names of the speakers can be found at this link: https://royalsociety.org/events/2015/04/nanotube-membranes/

In the near future, a video or audio recording may be made available on the Royal Society website: https://royalsociety.org/events/?type=all&direction=past&video=yes

The problem with object oriented programming...

... or, "Why object oriented C++ is sometimes considered a bad choice for scientific programming and you need to be careful when using OpenFOAM".

Following on from my previous blog entry (Determinism(?), 16.01.2015), I am still on a quest to improve the performance of the MicroNanoFlow group MD code (MNF-MD code). I have been focussing my efforts on baseline optimisation: if the basic serial performance of the code is well optimised, then more complex parallel implementations will be built on good foundations.

There are two branches to consider in code optimisation. The first is the general overhead of the programming style of the code (i.e. how object orientation is used, the choice of data structures, how well memory is managed, etc.), and the second is the overhead of the algorithms themselves. As I am approaching this code from a programmer's perspective, the most obvious first port of call is the general programming overhead.

One way to find hotspots and performance bottlenecks is to profile using a simple and relatively small test case, run for a reasonable number of iterations. The use of a small case reduces the impact of the algorithmic calculations and allows the underlying coding overhead to be revealed. So, with this in mind, I ran the MNF-MD code through GNU's gprof profiler (after recompiling with -pg) and got some interesting results!

Looking at the flat profile:

What does this tell us? Firstly, we can see that gprof is struggling a little with the complex object-oriented (OO) nature of an OpenFOAM application. That aside, we can see that a lot of our time is spent calling destructors, and that there are many calls to the function eraseHead(). This is all very interesting, as it points to the fact that lots of objects are being destroyed (and therefore created) throughout the life of the application; eraseHead() is an internal function used within the OpenFOAM dynamic data structures during a resizing operation (be that creation, destruction or something else), so the large number of calls to it tells the same story.

This is an interesting (and worrying) story for the performance of the software: ideally, an MD code should create its major data structures once, manipulate that memory during the life of the simulation, and finally destroy the structures on exit. This is because most MD simulations are (more or less) fixed in size by the initial input parameters; there may be cases of molecules being inserted or deleted, but typically the size of the holding data structures won't change that often.

On closer inspection, the reality is that a single main data structure is created in the form of a "polyCloud" (a doubly-linked list, if anybody is interested), and from this multiple dynamic structures are cleared and then re-populated, often at each time-step, to provide an easily accessible list of neighbouring molecules. On the surface this is a sensible approach, and it certainly follows the examples set by existing OpenFOAM code, but as any high-performance scientific programmer will know, continuously clearing and then resizing dynamic structures is something to be avoided if possible, and here's why:

When memory is allocated in a typical C application this is done using "malloc": the basic idea is to request a block of a given size in bytes and receive a pointer to its starting location (which is then interpreted according to the primitive type being stored). This provides a contiguous block of memory (i.e. one without any gaps), which (without going into too much detail) is typically beneficial to application performance. The C++ equivalent is a call to "new", which under the hood does much the same thing as "malloc".

However, this is where the object-oriented approach can sometimes mislead and result in lower-performance applications than traditional C (or Fortran) code. When a block of memory is allocated in a single "malloc" or "new" call it is contiguous, so (for example) we might allocate a number of arrays, each equal in size to the number of molecules in the simulation, to store their various properties (i.e. position, velocity). In the object-oriented OpenFOAM equivalent, however, this might take the form of a container data structure (say a List) that is populated with as many Molecule objects as there are molecules in the simulation, by iterating as many times as there are molecules and using the "append" function provided by the data structure.

This is great, you are probably thinking: it means (a) the data structure resizes itself as needed and (b) each element is a compound object of type Molecule that contains all of the data we need in one place. And you'd be right, except for the following. Every time "append" is called, a number of operations need to happen to resize the block (which, in cases where contiguity is to be preserved, involves creating a totally new data structure and copying everything across before the old one is deleted), and every single element in the structure is created using "new" (as they are objects), meaning that the actual data for each Molecule object could be anywhere in memory. There is no guarantee that consecutive calls to "new" will provide the next block of memory from the end of the last, so we end up with our Molecule objects fragmented throughout the address space.
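To make the contrast concrete, here is a deliberately simplified (non-OpenFOAM) sketch of the two layouts being described: one contiguous allocation per property, sized once, versus a growable container of individually heap-allocated molecule objects:

#include <memory>
#include <vector>

struct Molecule
{
    double x, y, z;    // position
    double u, v, w;    // velocity
};

// Layout A: one contiguous block per property, allocated once.
// Consecutive molecules are adjacent in memory, which is cache friendly.
struct ContiguousCloud
{
    std::vector<double> x, y, z, u, v, w;

    explicit ContiguousCloud(std::size_t nMolecules)
    :
        x(nMolecules), y(nMolecules), z(nMolecules),
        u(nMolecules), v(nMolecules), w(nMolecules)
    {}
};

// Layout B: a growable container of individually allocated objects.
// Each append may trigger a reallocation-and-copy of the container, and
// each individual allocation can land anywhere in the address space, so
// the molecule data end up fragmented.
struct FragmentedCloud
{
    std::vector<std::unique_ptr<Molecule>> molecules;

    void append(const Molecule& m)
    {
        molecules.push_back(std::make_unique<Molecule>(m));
    }
};

Layout B is much closer to what repeatedly cloning objects into a dynamic list produces, and it is the pattern the profile suggests should be kept out of the time-stepping loop.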

So, OK, we can see that from a memory-layout perspective the OO approach may not be ideal for high-performance applications, but it does provide a level of flexibility, and with careful programming we can mitigate the cost anyway (OpenFOAM provides a FixedList structure, for example, for cases where the size of the data structure is known beforehand). However, there is also a non-trivial computational overhead associated with creating an object compared with something simpler (say a struct) or, of course, simply using a primitive type. If this overhead were only encountered once, at Molecule initialisation, it could be overlooked, but we can see from the profile that there is in fact a significant amount of object destruction (and therefore instantiation) going on here, especially of List objects. This is bad: it means that memory allocation and deallocation is a dominant overhead, and in simulation software that effectively relies on a fixed problem size, this cannot be optimal.

And so to the crux of the problem, and why programming in the OpenFOAM style has to be done so carefully. I mentioned that a single data structure is created as part of the polyCloud once at startup, and that multiple dynamic structures are repeatedly cleared and repopulated from it. This in itself is a large contributing factor to the overhead seen in the profile; many of those calls to "eraseHead" stem from this.

However, there is another issue to consider, which perhaps doesn't reveal itself in the gprof flat profile. As part of the process of populating some of these structures we see something like "append(molecule.clone())". This innocuous-looking call has big implications: rather than storing a pointer to the existing Molecule object in the polyCloud structure, a totally new copy is created (including all of the instantiation overhead associated with that). This copy is then used for the lifetime of the structure (i.e. until the next time "clear()" is called), at which point it has to be destroyed, and so the cycle continues. This is the essence of the danger of high-performance OO programming: it becomes second nature to create and destroy objects while forgetting how much overhead is actually associated with doing so (not to mention that the memory associated with the application is constantly being scattered around, destroying potential speed-up from CPU cache locality).

There are no real conclusions to this post, other than to say that the current data structures in the MNF-MD code are being reconsidered. The ideal is a situation where only a single instance of each Molecule object is ever created and as many data structures as possible are fixed in size for the lifetime of the simulation. Clearly not the easiest of tasks, but I'll report back next time and provide some before-and-after comparisons!

Problems with popular crystal structure software

In order to successfully apply molecular dynamics simulations to real world problems, the systems under investigation must be represented by atomistic (or sometimes coarse-grained) models. Creating these models requires the knowledge of the initial positions of the atoms (or coarse-grained particles).

Many nanofluidic systems require the modelling of a substrate in contact with a fluid. For fluids, the initial positions of the atoms/particles can be selected randomly, but the substrate is usually some type of crystal structure. For example, I have been using a crystal substrate of alumina (aluminium oxide) in my systems.

Most of the time, one can use crystal structure software to quickly create these types of substrates. Most packages use .cif (Crystallographic Information File) files as input (the American Mineralogist Crystal Structure Database is a good source). The .cif files contain the structural information needed to create the unit cell of the crystal, such as the coordinates of the asymmetric unit, the unit cell dimensions (lengths and angles) and the symmetry group operations for the particular type of crystal. Once the unit cell has been created, a substrate can be quickly built by replicating the unit cell any number of times in any dimension. In the past, I have used a piece of open source software called Avogadro to check the crystal structure of substrates I have built with my own codes, but recently I found another piece of software called Mercury (free licence version).

So it was to my surprise that, after giving the same CIF input to both codes, two very different unit cells were calculated. To highlight the major differences between the output of the two codes, the CIF for iron oxide has been used as input:

 

The image on the left is from Avogadro and the image on the right is from Mercury. This is worrying, as both should, in theory, calculate the same unit cell. From a simple visual inspection of the two structures you can tell they are very different... so the question is, which software has produced the correct unit cell?

The answer is neither! While Avogadro preserves the stoichiometric ratio of the crystal (this is very important, as it maintains charge neutrality), it places many overlapping atoms (mainly oxygen atoms), which is clearly incorrect. While the structure calculated by Mercury appears visually okay, i.e. no overlapping atoms, it seems to calculate a somewhat arbitrary number of atoms in the unit cell, not preserving the stoichiometric ratio... I still have not been able to understand how or why Mercury calculates either the total number of atoms or the number of each atomic type.

So the take-home message from this post should probably be not to simply trust the output of software. I normally write my own codes to calculate unit cells and then replicate them to create substrates, so luckily these "bugs" have not troubled me. It should also be noted that not every type of crystal has been tested as input to both codes, so they may well produce identical output for other crystals. It was just that I happened to stumble upon the differences for iron oxide!
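For anyone building their own substrates, the replication step itself is straightforward once a correct unit cell is in hand. A minimal sketch for an orthogonal cell (non-orthogonal cells need the full set of cell vectors constructed from the lengths and angles in the .cif file) is:

#include <vector>

struct Atom
{
    double x, y, z;
};

// Replicate a unit cell nx x ny x nz times along orthogonal cell
// vectors of length a, b and c to build a substrate
std::vector<Atom> replicateUnitCell(const std::vector<Atom>& unitCell,
                                    double a, double b, double c,
                                    int nx, int ny, int nz)
{
    std::vector<Atom> substrate;
    substrate.reserve(unitCell.size()*nx*ny*nz);

    for (int i = 0; i < nx; ++i)
        for (int j = 0; j < ny; ++j)
            for (int k = 0; k < nz; ++k)
                for (const Atom& atom : unitCell)
                    substrate.push_back({atom.x + i*a, atom.y + j*b, atom.z + k*c});

    return substrate;
}

The hard part, as the iron oxide example shows, is getting the unit cell itself right: applying the symmetry operations to the asymmetric unit without creating duplicate or overlapping atoms.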

CAD based initialization of molecular zones

I would like to share a CAD-based method we have developed for initialising molecular dynamics simulations. In this method a CAD (3D) object is placed in the molecular zone, creating positive and negative CAD zones. The positive CAD zone is the part of the molecular zone occupied by the CAD object, while the negative CAD zone is the rest of the (unoccupied) molecular zone.

For example, consider a sphere object in the molecular zone, as shown in the figure. The negative CAD zone is filled with water molecules, while the positive CAD zone is filled with a C540 fullerene molecule. The negative-zone filling can start from a pre-determined distance from the wall, based on the fluid-wall interaction.

In a second example, consider a CAD object representing a nano-nozzle, as shown in the figure. Here the positive CAD zone is filled with a simple lattice of solid argon, while the negative CAD zone is occupied by argon gas.
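For the sphere example, the test that decides which zone a candidate lattice site belongs to is just a distance check against the sphere surface. A minimal sketch (the names are illustrative, not the mdFoam implementation; a general triangulated CAD surface would need a proper inside/outside test such as ray casting):

#include <cmath>

struct Vec3
{
    double x, y, z;
};

// Returns true if a candidate lattice site lies in the positive CAD zone,
// here a sphere of radius R centred at c. Positive-zone sites receive the
// CAD object's molecules (e.g. the C540 fullerene), while the remaining
// negative-zone sites receive the fluid.
bool inPositiveZone(const Vec3& site, const Vec3& c, double R)
{
    const double dx = site.x - c.x;
    const double dy = site.y - c.y;
    const double dz = site.z - c.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz) <= R;
}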

Once incorporated into the mdFoam code, this CAD-based initialization should be a real help to our group in carrying out molecular dynamics simulations of complicated geometries.

Running large MD simulations on ARCHER

In a recent blog post titled "MDFoam on Archer" by Saif, it was announced that our team has migrated the MDFoam code from OpenFOAM-1.7 to OpenFOAM-2.1. They have also run a few simulations to test the performance of our MD code on ARCHER and found that the code scales well, which is happy news for all those who, like me, want to run large simulations on ARCHER.

Here I would like to share some of what I learned while trying to run some of my large simulations on ARCHER, which take more than a week to complete. It is important to know that ARCHER has a runtime limit of 24 hours on the regular queue (48 h on the long queue), which means that any simulation that runs for more than a day will be terminated by the job scheduler (ARCHER uses PBS for job scheduling). In this case we need to restart the simulation from the last timestep at which it stopped. If it is only one or two simulations that you would like to run for a week, one option is simply to resubmit the restart job every day for 7 days until the simulation finishes. However, if you have several simulations to run over several weeks, resubmitting the restart jobs manually becomes tedious. One solution is to automate the job submission and use check-pointing to save the current state of a simulation when it is terminated by the scheduler. Below is the PBS script that I used to test automating restart job submission. Please feel free to use the script as per your needs, but I highly recommend testing it on small simulations before using it to run large ones, to make sure you get what you expect.

The other two important points in order to use this script are to: i) enable checkpointing by using startFrom latestTime; in your controlDict file. If checkpointing is not enabled then the MD simulation will always restart from the same timestep (timestep 0 if startTime 0 is used) and will never finish. ii) load the leave_time utility by running the command module load leave_time. To conclude, this script can be used on any HPC system, as long as you use a utility such as GNU's timeout instead of the leave_time command to kill the mdPolyFoam application before the runtime limit.

########## Beginning of the script ##############
#!/bin/bash --login
#PBS -N Test
#PBS -l select=2
#PBS -l walltime=24:00:00
#PBS -A d70
#PBS -j oe
#PBS -V

module load leave_time

export PBS_O_WORKDIR=$(readlink -f $PBS_O_WORKDIR)
# change to the directory that the job was submitted from
cd $PBS_O_WORKDIR

# run mdPolyFoam using 48 cores
# leave_time is an ARCHER utility (similar to GNU timeout)
# it kills mdPolyFoam 30 seconds before the walltime limit, i.e. at 23h:59m:30s
leave_time 30 aprun -n 48 mdPolyFoam -parallel

# restart simulation if not completed
# also backup any data that is written by MDFoam
# please note that I use file "processor0/timings/cpuTimeProcess_evolve_average.xy"
# to get the current time
# so make sure this file exists for your case, or use another file to get the current time
finishTime=$(grep -m 1 '^endTime' system/controlDict | awk '{print $2}' | rev | cut -c2- | rev)
currentTime=$(awk 'END{print $1}' processor0/timings/cpuTimeProcess_evolve_average.xy)

if [[ "$finishTime" -ne "$currentTime" ]]; then
# backup/copy information (Note: use backup only if needed)
mv processor0/fieldMeasurements processor0/fieldMeasurements-$currentTime
# after backup submit restart job 
qsub submit.pbs
else
echo "Simulation is completed!"
fi

########## End of the script ##############

Successful EPSRC grant proposal to access ARCHER supercomputer

Matthew Borg (PI) and Jason Reese (Co-I) have successfully won £10k (equivalent) funding from an EPSRC Resource Allocation Panel to run large-scale MD simulations on ARCHER, the latest UK National Supercomputer (Tier 1). The project is entitled “Parallel Molecular Dynamics Simulation of Nanoscale Desalination Membranes” and will run for 6 months starting from 1st March 2015. The amount of computing resource allocated to this project is 16 MAUs. Check out the full list of successful applicants here.

The aim of this project is to enable and produce large-scale molecular dynamics (MD) simulations of water within desalination membranes consisting of aligned carbon nanotubes (CNTs). This desalination problem is of great interest because experiments on water flow in CNT membranes have indicated several orders of magnitude of flow enhancement over conventional membranes, which could mean reduced energy costs and environmental impact for future reverse osmosis plants.

Continuum Simulations of CNTs

My paper has been cited for the first time! It was cited by Jamali and Shogh in their paper titled "Computational fluid dynamics modeling of fluid flow and heat transfer in the central pore of carbon nanopipes".

They investigate the causes of the flow-rate enhancement that has been found, both experimentally and through particle simulations, in nanopipes such as CNTs. To do this they vary a range of parameters, such as the length and diameter of the pipe, the properties of the fluid (such as temperature), and the liquid-wall interaction through the slip length. As expected from their use of the Navier-Stokes equations, their simulations suggest that, of the parameters tested, the partial slip boundary condition at the wall is the only factor that could explain the fast flow rates that have been measured in CNTs.

There have now been several CFD studies that aim to replicate CNTs of various diameters. Popadic et al., Huang et al. and I have also performed CFD simulations and compared the results with MD simulations; in each case fair agreement was achieved in flow rates and/or streamwise properties of the flow. In addition to CFD simulations, there have been attempts to modify the classical Hagen-Poiseuille equation (Mattia et al., and Sisan and Lichter) to include some of these effects, such as the large pressure drop that occurs at the inlet and outlet of the pipe.

Other factors have been suggested to explain the higher-than-expected flow rates found in nanotubes, such as the structure that the liquid molecules form within the pipe. These physical phenomena are not included in CFD simulations, which suggests that the major factor causing the high flow rates (for nanotubes above the continuum limit) is the slip at the wall.
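For reference, a standard way to express this (not taken from the papers above) is the slip-corrected Hagen-Poiseuille flow rate for a pipe of radius `R`, length `L`, slip length `L_s` and pressure drop `\Delta p`:

`Q = \frac{\pi R^4 \Delta p}{8 \mu L}\left(1 + \frac{4 L_s}{R}\right)`

so the enhancement factor over the no-slip prediction is simply `1 + 4L_s/R`, and a sufficiently large fitted slip length can reproduce almost any measured enhancement.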

MDFoam on Archer

Recently our team (Dr Matthew Borg, Dr Stephen Longshaw and myself) began migrating the MDFoam code from OpenFOAM v1.7 to OpenFOAM v2.1. Our Lagrangian particle-tracking algorithm differs from the original OpenFOAM source code, and we have also developed significant functionality for multi-scale flow engineering simulations that is not present in the original OpenFOAM source code. Moreover, we plan to run large-scale MD simulations using ARCHER, the UK national supercomputer; this blog post presents recent performance results on ARCHER.

Large Scale Simulation on Archer

Beginning October 2014, I started working on optimising the OpenFOAM code to enable large-scale simulations on ARCHER; this project is supported by ARCHER embedded CSE support (eCSE). The plan is to scale our simulations on ARCHER so that our researchers can eventually obtain realistic results quickly. Presently, parallelism both within and across nodes in OpenFOAM is handled by MPI, so I plan to replace the existing pure-MPI parallelism with mixed-mode MPI/OpenMP parallelism.

Each ARCHER node contains two CPUs with 12 cores each, giving 24 cores per node, which can be hyper-threaded to 48 threads in all. Using MPI for parallelism within a node means communication between cores, so one way to get better performance on ARCHER is to reduce MPI communication by using multi-threading within nodes and MPI only for communication across nodes.
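A minimal sketch of the mixed-mode pattern being described (illustrative only, not the eCSE implementation in OpenFOAM) is one MPI rank per node or NUMA region, with OpenMP threads filling the cores inside it:

#include <mpi.h>
#include <vector>

int main(int argc, char** argv)
{
    // Ask for an MPI library that tolerates threads; FUNNELED means only
    // the main thread of each rank makes MPI calls.
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Each rank owns a contiguous chunk of an (illustrative) global problem
    const long nLocal = 1000000;
    std::vector<double> data(nLocal, 1.0);

    // Fine-grained parallelism inside the rank: OpenMP threads
    double localSum = 0.0;
    #pragma omp parallel for reduction(+:localSum)
    for (long i = 0; i < nLocal; ++i)
    {
        localSum += data[i];
    }

    // Coarse-grained parallelism between ranks: MPI
    double globalSum = 0.0;
    MPI_Reduce(&localSum, &globalSum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}

On a machine like ARCHER this would typically be launched with one or two ranks per node and 24 or 12 threads per rank, so that most data exchange inside a node happens through shared memory rather than MPI messages.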

Performance

A performance graph for two kinds of simulation run on ARCHER is presented in figure 1: one simulation contains nitrogen molecules and the other water molecules. The problem sizes differ as well; nitrogen is simulated with 33696 atoms and water with 256000 atoms. The two different sizes are presented to show how the scaling behaviour depends on problem size and core count.

Because the nitrogen simulation has only a medium number of atoms, its performance, unlike that of water, does not increase significantly at 48 cores: some cores can end up idle or performing little work. For water, the larger number of atoms means there is work for every core, giving better performance. So simply using more cores does not by itself improve performance; the problem size also has to be appropriate.


Figure 1: Performance of MDFoam on Archer

The performance chart presented is for pure MPI-based parallelism; performance results for the latest mixed-mode work will be added soon. Lastly, if your research is similar to the work our group does and you are interested in using our code, please feel free to contact Dr Matthew Borg.

Determinism(?)

I recently began the task of working out why a piece of MD code ran slower for the same problem than an almost identical piece of code. Clearly the issue was down to compiler optimisation being circumvented in the slower code, but it was surprising quite how much of an effect this had on performance. 

Like any good coder, the first thing I tried was the easy option: would a later version of the same compiler have better optimisation methods? The slow-down was initially seen using GCC 4.4 (used for historical reasons), so I re-compiled using GCC 4.8. Annoyingly, the performance difference remained. Even more annoyingly, the simulation results were now completely different from those with 4.4: the overall system energy was similar, but every single molecule had a different position and different associated properties after not many time-steps.

This unexpected result led me down a rabbit hole, forcing questions to be answered regarding the nature of determinism in software and the best way to ensure repeatable results.

It isn't unusual for different compiler versions to produce different floating-point results, particularly if using unsafe optimisations or different hardware. But given that I had compiled both of these applications with the same flags (even down to specifying the architecture as "core2", the highest available in GCC 4.4), no unsafe optimisations were in place and everything was running in serial, I found the whole thing worrying.

You may now be thinking, "but the overall system energy is the same, and MD simulations are typically inherently random anyway, so what's the issue?" Well, and this may be personal, I believe that a computer simulation should be controllably deterministic. By this I mean that if I write a scientific paper and specify the computing architecture and version of the software used, along with the input parameters, then the reader should be able to repeat my experiment exactly. Arguably this could be extended to say that the compiler and the specific CPU used should also be defined, and that if the reader uses anything other than these then they can expect different results; but with my computer-scientist hat on, that does not feel like good, robust simulation code. It is unlikely that everybody will have access to exactly the same compiler or CPU type (though the architecture may be the same, i.e. x86); it is reasonable to require the compiler to be GCC, but not to insist that it must be GCC 4.4.7 for things to work as expected.

So what has changed between GCC 4.4 and GCC 4.8 to produce such different code? To cut a very long story short, I am not yet at a final answer. What I can say is that differences still appear if I compile at any optimisation level (i.e. -O1, -O2, -O3), and even without optimisation. However, GCC has an interesting compilation option:

-ffloat-store Do not store floating point variables in registers, and inhibit other options that might change whether a floating point value is taken from a register or memory. This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a "double" is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use -ffloat-store for such programs, after modifying them to store all pertinent intermediate computations into variables.

When this is used with GCC 4.8 at any optimisation level, the code produces results identical to those from GCC 4.4 without the flag. Clearly GCC 4.8 introduces a slightly different algorithm somewhere that either uses, or no longer uses, the excess precision available in the FPU. Exactly which of the new options is responsible has yet to be determined, but it raises an interesting question: given that the results produced by GCC 4.8 are probably more accurate (though this cannot be guaranteed), yet many users of the code in question are still using GCC 4.4 because that is what the previous version required, is it better to reduce the quality of the output compiled with later GCC versions, or to make the user base aware and suggest they upgrade their compiler? I'll let you know if I come to a conclusion!

To wrap up this post, another determinism-related issue has cropped up during my investigations: how best to use random values in scientific simulations.

True randomness does not really exist in standard computation (we'll ignore quantum computers for now; producing truly random values is a scientific discipline of its own). When you call "rand(seed)" in your software, you are producing a deterministic sequence of pseudo-random values based on the input seed. Often it seems sensible to seed this using a non-specific value such as the time, so that a random effect is apparent to the user because the sequence changes depending on when they run the software. The problem with this for scientific software boils down to repeatability: results published in a paper should be repeatable by the reader. The only sensible thing, therefore, is to define the seed used and publish it along with the results.

Finally, a word of warning for those developing with OpenFOAM: a class called Random is provided as a primitive, and if you want to produce random values it seems sensible to use it. However, it is very important to understand how this class actually works. It is simply a wrapper around system-level random functions, so when you instantiate a copy in your code and provide it with a seed, it actually calls the standard system-level seed function, meaning ALL instances of Random are effectively re-seeded by the last instance to be created.

This won't be an issue if you create an instance, use it immediately and then let it die or lie dormant. However, if you create one instance of Random as part of one object and then another as part of a second object, but then use the instance belonging to the FIRST object, the pseudo-random sequence will be based on the seed you passed when creating the SECOND object.
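One way around this (outside of OpenFOAM's Random class) is to give each object its own random engine, so that seeding one instance can never affect another. A minimal sketch using the C++11 <random> facilities:

#include <random>

// Each instance owns its engine and its seed, so two objects created with
// different seeds produce independent, repeatable sequences, unlike
// wrappers around the global srand()/rand() state.
class LocalRandom
{
public:
    explicit LocalRandom(unsigned int seed)
    :
        engine_(seed),
        uniform_(0.0, 1.0)
    {}

    // Uniform pseudo-random scalar in [0, 1)
    double scalar01()
    {
        return uniform_(engine_);
    }

private:
    std::mt19937 engine_;
    std::uniform_real_distribution<double> uniform_;
};

Publishing the seed alongside the results then makes the pseudo-random sequence itself part of the reproducible input.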

It is also worth noting that, by default, the Random class uses the oldest and (arguably) worst of the built-in POSIX random number generators, unless you compile with the flag "-DUSE_RANDOM", in which case it uses the best. For reference, there are three available by default on most systems:

  • rand48() family of methods: These functions generate pseudo-random numbers using the linear congruential algorithm and 48-bit integer arithmetic (this is used by OpenFOAM by default).
  • rand(): This function returns a pseudo-random integer in the range 0 to RAND_MAX inclusive (i.e., the mathematical range [0, RAND_MAX]).
  • random(): The random() function uses a nonlinear additive feedback random number generator employing a default table of size 31 long integers to return successive pseudo-random numbers in the range from 0 to RAND_MAX. The period of this random number generator is very large, approximately 16 * ((2^31) - 1) (this is used by OpenFOAM if compiled with -DUSE_RANDOM).

So work continues to make sure the code in question finds a good balance between reliable determinism and performance!

Wetting properties at the Nanoscale

A brief introduction to the modified Young's equation for nanodroplets.

The contact angle between the liquid-gas interface and the solid surface is used to describe the wetting property. From a thermodynamic and mechanical point of view, the contact angle is set by a balance between the solid-liquid, liquid-vapour and solid-vapour surface tensions. This relationship is the well-known Young's equation.
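For reference, the classical Young's equation relating these surface tensions to the (macroscopic) contact angle `\theta_\infty` is

`\gamma_{lv}\cos\theta_\infty = \gamma_{sv} - \gamma_{sl}`

where `\gamma_{sv}`, `\gamma_{sl}` and `\gamma_{lv}` are the solid-vapour, solid-liquid and liquid-vapour surface tensions.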

However, the wetting of a nanodroplet is determined not only by the balance of surface tensions, but also by the tension of the line where the three distinct phases coexist. By analogy with surface tension, which is defined as the excess free energy per unit area of an interface separating two phases, line tension is the excess free energy per unit length of a three-phase contact line. A positive line tension makes the droplet shrink, while a negative line tension makes it spread. See the following picture:

                                              

To take line tension into account, Young's equation has been modified as

`\cos\theta = \cos\theta_\infty - \frac{\tau}{\gamma_{lv} r_B}`

where `\theta_\infty` is the macroscopic (Young) contact angle, `\tau` the line tension, `\gamma_{lv}` the liquid-vapour surface tension and `r_B` the radius of the droplet's contact (base) line.

To date, the sign and magnitude of the line tension remain controversial in the literature. We have obtained some results for argon, water and salt-water nanodroplets, and hopefully our results will be published this year.

                                             

Multi-level parallelization and hybrid acceleration of simulation codes

While on my way back from a stimulating research meeting at Glasgow I read an article "The world’s fastest software for Molecular Dynamics on CPUs & GPUs" from the Swedish e-Science Research Centre website.

The article described the parallelization strategy of GROMACS and highlighted the challenges posed for parallelization by hardware that is becoming ever more heterogeneous. This is true: processor chips with multiple cores are now a common feature, and hundreds of cores are available in some GPUs. Simulation codes are also trying to keep pace with the development of multi-core systems; for example, "Molecular Dynamics Simulation of Multi-Scale Flows on GPUs" describes hybrid acceleration of MPI-based OpenFOAM to harness the computational power of GPUs.

It is expected that the number of cores per chip will increase much faster than processor clock speeds. In such a scenario, MPI, the de-facto standard for large-scale scientific computation, will face stiff competition from hybrid parallelization based on process-level MPI and thread-level OpenMP. Most simulation codes still rely on an MPI library for their parallelization. For example, in OpenFOAM the MPI library is plugged in through the "Pstream" interface, where all parallel communications are wrapped, while its "decomposePar" utility implements the domain decomposition. An interesting article, "Evaluation of Multi-threaded OpenFOAM Hybridization for Massively Parallel Architectures", reports a case where MPI communication becomes a real bottleneck to scalability, and a hybrid multi-threaded approach is suggested as a possible solution.

The multicore evolution has certainly left the memory and cache subsystem lagging further and further behind, and with non-uniform memory access (NUMA) effects this can indeed become a performance bottleneck. A multi-level parallelization strategy addresses the NUMA and communication-related issues by combining MPI (distributed memory) and threads (shared memory) for efficient intra-node parallelism. GROMACS, for example, implements multi-level parallelization as well as hybrid acceleration by:

  1. SIMD (single-instruction multiple-data) parallelization at the instruction level,
  2. OpenMP between cores within nodes,
  3. MPI between nodes,
  4. employing hybrid acceleration such that GPUs carry out compute intensive part (non-bonded force calculation) while rest is run on CPUs.

To know more, refer to Pronk S et al. (2013) "GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit", Bioinformatics, Vol. 29, pp. 845-854.

GROMACS takes advantage of both the MPI and OpenMP programming models. That is why GROMACS has been identified by the PRACE pan-European HPC initiative and the CRESTA exascale collaborative project as a key code for effective exploitation of both current and future HPC systems (http://www.hector.ac.uk/cse/distributedcse/reports/GROMACS/).

Annual Christmas Meeting 2014

On Monday 1st December, we had our annual Christmas meeting at the University of Strathclyde in Glasgow. This event is always interesting and fun, bringing together the groups from all four institutions (the universities of Strathclyde, Edinburgh, and Warwick, as well as Daresbury Laboratory).

The afternoon was filled with impressive presentations from a selection of researchers. The wide range of work going on across the four institutions is exciting –  spanning multiphase flows, complex fluids, electro-osmotic flows, and more. With the research continuing to grow, the future is looking bright for the partnership!

The presentations were followed by a Christmas dinner at The Restaurant in Princes Square (thanks to Dr Konstantinos Ritos and Dr Craig White for organising this). Continuing a group tradition, entertainment was provided in the form of a festive fluids poem from Dr Duncan Lockerby! The night then finished at a lively little Brazilian bar with some cocktails and dancing (for some). This was a very nice way to start the festive period!

Here are a few photos from the dinner...

 

 

Presentation at NASA, Brown University and at the APS 67th DFD Meeting in San Francisco

I took part in the American Physical Society's Division of Fluid Dynamics (APS-DFD) meeting held in San Francisco, CA, from November 23-25, 2014. The conference was hosted by Stanford University, UC Berkeley and Santa Clara University. The scientific program included award lectures, invited talks and mini symposia, many parallel sessions with contributed papers, and poster presentations. I gave a talk, 'Improved Proper Orthogonal Decomposition for Noise Reduction in Particle Flow Simulations', on the last day of the conference.

On 26th of November, I went to NASA Ames Research Centre to give a presentation titled 'An Evaluation of Popular Noise Reduction Techniques for Multi-scale Problems Involving Particle Flow Simulations'.

Fig: At NASA.

Afterwards, I flew to Boston to meet Prof. George Karniadakis and his group at Brown University. On 28th of November I gave a talk there on 'Comparison of Algorithms for Solving Statistical Inverse Problems in the Framework of Particle Fluid Simulations'.

Gallery of Fluid Motion

The winners of this year's Gallery of Fluid Motion prize can be seen here.

If you want to be inspired by Fluid Mechanics research, I strongly recommend you take a few minutes to watch some of the winning videos.

FADE insertion of bucky-balls in water using Molecular Dynamics

Conferences for 2015

Elecro-osmotic flows...what are they? A VERY brief intro

So my current research is concerned with simulating electro-osmotic flows (EOF), especially in really small-diameter pipes and channels. I thought I would give a brief introduction to EOF because, unlike pressure-driven flow through a pipe, for example, EOF are very different: an electric field is used to drive the fluid instead.

However, simply applying an electric field at the ends of your pipe/channel will not guarantee an EOF... as a special structure, called the electric double layer, must first form at the interface between the fluid and substrate wall, shown in the following figure:

The green and red circles represent ions in the fluid, and the substrate wall is shown in grey. Once this layer is present, applying an electric field will give rise to an EOF, as seen in the next figure:

The difficulty in studying these systems stems from the need to correctly model the interactions at the fluid-substrate interface. To do this effectively, the fluid can be treated discretely, which means that each atom/molecule of the fluid is represented explicitly. This is in contrast to CFD methods, where the fluid is treated as a continuous body. Unfortunately, this means that CFD methods cannot accurately capture the interactions occurring at the interface, which, as systems get smaller, start to dominate the system behaviour.

So I use molecular dynamics (MD), which does treat the fluid discretely, to study fluid flow under electric fields! The drawbacks are that MD simulations are generally much more computationally expensive than CFD, and are also limited to small spatial and temporal scales.
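For context, in the continuum picture the bulk electro-osmotic velocity far from the wall is often estimated with the standard Helmholtz-Smoluchowski relation

`u_{EOF} = -\frac{\varepsilon \zeta E}{\mu}`

where `\varepsilon` is the permittivity of the fluid, `\zeta` the zeta potential at the wall, `E` the applied electric field and `\mu` the viscosity. MD is needed precisely where the assumptions behind this relation (a thin, continuum double layer and uniform fluid properties) break down.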

I hope that this brief introduction has helped in understanding what EOF is.

Relating our work to automatic music generation

For a bit of fun, I related our molecular dynamics work to automatic music generation. Since I first posted this on YouTube (in June) it has been viewed 2000 times, and the algorithm has been coded/implemented by at least four different people in different languages (one is a plugin for Reaktor that has now been downloaded 500+ times). It just goes to show: the impact of our work is often difficult to predict!

Presentation at ICNMM 2014

This is a video of the talk I gave at the 12th International Conference on Nanochannels, Microchannels, and Minichannels (ICNMM2014), Chicago, (5th Aug 2014)

ICNMM 2014

This week, I am attending my first international conference - the International conference on nanochannels, microchannels, and minichannels (ICNMM), in Chicago - along with Dr. Duncan Lockerby.

Today, I presented our recent work on a multiscale method for the efficient simulation of nanofluidic networks of arbitrary complexity (you can read the whole paper here), which got a very positive reception. So, tonight we'll be celebrating this occasion by experiencing a bit more of Chicago, which so far has included a lot of craft ale, a smashing skyline, a giant mirrored bean,

and for some reason bacon-and-cheese-imbued cocktails.

Attended WCCM XI

I attended my first conference this week, which was the 11th World Congress on Computational Mechanics (WCCM XI) in Barcelona, Spain. The conference was a week long and was attended by over 3,000 people!

I gave my presentation on the first day of the conference to a packed crowd. My work on molecular dynamics pre-simulations for nanofluidic computational fluid dynamics went down well, and there were plenty of insightful questions afterwards. Having given my presentation early on, I was able to spend the rest of the week enjoying other people's talks and exploring Barcelona. One such talk, by Henrik Rusche, was very relevant to the work of our research group: he described a hybrid continuum-particle solver developed within OpenFOAM. A paper on this can be found here, and they even cite some of the work done by our research group!

Here is a picture of me at the congress banquet with some of the entertainment.

Naked Scientists

I did an interview on BBC Radio’s “Naked Scientists” programme about our research using supercomputers. You’ll be pleased to know that no clothes needed to be removed. The programme was to mark the inauguration of ARCHER, the UK national supercomputer based in Edinburgh, which I hope we will be using for some of our simulations shortly… Click here for a transcript of the interview.

Opportunities and Challenges in Non-Continuum and Multiscale Fluid Dynamics

Delegates - Opportunities and Challenges in Non-Continuum

As outreach to the academic and industrial community, we organised an open two-day colloquium in December 2013 on the topic “Opportunities and Challenges in Non-Continuum and Multiscale Fluid Dynamics”. The event attracted over 70 participants, with strong representation from industry, including talks from Steering and Impact Committee members. The first day comprised a series of keynote talks. The second was an ‘unconference’, with the agenda set dynamically by the delegates.

Space engine!

Here's an interesting application that I'd like to share: Knudsen compressors as propulsion devices. The Knudsen compressor is a solid state pump (free of working fluids or lubricants) that operates due to thermal transpiration, which is a rarefied gas effect. The results of thermal transpiration are most easily illustrated by considering a simple Knudsen compressor configuration: a channel with solid walls connecting two reservoirs in the presence of a temperature gradient.

The characteristic length of the channel is smaller than the molecular mean free path of the gas, so that molecule-molecule collisions are not significant and molecule-surface collisions dominate. Thermal transpiration is then observed as a flow of gas along the channel's inner surfaces from the cold to the hot region (black arrows in the diagram above). Note the opposing pressure-driven flow (white arrow in the diagram above) in the central portion of the channel, which develops as the pressure in the hot reservoir increases.
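In the free-molecular limit this balance has a well-known steady state with no net flow, in which the reservoir pressures satisfy

`\frac{p_{hot}}{p_{cold}} = \sqrt{\frac{T_{hot}}{T_{cold}}}`

which is the maximum compression a single stage can provide; practical Knudsen compressors therefore cascade many such stages.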

Since they facilitate the compression or transport of a gas, Knudsen compressors may be used for the propulsion of small spacecraft in low Earth orbit. In such a rarefied environment the molecular mean free path is large. Temperature variations are also large: without thermal controls, the temperature of the Sun-facing side of the International Space Station would hit ~390 K, while thermometers on the dark side would plunge to ~120 K. As harsh as these conditions may seem to us, they are in fact favourable for the efficient operation of Knudsen compressors.

April Workshop

On the 29th-30th of April, we had a workshop in Ambleside (Lake District) funded by the EPSRC programme grant and the EPSRC’s Creativity@home initiative. The first day was a standard workshop and update session, providing a concise and focussed roundup of technical progress made in the Programme. The second day was led by Spela, Alessio and Matthew, with the aim of developing ideas for new and interesting opportunities. Some of these ideas were developed in break-out sessions, with recommendations made for future work. To clear our heads at the end of the first day, and to set the tone for the more creative agenda of the following day, we went for a 'relaxing' stroll.

Here are some pictures:



Matthew Borg delivers his paper “Fluid flows in nano/micro network configurations”

...at the American Physical Society, 65th Annual Fall DFD Meeting in San Diego:

Advanced Nanomaterials Manufacturing: Fluid Flow Assisted Control of Nucleation and Self-Assembly Processes

Konstantinos has been working for 3 months on a mini-project, sponsored by the Strathclyde University funding mechanism called “Bridging the Gap”.

The £16k mini-project, headed by Monica Oliveira,  is called Advanced Nanomaterials Manufacturing: Fluid Flow Assisted Control of Nucleation and Self-Assembly Processes, and also involved experimental researchers from Strathclyde’s Chemical & Process Engineering Department. Its aim was to construct a fluidic platform with interchangeable geometries to generate and control different flow kinematics in order to analyse the effect of fluid dynamics on nucleation.

The numerical part of the work was computer simulations using and extending our existing molecular dynamics (MD) code in order to obtain deeper insight into the fundamental phenomena.

After the first meeting with the partners involved, we decided to develop a new Brownian Dynamics (BD) code using OpenFOAM. The main reason for this is the time and length scales involved in the experiments: in the time available for the mini-project, MD would only be able to simulate a nanoscale system with around 10 nano-particles and thousands of water molecules, or thousands of nano-particles in vacuum, for problem times of only a few microseconds. Our new BD code enables us to simulate thousands of nano-particles, with sizes (R = 25 nm, simulation box length = 4 micrometres, on 2 CPU cores) far exceeding what any MD code can achieve, and for timescales of the order of milliseconds. We also include the presence of water implicitly, through the diffusion tensor.
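For anyone curious what a BD update looks like, here is a minimal sketch of the standard overdamped (Ermak-McCammon type) step for a single particle with an isotropic diffusion coefficient D. This is only an illustration of the principle, not our code, which also carries the full diffusion tensor and inter-particle potentials:

#include <cmath>
#include <random>

struct Vec3
{
    double x, y, z;
};

// One Brownian Dynamics time step: deterministic drift from the force plus
// a Gaussian random displacement with variance 2*D*dt per component
Vec3 bdStep(const Vec3& r, const Vec3& force,
            double D, double kT, double dt, std::mt19937& rng)
{
    std::normal_distribution<double> gauss(0.0, std::sqrt(2.0*D*dt));

    return Vec3{
        r.x + (D/kT)*force.x*dt + gauss(rng),
        r.y + (D/kT)*force.y*dt + gauss(rng),
        r.z + (D/kT)*force.z*dt + gauss(rng)
    };
}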

If you want a copy of this code, please just email Konstantinos. The code has been successfully compared with results given in the book “Introduction to Practice of Molecular Simulation” (by Akira Satoh) for a Lennard-Jones fluid. Introducing a mixed potential leads to the nucleation of the nano-particles. This potential can also be used in MD simulations and can be easily altered. Konstantinos has been performing equilibrium simulations as well as other cases with a body force or a shear stress, and is currently investigating the effect of imposing a flow on the cluster distribution or the clustering rate. These results, along with the information about the flow, could be a guide for future experiments and the design of micro-devices.

Molecular Engineering: technology 'down the rabbit hole'

On the evening of Tuesday 26 February, I gave a talk entitled "Molecular Engineering: technology 'down the rabbit hole'" to the Kilmarnock Engineering & Science Society. This is a popular local group that holds regular evening talks on topical issues in science and engineering. The typical audience is a broad cross-section of the public, from school students through to retired people. The talk included discussion of radiometers, Knudsen pumps, multiscale flows, supercomputing, desalination using nanotubes, traffic, and space vehicle aerodynamics, and I presented results and videos generated from our research groups. There was a great deal of interest from the audience, and plenty of good questions at the end.

My presentation at MNF2011

Here is a video of the presentation I gave at the 3rd Micro & Nano Flows Conference (MNF2011) (Thessaloniki, Greece)

Latest News

Recent Publications

R Pillai, JD Berry, DJE Harvie, MR Davidson (2017) Electrophoretically mediated partial coalescence of a charged microdrop. Chemical Engineering Science, 169: 273-283. (access here)

JF Xie, BY Cao (2017) Fast nanofluidics by travelling surface waves. Microfluidics and Nanofluidics, 21: 111. (access here)

AP Gaylard, A Kabanovs, J Jilesen, K Kirwan, DA Lockerby (2017) Simulation of rear surface contamination for a simple bluff body. Journal of Wind Engineering and Industrial Aerodynamics, 165: 13-22. (full paper here)