Okay, first blog post ever. I’m going to keep it quite light as I have only just started in the group, so I’m not going to write about anything micro/nano ‘fluidsy’ but rather talk about something I found interesting recently. Some of you will probably know all of this already, but for me it was new and surprising, so I’m not going to apologise for trying to “teach your grandmother to suck eggs”. Plus I get to make some pretty pictures of chaotic phase portraits, which is always nice!
How do we decide if something is chaotic in nonlinear systems?
Before joining this group I was working on the classical fluid dynamics problem of an air bubble propagating in a Hele-Shaw channel (https://www.youtube.com/watch?v=lTOG0KTBM6k). The governing equations of the system are a highly nonlinear set of PDEs. When investigating the dynamics we found some very interesting trajectories that look ‘chaotic’ when you plot them on a graph. Whilst it is tempting to look at the time signal of a system and declare it chaotic by sight because it makes a pretty shape (and then impress your friends with it, because chaos theory is cool and trendy), we actually have to be a bit more rigorous about what we mean by ‘chaos’. That led me to look for a more concrete way to determine whether we were seeing genuine chaos or rather something complex, but not actually chaotic.
The tried and trusted source Wikipedia (https://en.wikipedia.org/wiki/Chaos_theory) states three criteria for a system displaying chaos: 1) it must be sensitive to initial conditions, 2) it must be topologically transitive and 3) it must have dense periodic orbits. In this blog I am going to concentrate on the first criterion, which is better known as the “butterfly effect”. This can be explored in a numerical simulation by the calculation of Lyapunov exponents (LP). You may have heard of LP and what they mean, but this was the first time that I had to actually calculate them and understand them properly. We define a continuous-time dynamical system as

du/dt = f(u, t),   u(0) = u0,
where u are the state variables, t is time and u0 represents the initial conditions. The LP, lambda, measure the exponential growth or decay of an infinitesimal perturbation delta_u to a trajectory, which evolves according to the linearised equation

d(delta_u)/dt = J(f) delta_u,   lambda = lim as t goes to infinity of (1/t) ln( |delta_u(t)| / |delta_u(0)| ),
where J(f) represents the Jacobian of the system. In essence the LP describe the average rate of separation of trajectories in a neighbourhood of a given initial condition, i.e. how a unit ball surrounding u0 evolves in time. For an N-dimensional system there are N exponents, and the LP spectrum gives a signature of the type of attractor the system tends towards. If all are negative then the system has evolved to a stable fixed point. If the largest is zero then the system evolves to a stable periodic orbit, while if two are zero it is a stable invariant torus. If one is positive and one is zero then nearby trajectories separate from each other exponentially: this is an indication that chaos is present (note that the other two criteria have to be satisfied as well) and the dynamics of the system evolve to a strange attractor (see top image for an example of a strange attractor in the Rossler system). The actual calculation uses one of my favourite undergraduate theories, namely the Gram-Schmidt procedure for the orthogonalisation of a set of basis vectors, and it pleased me (saying that is weird, isn’t it) that I finally had a reason to use it in my research, where previously I just orthogonalised basis vectors as a hobby or if I got bored on the train.
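To make that concrete, the standard recipe (often attributed to Benettin and co-workers) is to integrate the state alongside a set of perturbation vectors, re-orthonormalise those vectors with Gram-Schmidt every step, and average the logs of the stretching factors. Here is a minimal Python sketch; the function names and the simple Euler step for the tangent vectors are my own choices for illustration, not anything canonical:

```python
import numpy as np

def rk4_step(f, u, dt):
    """One explicit RK4 step for du/dt = f(u)."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def lyapunov_spectrum(f, jac, u0, dt, n_steps):
    """Estimate the LP spectrum of du/dt = f(u): evolve N perturbation
    vectors with the Jacobian J(f), re-orthonormalise them every step
    (Gram-Schmidt, done here via a QR factorisation) and accumulate the
    logs of the stretching factors, i.e. the diagonal of R."""
    u = np.asarray(u0, dtype=float)
    n = u.size
    Q = np.eye(n)                  # orthonormal perturbation vectors
    sums = np.zeros(n)
    for _ in range(n_steps):
        u = rk4_step(f, u, dt)
        # evolve tangent vectors with a simple Euler step of dV/dt = J V
        Q = Q + dt * (jac(u) @ Q)
        # Gram-Schmidt re-orthonormalisation; diag(R) holds the stretches
        Q, Rmat = np.linalg.qr(Q)
        d = np.abs(np.diag(Rmat))
        Q = Q * np.sign(np.diag(Rmat))   # keep a consistent orientation
        sums += np.log(d)
    return sums / (n_steps * dt)
```

A handy sanity check before pointing this at anything nonlinear: for the linear system du/dt = diag(0.5, -1) u the exponents are exactly 0.5 and -1, and the sketch recovers them to within the discretisation error.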
So before attempting to calculate the LP of the bubble problem I decided it was prudent to benchmark my algorithm against the well-known Lorenz equations (plus it was an excuse to plot some pretty ‘butterfly’ figures). The equations are stated below:

du1/dt = sigma (u2 - u1)
du2/dt = u1 (r - u3) - u2
du3/dt = u1 u2 - b u3
and depend on three parameters sigma, r and b. For my simulations I chose sigma = 16, r = 45.92, b = 4.0 (for some reason these are the values that are referred to in the literature), with initial conditions u = (0,1,0) and time step dt = 0.005. These equations are nonlinear and the most obvious way to solve them is with our old friend, the explicit Runge-Kutta 4 method (RK4). It’s easy to do this and we get the butterfly strange attractor plot showing the phase portrait in the (u1,u3) plane:
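For anyone who wants to reproduce the plot, a minimal sketch of the RK4 integration with these parameters looks something like the following (variable names are my own, and the plotting call is left as a comment):

```python
import numpy as np

# Lorenz parameters as used in this post
SIGMA, R, B = 16.0, 45.92, 4.0

def lorenz(u):
    """Right-hand side of the Lorenz equations."""
    x, y, z = u
    return np.array([SIGMA * (y - x),
                     x * (R - z) - y,
                     x * y - B * z])

def rk4(f, u0, dt, n_steps):
    """Integrate du/dt = f(u) with explicit fourth-order Runge-Kutta,
    returning the whole trajectory."""
    traj = np.empty((n_steps + 1, len(u0)))
    u = np.asarray(u0, dtype=float)
    traj[0] = u
    for i in range(n_steps):
        k1 = f(u)
        k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2)
        k4 = f(u + dt * k3)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i + 1] = u
    return traj

traj = rk4(lorenz, [0.0, 1.0, 0.0], dt=0.005, n_steps=20000)
# phase portrait in the (u1, u3) plane, e.g. with matplotlib:
# plt.plot(traj[:, 0], traj[:, 2])
```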
Although I’m not going to win the prize for ‘Best Butterfly Strange Attractor plot of the year’ with this, I did manage to calculate the LP spectrum and found it to be (+1.515, -0.002, -22.513). As there’s a positive number in the spectrum the system is chaotic (which we knew anyway); so far so good. These values agreed with those in the literature, so I patted myself on the back and moved on. Now, the RK4 method, whilst easy to implement and very fast, isn’t appropriate for ‘stiff’ systems: we have to choose a small time step in order to avoid numerical instability. For the bubble PDEs I was working on, and for many other systems, it is necessary to use an implicit method so that a larger time step can be chosen without numerical instability. The simplest is the backward Euler method (the first-order member of the BDF family). This method is computationally more expensive, as we have to solve a system of equations at each time step, but the pay-off is that we can make our time step larger. So I had to try to calculate the LP using this method, which was slightly trickier, and again I benchmarked it against the Lorenz equations.
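A minimal backward Euler step for the Lorenz system might look like the following. Since the equations are nonlinear, each step solves the implicit update with a few Newton iterations, each of which is one small linear solve; the function names and tolerances here are my own illustrative choices:

```python
import numpy as np

SIGMA, R, B = 16.0, 45.92, 4.0

def lorenz(u):
    x, y, z = u
    return np.array([SIGMA * (y - x), x * (R - z) - y, x * y - B * z])

def lorenz_jac(u):
    """Jacobian J(f) of the Lorenz right-hand side."""
    x, y, z = u
    return np.array([[-SIGMA, SIGMA, 0.0],
                     [R - z, -1.0, -x],
                     [y, x, -B]])

def backward_euler_step(u, dt, tol=1e-10, max_iter=50):
    """Solve the implicit update u_new = u + dt * f(u_new) for u_new by
    Newton iteration; each iteration solves one small linear system."""
    v = np.asarray(u, dtype=float).copy()    # initial guess: previous state
    for _ in range(max_iter):
        g = v - u - dt * lorenz(v)           # residual of the implicit equation
        if np.linalg.norm(g) < tol:
            break
        Jg = np.eye(3) - dt * lorenz_jac(v)  # Jacobian of the residual
        v = v - np.linalg.solve(Jg, g)
    return v
```

Whatever state it returns, it is easy to check that the implicit equation really is satisfied, which makes this much less fiddly to debug than it first appears.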
After some huffing and puffing I finally managed to do it, pressed enter on my keyboard and solved the equations using exactly the same parameters as for the RK4 algorithm, including the time step dt = 0.005. I sat back and relaxed for about two seconds and the computer spat out my LP spectrum. I wasn’t expecting exactly the same numbers as the RK4 method, but I wanted to be in the same ball park. To my surprise the computer gave me a spectrum of (-0.279, -0.281, -20.65): all negative, which would mean the system had settled onto a stable fixed point. Whoops, I must have done something wrong. Before I went back to my code to swear at it a little longer (that’s how successful code gets written, no?) I plotted the same phase portrait as before:
Now I was really confused. The trajectory has spiralled into a fixed point of the system which we know is actually unstable; in fact this fixed point is the centre of one of the ‘wings’ of the butterfly plot. So the LP spectrum I calculated was fair enough: the computed system really had evolved to a stable fixed point, and hence the exponents should all be negative. So what was going wrong?
Well, it turns out that some implicit numerical methods suppress chaos, in that unstable fixed points artificially become stable. I’m not going to go into the theory here (a classic phrase for when you don’t fully understand something, but in this case it’s because it’s late on Friday and I feel I’ve already bored you enough), but there’s a really nice explanation of this in the article here: https://www.sciencedirect.com/science/article/abs/pii/037596019190404V. The way to avoid the artificial stable fixed points when using an implicit method is to choose a sufficiently small time step for the algorithm. But hang on, one of the main advantages of using an implicit method is that large time steps can be chosen. So I realised I was stuck in a numerical-methods Catch-22: I have to use an implicit method so that large time steps can be taken, but it may produce artificial stable fixed points unless I choose my time step small enough.
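Incidentally, this spurious stabilisation can be seen without running any simulations at all: linearise the backward Euler map about the fixed point and look at the spectral radius of the update matrix (I - dt*J)^(-1). This little check is my own sketch, not something taken from the article above:

```python
import numpy as np

SIGMA, R, B = 16.0, 45.92, 4.0

def lorenz_jac(u):
    x, y, z = u
    return np.array([[-SIGMA, SIGMA, 0.0],
                     [R - z, -1.0, -x],
                     [y, x, -B]])

# one of the nontrivial fixed points: the centre of a butterfly 'wing'
xs = np.sqrt(B * (R - 1.0))
fp = np.array([xs, xs, R - 1.0])

def spectral_radius(dt):
    """Spectral radius of the linearised backward Euler update
    (I - dt*J)^(-1) at the fixed point; the discrete map is stable
    there iff this is below 1."""
    M = np.linalg.inv(np.eye(3) - dt * lorenz_jac(fp))
    return np.abs(np.linalg.eigvals(M)).max()

# the continuous-time fixed point is genuinely unstable...
assert np.linalg.eigvals(lorenz_jac(fp)).real.max() > 0
# ...yet the discrete backward Euler map stabilises it at the larger dt
print(spectral_radius(0.005))   # below 1: spuriously stable
print(spectral_radius(0.001))   # above 1: instability correctly retained
```

With these parameter values the crossover sits between the two time steps, which is consistent with one dt spiralling into the ‘wing’ and the other recovering the butterfly.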
Anyway, whilst this was annoying for going forward with my problem, at least I had learnt something, and it was fun to code the LP calculation using the G-S procedure. Finally, to appease my curiosity, I chose a smaller time step, dt = 0.001, for the same parameters and initial conditions using the backward Euler algorithm and, phew, I recovered the butterfly plot:
and got a positive LP spectrum as expected; as they say in Norfolk where I’m currently isolating, ‘lovely ole’ job’.
The moral of the story
So the moral of this story, if there is one, is to never underestimate the importance of choosing a time step for any numerical algorithm (stating the flipping obvious, I know). Even with an implicit BDF scheme, where one can be a bit blasé about the size of dt, there are still some delicate issues involved, and one shouldn’t assume ‘everything is okay’ just because the method is implicit.
For anyone interested in a technical discussion of dynamical systems and bifurcation theory I really recommend the book “Elements of Applied Bifurcation Theory” by Kuznetsov. It’s technical but has some really good explanations and examples.