Saturday 9 January 2016

El Niño: The Good, The Bad, and The Ugly

This blog started off considering how ocean temperatures have been rising since the start of the Industrial Revolution, just as El Niño started hitting the news headlines. El Niño has led the direction of this blog, as it is something we are all experiencing and can all relate to, no matter your previous knowledge. Together we’ve learnt about just what El Niño is, and considered the forecasts made for the winter of 2015 and into the spring of 2016. We’ve looked at El Niño in relation to global warming, considering both the claim that El Niño is causing global warming (don’t worry, we quickly put that theory to rest!) and the effect that global warming is having on El Niño events. We’ve looked at the different types of models used in relation to ENSO, specifically statistical and dynamical forms, and discussed the advantages and disadvantages of each. We’ve looked at how you can contribute to the bigger picture, and why it is that sometimes models just don’t quite get it right. If you’ve missed any of this, catch up with my previous posts!

So, what is next for the current ‘monster’ El Niño?

The latest satellite images released by NASA show that the current El Niño event has no signs of weakening just yet...

Source: Jet Propulsion Lab, NASA.

Since I last posted the NASA satellite images from 16th October 2015, the area of unusually high sea surface temperatures across the Pacific Ocean has increased. As can be seen in the latest images from 27th December 2015 above, there is a large expanse of red-white spreading from the Chilean coast up to the north of Mexico’s west coast, and reaching far into the centre of the Pacific. As you may remember from an earlier post, these images show sea surface height anomalies, which are highly correlated with temperatures and the heat stored within the ocean below. The areas of white highlight sea surface that is between 15 and 25 centimetres above normal. Conversely, the region of blue-purple showing sea levels up to 25 centimetres below average has decreased since my October post.

The movie below shows the evolution of sea surface height anomalies since the beginning of 2015, alongside the corresponding anomalies from 1997, when the previous immense El Niño was felt around the world.


The 1997 El Niño was recorded by the NASA/Centre National d'Etudes Spatiales (CNES) Topex/Poseidon mission, whilst the current event is being captured by the Jason-2 satellite. There is a remarkable likeness between the two, both displaying the typical development of an intense El Niño. Given an El Niño on the same scale as that of 1997/98, what does this mean for the planet and society?


El Niño: The Good

I stumbled across a really interesting project that one of our own UCL Geographers (PhD student David Seddon), supervised by Professor Richard Taylor, is currently researching. A small wellfield in a semi-arid basin called Makutapora in Tanzania supplies the country’s capital city, Dodoma, with safe water. Replenishment of this vital source of freshwater for hundreds of thousands of people has been shown to occur in conjunction with El Niño events. The immense El Niño that we are currently experiencing will no doubt have an effect on these resources, and the GroFutures team have set up instruments to monitor it. This blog has largely focussed on the negative impacts of El Niño, but this project highlights that episodes are also of crucial importance in some parts of the world that desperately need groundwater levels replenished.


El Niño: The Bad

A recent study by Chretien et al. (2015) shows that the effects of El Niño events not only impact the environment, as we’ve seen with flooding and drought, but also global health, by potentially increasing the spread of infectious disease. Malaria, chikungunya, and dengue are all very nasty illnesses that you do not want to catch! They are spread by mosquitoes, which thrive under the wet conditions El Niño brings to South and Central America and parts of the US. Conversely, areas which feel the warm, dry impacts of El Niño may see increases in the transmission of cholera and other infectious diseases. Powerful El Niños also have a long-term impact on coral reefs. A 17-year study of coral reefs in Bahia, Brazil, found that for two years after the 1997/98 El Niño there was severe coral bleaching and a significant reduction in coral size. In other areas of the world, including the Indian Ocean, this was significantly worse. It took 13 years for the coral in Bahia to fully recover.


El Niño: The Ugly

The shifts in temperature and precipitation levels that coincide with El Niño events impact the global yields of major crops. This undoubtedly has knock-on effects on food prices. Iizumi et al. (2014) consider the global effect of ENSO on crop yields, finding that El Niño changes maize, rice and wheat yields by between −4.3% and +0.8%. However, this is not the case for soybean yield, which actually increases by 2.1% to 5.4% under El Niño conditions. Iizumi et al. (2014) expect that the global demand for these crops will increase by 100–110% by 2050 from that in 2005. Close monitoring of the ENSO cycle is crucial in minimizing the negative impacts and maximizing the positive impacts that El Niño and La Niña have on crops. Cashin et al. (2015) consider not only the countries that are directly affected by El Niño, but also the indirect macroeconomic effects filtered through third markets. 21 country/region-specific models are analysed, from 1972 to 2013, to see how growth, inflation, and energy and non-fuel commodity prices differ under El Niño conditions. The study finds that the majority of countries analysed experience energy and non-fuel commodity price increases in the short term; however, the EU and US actually see a growth effect! Unfortunately this isn’t the case for Australia, Chile, Indonesia, India, Japan, New Zealand and South Africa, which see a downturn in economic activity.


Whether ‘The Good’, ‘The Bad’, or ‘The Ugly’, modelling can be used as a tool for preparing for these situations, either by harnessing the conditions that El Niño brings, or by protecting against them.


What is next for Modelling?

A report published by Rädel et al. only 5 days ago used the Earth system model MPI-ESM-LR to analyse the role of clouds in El Niño. The study found that atmospheric circulation is highly affected by cloud processes, and that this contributes more than half the intensity of an El Niño. Climate models that don’t take the interaction of clouds with atmospheric circulation into account would therefore predict a weaker El Niño than those which do.

Back in 2012, Maslin and Austin discussed whether climate models were already at their limit. They argued that the complex climate models in use are likely to produce predictions with more uncertainty, due to the increasingly complex factors now included within them, such as interactive carbon cycles and the role of aerosols in atmospheric chemistry. They therefore express the importance of the public and policymakers being aware that “climate models may have reached their limit”. Whilst I agree that the public and policymakers should understand that a model is not an exact reproduction of reality, and hence has its limitations and will not always predict exactly how reality turns out, I fear that a statement worded in such a way will in fact decrease public support for scientific modelling, rather than increase it as intended. We should not stop adding further detail to current models for fear of them performing worse. As the study by Rädel et al. (2016) found, sometimes complex processes do need to be included within models. As our understanding and technology advance, so will the certainty of these models. Models of all complexity have a role to play; as Knutti (2010) succinctly put it, “we learn from the diversity of models, and we will learn from different ways to evaluate and combine them”.



Sunday 3 January 2016

Good Model Gone Bad


So there’s no denying that one of my earlier posts, ‘Advanced Warning – get shovels and sledges at the ready’, was slightly off the mark when it comes to the weather in the UK this winter so far. Since October I’ve been ready to get my favourite fluffy hat, scarf and mittens out to wear, but they just haven’t been needed. More like raincoats and wellies than shovels and sledges.

According to figures published by NOAA in December 2015, the months of January to November 2015 were globally the warmest such period in the 136 years of record (see Table 1 below). This record-breaker spans both land and oceans. The average global land surface temperature for the first 11 months of 2015 was 1.27°C above average for the 136-year period of record. Similarly, the oceans saw an average global sea surface temperature 0.72°C above average. Finally, the Earth’s combined land and ocean temperature was 0.87°C above the 20th century average of 14.0°C. For the whole of 2015 not to become the warmest year since records began in 1880, the December global temperature would have to be at least 0.81°C below average. If the temperatures we’ve been seeing in the UK are anything to go by, this is very unlikely to say the least!


Table 1: Global Temperature Anomalies
Source: NOAA National Centers for Environmental Information

These record high temperatures were most prominent across the majority of land in South America, as well as much of the Earth’s oceans, including the eastern and central Pacific Ocean, a large proportion of the central western Atlantic, and most of the Indian Ocean. However, there were areas of the oceans with record low temperatures: the southern tip of South America, as well as an area in the Atlantic Ocean, both experienced lower than average sea surface temperatures. These land and ocean temperatures, as previously determined, are undeniably linked to the El Niño weather phenomenon. Perhaps unsurprisingly, El Niño has been hitting the headlines in a big way again over the past few weeks.

Record high temperatures, combined with extreme precipitation (there was more than 200% of the average rainfall for November in much of south-west Scotland, north-west England and north Wales), culminated in Storm Frank. Storm Frank has devastated many areas of the UK with severe flooding. The floods have left thousands of homes flooded, forced people to be evacuated from buildings, washed away roads and bridges, and caused potentially more than £5 billion worth of damage.


PRENSA MUNICIPIO/AFP/GETTY IMAGES

Source: The Uruguay River in Concordia, Daily News


Without undermining the severity and importance of the unfortunate situation these thousands of people are going through in the UK, over in South America in excess of 150,000 people have had to flee their homes due to intense flooding. El Niño has caused extreme rainfall, resulting in three major rivers, the Paraguay, the Uruguay and the Quarai, swelling and breaking their banks. The 150,000+ people directly affected by the flooding come from Paraguay, Argentina, Uruguay, and Brazil, four countries which together cover a much larger area than the UK; that puts the scale of this disaster into perspective.

It seems that predictions of a cold, snowy winter have been met by the reality of a mild, wet one. So why do models sometimes ‘go wrong’ and make predictions which are far from what the situation turns out to be in reality?

Essentially, because that is exactly what the model results are: predictions. A prediction is something that is likely to occur in the future, given data or knowledge of prior outcomes of the system. There is always a degree of uncertainty within model parameters and the forecasts they produce. When it comes to meteorological models, for example, forecasts are made in iterative time steps (as we all know, we generally trust a weather report for today more than one for two weeks’ time). Why is this the case? A tiny error in an initial state input grows into a much larger error in a prediction at a later time point. Modelling is a continuing development. Whether it is more recent data being collected and made available for use, advances in technology allowing better or different types of data to be recorded, or even new insights into a system, these all result in models being updated or rebuilt to include the additional information.
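To make that error growth concrete, here’s a minimal sketch using the logistic map, a classic toy non-linear system. It has nothing to do with any real weather model, and all the numbers are purely illustrative:

```python
# Two runs of the logistic map, a toy non-linear system, started from
# almost identical initial states. Purely illustrative numbers.

def logistic_step(x, r=3.9):
    """One iterative 'time step' of the toy model (chaotic for r = 3.9)."""
    return r * x * (1 - x)

truth, forecast = 0.600000, 0.600001  # initial states differ by only 1e-6
for step in range(1, 31):
    truth = logistic_step(truth)
    forecast = logistic_step(forecast)
    if step % 5 == 0:
        print(f"step {step:2d}: error = {abs(truth - forecast):.6f}")

# The error grows from one millionth to order one within ~30 steps, which
# is why tomorrow's forecast is more trustworthy than next fortnight's.
```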


Source: Science or Not?

The more that is learnt about a complex physical process, the more detail is available to include in the model build, and thus potentially more chance of errors. These errors can come from either the assumptions made, or even the actual coding of the model build itself. Models must be falsifiable, so that we can test whether the model produces results in line with the real-world situation. Even physically based models derived from scientifically sound physical principles are often found to be inconsistent with observations (Beven, 2002). And if a model is found to be falsified, this can highlight areas in which the model needs to be improved, or give an insight into mistakes in previous assumptions.

Whenever modelling, these issues need to be kept in mind; as the statistician George Box said: ‘essentially, all models are wrong, but some are useful’.





Thursday 24 December 2015

Statistical or Dynamical? Which Model To Choose

In an earlier post, ‘Model This and Model That’, we looked at some of the models used in the modelling and prediction of ENSO, collated and plotted by the IRI/CPC. We learnt a little about what statistical models and dynamical models are, and touched on some of the reasons for these different types of models. The modeller’s knowledge and experience, and what the model is for, all contribute to the type of model that is built. Another large factor in deciding what type of model to build comes down to resources. These resources include manpower, computing power, time and the availability of parameter data. For example, temperature is the most widely available meteorological data, as well as having the longest record of observations, whereas solar radiation is harder to obtain.

You may have noticed that there are more dynamical than statistical models used in the IRI/CPC plots. To be precise, there are 17 dynamical and 9 statistical models. So what does this say? Is it as simple as dynamical models being better than statistical ones because there are more of them? Here are some of the advantages and disadvantages of both statistical and dynamical models, to give you more of an understanding of why one would be built and used over the other.

Statistical


Advantages:
  • Models are relatively quick and simple to build and run
  • Little to no knowledge of underlying physical principles is required
  • Simple analytical methods allow fairly easy model inversion

Disadvantages: 
  • Model incorporates many parameter assumptions made under certain observation conditions, meaning confidence in extrapolation is hard to justify
  • The model does not enable further understanding of the physical processes


Dynamical


Advantages:
  • Can be applied to a wide range of conditions
  • Incorporates great complexity of processes via the use of numerical solutions
  • Increases understanding of physical processes

Disadvantages: 
  • Needs powerful computers to run complex models, which can still take a long time to run
  • All relevant processes and corresponding variables need to be accounted for in the model
  • Complicated to invert model due to difficulty in obtaining analytical solutions

Over the past few decades, a large proportion of models have improved in their ability to predict El Niño episodes (Guilyardi et al., 2012). These improvements are down to a combination of reasons, including more advanced technology to observe and record data at a higher spatial resolution. Such improvements ultimately lead to more understanding of the physical processes of ENSO, which in turn enables improvements to the models themselves. It is thought that the models which haven’t significantly improved their skill are held back by the additional processes they are now simulating. Such processes include the carbon cycle, ecosystems, the indirect effect of aerosols, and the interaction between the stratosphere and troposphere (Guilyardi et al., 2012). However, even where these additional processes and model complications add nothing in the short term, they give potential areas to explore and improve understanding of in the near future. This is where the conflict between over-simplifying and over-complicating a model comes into play. Simple models can be very powerful tools, but sometimes they just miss the mark when it comes to usefulness. Statistical models are usually simpler than dynamical models, as we found in the advantages and disadvantages above.

In the early 1990s the statistical and dynamical models used for the study of ENSO showed comparable skill (Barnston et al., 1994). This was reinforced later in the 90s by model predictions of the exceptionally strong El Niño of 1997/98. The forecasts from twelve statistical and dynamical models were studied, with the results concluding that skills were again at similar levels (Landsea and Knaff, 2000). None of the dynamical models conclusively performed better than the El Niño–Southern Oscillation Climatology and Persistence (ENSO–CLIPER) model, a simple statistical model which was used as the baseline for comparing model skill levels. Hence, it could be argued that statistical models were the preferred type, due to the ease and lower cost of their development.

A more recent study looked at the statistical and dynamical models used in the IRI/CPC plots from 2002 to 2011, with 8 statistical and 12 dynamical models analysed. The study found that, despite only analysing the models over a short period of time (9 years), the skill of dynamical models has now exceeded that of statistical models, specifically for the months March to May, when shifts in ENSO are most likely and predictions are therefore most difficult (Barnston et al., 2012). It is acknowledged that the short period over which the models were studied makes it hard to prove the findings statistically robust, but within its limited time frame the results are intriguing nonetheless. Dynamical models have received greater funding than statistical ones over the past few decades, meaning that the majority of the statistical models analysed have not been drastically altered in many years. This may play a part in why the dynamical models have shown better predictive power than their statistical equivalents. However, dynamical models are proving the potential they hold through their capability of modelling the non-linearity and rapid changes of state of ENSO (Barnston et al., 2012). Without denying the value of statistical models, it seems that dynamical models will be at the forefront of modelling El Niño, especially with continuing developments in technology meaning the power of computers increases and the associated costs decrease.

So I’ll finish by going off topic to wish you a very Merry Christmas! Without even doing any scientific analysis, I can (unfortunately) say that at the 5% significance level there is sufficient evidence to reject the hypothesis that it will snow on Christmas day of 2015 (if you’re in England, that is).

Source: Buzzfeed

Here’s hoping you didn’t place the bet that I advertised near the beginning of this blog... however if you did, remember, I did warn you that I wasn’t to blame if you lost! Enjoy the festivities wherever you may be, and whatever you may do, and we’ll catch up again in the New Year!



Saturday 12 December 2015

Over To You

In the spirit of COP21, some of my fellow Environmental Modellers have been suggesting ways that we, as individuals, can contribute to the ethos of green living – Any Earth Left has been giving some great ways we can make a positive impact on reducing our emissions as consumers, and The Global Hot Potato has been providing some yummy environmentally friendly recipes to whet your appetite.

So, how else can I play my part I hear you ask?

Well, a team of climate scientists over at the University of Oxford have developed a novel way that you can contribute to the challenge we face of climate change, albeit not by reducing emissions, and without even leaving the comfort of your home. Climateprediction.net is ‘the world’s largest climate modelling experiment for the 21st century’, which boasts a community of volunteers who run climate models from their home computers, via the computing platform BOINC (the Berkeley Open Infrastructure for Network Computing).

As I’ve mentioned in earlier posts, modelling the environment is paramount to understanding, learning, and predicting what has happened, is happening, and will happen in the future given certain conditions. When it comes to the climate, we have to do so at large scale, and hence the number of climate model runs needed is vast. Vast enough that even supercomputers struggle. Instead, teams in the Environmental Change Institute, the Oxford e-Research Centre, and the Atmospheric, Oceanic and Planetary Physics departments at the University of Oxford have adopted a technique called ensemble modelling, which means thousands of people each run their own variant of the climate model on their personal computers (you don’t need anything fancy, but there are a few system requirements), and then send back the results to be interpreted. The site assures volunteers that the greenhouse gas emissions generated by leaving your computer running for longer than you otherwise might are very small.
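To give a flavour of the ensemble idea, here’s a minimal sketch of my own (an invented toy, not the actual climateprediction.net code): many ‘volunteers’ each run the same toy model with a slightly perturbed parameter, and the results are pooled:

```python
# A toy perturbed-parameter ensemble. The 'model' and all numbers are
# invented for illustration; real ensemble members are full climate models.
import random

def toy_climate_run(sensitivity):
    """Hypothetical one-number model run: warming (degC) for a CO2 doubling."""
    return 1.2 * (1.0 + 0.1 * sensitivity)  # invented relationship

random.seed(1)
# Each 'volunteer' receives one perturbed parameter value to run at home.
ensemble = [toy_climate_run(random.gauss(3.0, 0.5)) for _ in range(10000)]

mean = sum(ensemble) / len(ensemble)
spread = (sum((x - mean) ** 2 for x in ensemble) / len(ensemble)) ** 0.5
print(f"ensemble mean: {mean:.2f} degC, spread: {spread:.2f} degC")
# Pooling thousands of runs gives a distribution of outcomes, not just one.
```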

There are some really interesting projects currently underway including:

  • Assessing the risk of Atlantic meridional overturning circulation (AMOC) collapse in the coming century (see RAPID-RAPIT)

  • Investigating the response of rainfall, evaporation, and river run-off to changes in land use and the carbon cycle (see HYDRA)

  • What the impact of stratospheric aerosol particles and solar radiation management would be (see Geoengineering)

Over the past few months, in certain lectures, it’s often been said that even Climate Change students don’t see themselves as modellers. With this facility from the University of Oxford, everyone can now call themselves a modeller, and, I think even more impressively, a climate modeller.


Tuesday 8 December 2015

Model This and Model That

Continuing from my previous post, where we looked at ENSO forecasts into the spring of 2016, we shall now look at some of the different models used. First though, what is a model? Succinctly put, a model is a simplified version of a complex reality. OK, good start, but let’s go deeper into this field and try to learn what statistical models and dynamical models are.


Statistical Models


A statistical model allows us to infer things about a process from its observed data, and is represented through a set of variables. The model is constructed from one dependent variable and at least one independent variable. The dependent variable (aka the response, or outcome) is what is being studied, and is the result of a combination of the independent (aka explanatory) variables. In a simplified case, each independent variable has no relationship with any of the other independent variables. Or, as is more often the case, there are relationships between them, and these are statistically known as correlations. Both dependent and independent variables can be observed and recorded, and, together with a set of assumptions about the data, the parameters (the unknown constants in the equation) can be estimated. An important property of a statistical model to be aware of is that it is non-deterministic. This means that some of the variables are stochastic, i.e. essentially random. Hence, a statistical model uses random variables to model the components of the process that are not currently fully understood scientifically.
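As a minimal sketch of these ideas (with invented numbers, not real ENSO data), here is a one-predictor statistical model: a dependent variable built from an independent variable plus a stochastic term, with the unknown parameters estimated from the observations:

```python
# Fit a simple statistical model: sst_anomaly = a * wind_stress + b + noise.
# The data are synthetic stand-ins, generated for illustration only.
import numpy as np

rng = np.random.default_rng(42)
wind_stress = rng.uniform(0.0, 1.0, 200)       # independent variable
noise = rng.normal(0.0, 0.1, 200)              # stochastic (random) term
sst_anomaly = 0.8 * wind_stress + 0.1 + noise  # dependent variable

# Estimate the unknown parameters (slope a and intercept b) by least squares.
a_hat, b_hat = np.polyfit(wind_stress, sst_anomaly, 1)
print(f"estimated slope: {a_hat:.2f}, intercept: {b_hat:.2f}")
# Expect roughly a = 0.8 and b = 0.1, recovered despite the random noise.
```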

Let’s take a look at 3 of the statistical models used by the IRI as shown in my previous post:

CPC MRKOV – Is the National Centers for Environmental Prediction/Climate Prediction Center (NCEP/CPC) Markov Model. It is a linear statistical model built from three multivariate empirical orthogonal functions (EOFs) of observed sea surface temperature, surface wind stress, and sea level. EOF analysis is the decomposition of a signal in terms of both temporal and spatial patterns.

CPC CCA – Is the Climate Prediction Center Canonical Correlation Analysis Model. It is a multivariate linear statistical model with mean sea level pressure and sea surface temperature as predictors. Canonical correlation analysis is a method for finding the linear combinations of two multidimensional variables that are maximally correlated with each other.

FSU REGR – Is the Florida State University Regression Statistical Model. It is a multiple linear regression model with predictor variables of upper ocean heat content, wind stress, and sea surface temperatures. 
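Since CPC MRKOV is built on EOFs, here’s a minimal sketch of what an EOF decomposition looks like in practice, using the singular value decomposition on a synthetic stand-in for gridded anomaly data (the real model’s inputs and code will of course differ):

```python
# EOF analysis via SVD: split a (time x space) anomaly matrix into spatial
# patterns (EOFs) and their amplitude time series. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_grid = 120, 50                   # e.g. 120 months, 50 grid points
data = rng.normal(size=(n_time, n_grid))   # stand-in anomaly matrix
data -= data.mean(axis=0)                  # remove the time mean at each point

# Rows of vt are the spatial EOF patterns; u scaled by s gives the
# corresponding principal-component time series for each pattern.
u, s, vt = np.linalg.svd(data, full_matrices=False)
variance_fraction = s**2 / np.sum(s**2)
print("variance explained by the first 3 EOFs:",
      np.round(variance_fraction[:3], 3))
```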


Dynamical Models


A dynamical model, like a statistical model, represents the relationship between a set of variables; however, it uses functions to explain how the variables change over time given their current state. Equations which relate variables to their derivatives, i.e. their rates of change, are known as differential equations.

An extremely famous example of a non-linear dynamical model is the Lorenz model. Edward Lorenz took the Navier-Stokes (fluid dynamics) equations and simplified them to get three differential equations in only three variables (I won’t go into the equations now, but take a look here if you wish to learn more about them). The point is that they look deceptively simple to solve, but plotting the solutions in 3D leads to an image such as the following:



If you were to place your finger at any point on the line in the above plot, and wanted to move, say, one position to the left (i.e. only one of your three variables changes by one unit), it would take a lot longer than expected. You can’t jump over the white space between the lines; you have to trace your finger round the line until you reach your ‘destination’. Now you can see that this is going to take a lot longer than you would have thought! This is a system with chaotic behaviour, commonly known as the butterfly effect: a small change in one variable can result in massive changes in a later state.
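Here’s a minimal numerical sketch of that sensitivity, integrating the Lorenz equations (with their standard parameter values) from two initial states that differ by just one part in a million:

```python
# Integrate the Lorenz system twice from nearly identical initial states
# and watch the trajectories separate: the butterfly effect in numbers.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 25.0, 2501)
run_a = solve_ivp(lorenz, (0.0, 25.0), [1.0, 1.0, 1.0], t_eval=t_eval)
run_b = solve_ivp(lorenz, (0.0, 25.0), [1.0, 1.0, 1.000001], t_eval=t_eval)

separation = np.linalg.norm(run_a.y - run_b.y, axis=0)
for t in (5, 10, 15, 20, 25):
    print(f"t = {t:2d}: separation = {separation[t * 100]:.4f}")
# The tiny initial difference (1e-6) grows until the two runs bear no
# resemblance to each other, just like diverging weather forecasts.
```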

Now let’s take a look at 3 of the dynamical models used by the IRI as shown in my previous post:

NASA GMAO – Is the NASA Global Modelling and Assimilation Office, Goddard Earth Observing System Model (GEOS-5). It is constructed from an atmospheric model, catchment land surface model, and an ocean model, which are all coupled together by the use of the Earth System Modelling Framework.

UKMO – Is the UK Met Office General Circulation Dynamical Model. It is a coupled ocean-atmosphere GCM known as the GloSea (Global Seasonal) model. It is comprised of 3 models - an atmosphere, an ocean and a land surface model.

LDEO – Is the Lamont-Doherty Earth Observatory Model. It is an improved version of the original simple coupled ocean-atmosphere dynamical model by Zebiak and Cane, 1987.


Statistical-Dynamical Models


There also exist statistical-dynamical models which, as the name suggests, use a combination of both statistical and dynamical methods for different components. They typically take the statistical approach for parameters such as wind speed and direction, whilst using a dynamical method to model Newton’s laws of motion for energy diffusion.
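As a toy illustration of this hybrid approach (entirely invented, not taken from any operational model), the sketch below steps a temperature anomaly forward with a simple differential equation while drawing its wind forcing statistically at each step:

```python
# A toy statistical-dynamical hybrid: a dynamical core (a differential
# equation, stepped forward with Euler's method) driven by a statistically
# sampled wind forcing. All equations and numbers are invented.
import random

random.seed(7)
temperature = 0.5   # toy SST anomaly (degC)
dt = 0.1            # time step

for step in range(50):
    wind = random.gauss(0.2, 0.05)       # statistical part: sampled forcing
    dT_dt = -0.3 * temperature + wind    # dynamical part: relaxation + forcing
    temperature += dT_dt * dt            # Euler time step

print(f"toy SST anomaly after 50 steps: {temperature:.2f} degC")
```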


Why all the models?


You may wonder why there are so many different approaches to modelling ENSO, and, even within the different approaches, why so many different models exist. Well, as can be seen from looking at just a few of the different types of models above, numerous combinations of predictor variables are used, and all models incorporate different assumptions. The modellers’ knowledge, experience, and aims in modelling determine what these variables and assumptions are. Hence, there exists such a variety of models. Models cannot be categorised as right or wrong; however, they can be shown to be more or less predictive in comparison to other models. And even this may hold true for only a certain period of time.


Saturday 28 November 2015

A Momentous Moment in Time

As I’m sure you’re aware, in a few days’ time the United Nations Framework Convention on Climate Change (UNFCCC) will meet for the twenty-first session of the Conference of the Parties (COP21). Over 190 nations will gather in Paris to discuss a collective agreement on climate change, with the aim of staying within the 2°C global warming target.
Despite a GlobeScan poll on behalf of the BBC suggesting that public support for a global deal on climate change has declined, I hope that the outcome will reignite general public conversation and support. It’s important that everyone gets on board with this, as the conference will not only address CO2 emissions, but also how business is conducted in terms of production and farming. Collectively, consumers have a lot of power, which needs to be utilised.

It’s likely there’s also going to be something else quite momentous going on at the same time as the COP21 summit... the peak of the 2015 El Niño event.

The scatter plot (Figure 1) and Table 1 below were produced by the International Research Institute for Climate and Society (IRI), showing forecasts made by a variety of dynamical and statistical models for sea surface temperature (SST) in the Niño 3.4 region (120–170°W, 5°N–5°S) for nine overlapping 3-month periods, using observations taken in October 2015.

Figure 1: Niño 3.4 SST Anomaly (°C) predictions from dynamical and statistical models for Nov 2015.
Source: International Research Institute

As can be seen, for all models the highest point lies in the Nov(15) – Dec(15) – Jan(16) period. The differences between the model predictions are the result of two main factors. The first, and most obvious, is the construction of the model. Models are not able to fully capture an entire process; they are tools enabling us to analyse, predict, and learn about a process, and hence there will always be a degree of uncertainty within them. The second reason for differences is the actual uncertainty in the forecast. Forecasts made between June and December generally have better skill*, since this is when El Niño events are active. As might also be expected, skill usually decreases as lead time increases.
*A forecast “skill” is the level of accuracy of a model’s forecasts in comparison to a reference model.
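For a concrete (invented) example of how such a skill score might be computed, here’s a minimal sketch comparing a model’s mean squared error against a persistence reference forecast:

```python
# A simple skill score: 1 - MSE(model) / MSE(reference). Positive skill
# means the model beats the reference; the numbers below are invented.
import numpy as np

observed = np.array([2.4, 2.6, 2.8, 2.9, 2.7, 2.5])  # e.g. SST anomalies (degC)
model = np.array([2.3, 2.7, 2.9, 2.8, 2.6, 2.4])     # model forecasts
persistence = np.full_like(observed, observed[0])    # reference: 'no change'

mse_model = np.mean((model - observed) ** 2)
mse_reference = np.mean((persistence - observed) ** 2)
skill = 1.0 - mse_model / mse_reference  # 1 = perfect, 0 = no better than ref
print(f"skill score: {skill:.2f}")
```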

The figures in Table 1 below give average SST values for the different model types, supporting the trends seen in the scatter plot. 

Table 1: Average Niño 3.4 SST Anomaly (°C) for given models over seasonal 3-month periods in 2015–2016.
Source: International Research Institute

Most of the models predict the El Niño SST anomaly to increase slightly in strength until about the end of the year, and then gradually weaken through 2016. Although the El Niño will be steadily dying away, it is expected to remain a strong force through the first four or so months of 2016. The probabilities associated with these predictions are at least 99% up to around February 2016, then fall away to about 50% by May–July 2016.

Figure 2 below shows a longer-term trend of model predictions, emphasising the strength of the El Niño which we are currently experiencing. It shows the model forecasts for the past 21 months in addition to the Nov 2015 predictions (as seen in Figure 1), allowing us to see the evolution of model predictions as more data is gathered from the progressing conditions. Models are continually updated and re-run as new data are obtained, to produce up-to-date forecasts and analysis of the processes at play.


Figure 2: Niño 3.4 SST Anomaly (°C) predictions from dynamical and statistical models for 22 months.
Source: International Research Institute

The high SST anomalies being seen mean that the ocean is unable to absorb as much heat as it otherwise would during these months. This factor, along with CO2 levels breaking records by reaching a massive 400.35 ppm, is what is causing the expectation that 2015 will be the hottest year on record.

Join me next time when we’ll look at what dynamical models and statistical models are, the differences between them, and why these different types of models exist for the modelling of ENSO.