1st NCEO-GSSTI Data Assimilation and Earth Observation Training Course


by Javier Amezcua

25 November 2019

I spent the last week in Accra, the capital of Ghana. It was incredibly hot and stuffy for this time of the year (minimum 26°C, maximum 31°C), which is natural given that the city lies only 5°N of the Equator. The food was delicious (I stuffed myself with jollof rice and fried fish) and I enjoyed the sunsets, when colonies of bats flew over the city.

On this trip I was accompanied by Ewan Pinnington and Tristan Quaife from the University of Reading, and Jose Gomez-Dans from University College London. We were on a mission for the UK National Centre for Earth Observation (NCEO), to which the four of us belong: to deliver a training course in data assimilation and Earth observation for the young Ghana Space Science and Technology Institute (GSSTI). This institute is located in the northern outskirts of Accra, on the campus of the School of Nuclear and Allied Sciences. The participants of the course included people from GSSTI, the Ghana Statistical Institute, the Ghana Meteorological Service, and a member of the United Nations Food and Agriculture Organisation (FAO).

This course is part of an ongoing collaboration between scientists in the UK and Ghana under Official Development Assistance (ODA). This programme, started by the Organisation for Economic Co-operation and Development (OECD), exhorts developed countries to dedicate a percentage of their gross domestic product (GDP) as aid to help foster prosperity in developing countries. A country can contribute directly with monetary aid, but also through knowledge and expertise. Our training course belongs to the latter category.

In the course I went through the fundamental aspects of data assimilation: defining the estimation and forecasting problem, reviewing some basic concepts of probability and statistics, and emphasising the role of Bayes' Theorem as a central element of data assimilation. I then explained two of the basic families of data assimilation methods: variational and Kalman-based. We ran some computer experiments with a toy model to illustrate the ideas.
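As a minimal illustration of how Bayes' Theorem underlies both families of methods, consider the scalar Gaussian case, where the posterior mean reduces to the familiar analysis update shared by variational and Kalman-based schemes. The numbers below are made up for the sketch, not from the course material:

```python
# Scalar Gaussian example of Bayes' theorem in data assimilation:
# posterior ~ likelihood x prior. For a Gaussian prior N(xb, B) and an
# observation y with error variance R, the posterior mean is the
# "analysis" xa = xb + K*(y - xb), with gain K = B / (B + R).
xb, B = 20.0, 4.0   # background (prior) estimate and its error variance
y, R = 22.0, 1.0    # observation and its error variance

K = B / (B + R)          # Kalman gain (scalar case)
xa = xb + K * (y - xb)   # analysis (posterior mean)
A = (1 - K) * B          # analysis (posterior) error variance

print(xa, A)  # analysis is pulled toward the more accurate observation
```

Because the observation here is four times more accurate than the background, the gain is 0.8 and the analysis lands much closer to the observation, with a reduced error variance.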

My colleague Jose Gomez-Dans then presented material more specific to the needs of our audience. In particular, people were quite interested in using satellite observations to infer the condition of crops in the north of Ghana, and then using land-surface models to predict the yield at the end of the season. These models contain information about the biology of the crops and human activities, and they are forced by meteorological products. He helped the participants run some experiments remotely on computers at UCL, using observations from the Sentinel missions of the European Space Agency (ESA).

We had a great week and the course was very well received by the participants. It was rewarding to see science transcending ever-tighter borders, institutions opening doors instead of closing them, and people collaborating instead of fighting. We hope to continue our collaboration with GSSTI, and we are planning to return in 2021.

My Met Office Placement


by Laura Mansfield


This summer, I spent 10 weeks on a placement at the Met Office in Exeter. This was part of the Mathematics of Planet Earth training programme and started with a week of lectures and lab sessions given by Professor Rupert Klein from the Freie Universität Berlin, with guest lectures from UK academics and Met Office staff. The theme was “Multiscale analysis of atmosphere-ocean flows and related numerical issues”, with topics covering scales in geophysical flows and asymptotic analysis. I learned a lot about how to approach geophysical problems and found the lectures to be the perfect balance of mathematics and physical intuition.

I spent the remaining 9 weeks in the Informatics Lab, a team of technologists, scientists and designers who work to innovate, explore and demonstrate new ideas, particularly to make data useful. I explored how probabilistic programming languages could be used in climate and weather modelling, which also gave me the chance to learn how to build simple climate models in Python. I presented some of this work at a seminar in the Met Office and wrote a few blog posts on my progress (see: an introduction to probabilistic programming, an application with differential equation modelling, and simple climate modelling).

The working environment in the lab was very different from a PhD office. I found that colleagues took a genuine interest in what other team members were working on, and pair or team work to solve problems was common. While I was there, I also picked up some tips on how to work better, including better coding practices, how to distribute tasks across computing resources, and how to visualise data more effectively. I will definitely be taking a lot of this back to my PhD with me!

Outside of work, I also enjoyed life in Devon. We generally had great weather and Exeter is a lovely city to spend time in. I also took a few trips down to the beach, to surrounding villages and to Dartmoor. Plus, I can’t really complain about the views from the Informatics Lab.

My view from the Informatics Lab on a sunny day

Thanks to Rachel Prudden and everyone at the Informatics Lab who took me in for 2 months and to Mathematics of Planet Earth and the Met Office for making this happen.

Summer school on Data Assimilation and its applications in oceanography, hydrology, risk & safety and reservoir engineering, 2019


by Haonan Ren, PhD student in Atmosphere, Oceans & Climate, University of Reading
August 14, 2019

From 22nd July to 2nd August, the Summer School on Data Assimilation and its applications in oceanography, hydrology, risk & safety and reservoir engineering was held at the Faculty of Mechanics, Polytechnic University of Timisoara, Romania. This two-week summer school has been organized every two years since 2009, and primarily targets students and researchers at an early stage of their careers, with or without previous experience in data assimilation. This 6th edition of the DA summer school had 35 participants from universities, research institutes and industry all over the world.

The goal of this summer school is to gather experts in data assimilation from different disciplines (statistics, pure mathematics, engineering, etc.) so that participants can acquire a basic knowledge of data assimilation and its applications, and get a taste of its advantages in different fields. Participants also work hands-on with dedicated academic and commercial software, and have extensive discussions and exchange ideas with the instructors and the other participants. The lectures in the first week focused on the theoretical framework of data assimilation. They started with some basic concepts and derivations of the Kalman Filter (KF), including the motivation for using data assimilation in different fields. Then a Monte-Carlo (ensemble) formulation of the KF, the Ensemble Kalman Filter (EnKF), was introduced, including the modifications needed when using the EnKF in practice, such as localization and inflation.
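The EnKF analysis step, with the multiplicative inflation mentioned above, can be sketched for a single directly observed state variable. This is a generic illustration with made-up numbers, not the school's actual exercise:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stochastic EnKF analysis step for a 1-D state observed directly.
Ne = 100
xb = rng.normal(20.0, 2.0, Ne)   # background ensemble
y, R = 22.0, 1.0                 # observation and its error variance

# Multiplicative inflation: stretch deviations about the ensemble mean
# to counteract the variance underestimation of a finite ensemble.
infl = 1.05
xb = xb.mean() + infl * (xb - xb.mean())

B = np.var(xb, ddof=1)           # ensemble estimate of background variance
K = B / (B + R)                  # Kalman gain (scalar observation)
yp = y + rng.normal(0.0, np.sqrt(R), Ne)  # perturbed observations
xa = xb + K * (yp - xb)          # analysis ensemble

print(xa.mean())  # analysis mean lies between background mean and y
```

Localization, the other modification mentioned, only matters for multivariate states: it tapers spurious long-range correlations in the ensemble covariance, which a scalar example cannot show.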

The lectures for the rest of the first week presented another data assimilation method, the Particle Filter (PF), and covered the general ideas of data assimilation for chaotic and dynamical systems. Each day, after a morning of intensive lectures, there was a two-hour practical session in the afternoon, in which the participants worked through exercises based on Bayes' Theorem and had the opportunity to run data assimilation schemes on simple models using different programming frameworks. The practicals were strongly connected to the lectures, so that the students could gain a better understanding of data assimilation.
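One analysis step of a bootstrap particle filter is short enough to sketch for a scalar state: weight each particle by the observation likelihood (Bayes' Theorem again) and then resample. The setup below is a toy illustration with invented numbers, not the school's practical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bootstrap particle filter analysis step for a scalar state.
Np = 500
particles = rng.normal(20.0, 2.0, Np)   # prior particles
y, R = 22.0, 1.0                        # observation and its error variance

# Importance weights from the Gaussian observation likelihood
w = np.exp(-0.5 * (y - particles) ** 2 / R)
w /= w.sum()

# Systematic resampling: duplicate high-weight particles, drop low-weight ones
edges = (np.arange(Np) + rng.uniform()) / Np
idx = np.clip(np.searchsorted(np.cumsum(w), edges), 0, Np - 1)
posterior = particles[idx]

print(posterior.mean())  # posterior mean sits between the prior mean and y
```

Unlike the (En)KF, no Gaussian assumption is made about the prior; the price is that many particles are needed as the state dimension grows, which is why PF variants for high-dimensional systems are an active research topic.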

The summer school arranged lunches at a local restaurant within walking distance of the university, and the organizers booked local restaurants near the city centre of Timisoara at the end of each day, so that both instructors and students could relax after an exhausting day and enjoy the local cuisine and cold beers. During the intervening weekend, the summer school organised a trip along the Romanian border, with hiking and sightseeing of Romania's natural landscape.

After a relaxing weekend, the second week concentrated on the applications of data assimilation in different areas. The instructors started with some fundamentals of scientific computing in different programming languages, followed by numerical schemes for numerical models. The lectures then looked specifically at applications of data assimilation to ocean and climate models. Along the way, the instructors also provided some basic knowledge of oceanography and climate, giving the students better insight into the models used in real-world applications. The second week also included several lectures on applications of the Ensemble Kalman Smoother (EnKS) and other methods in reservoir engineering (oil and gas) and decision-making problems. At the end of the final week, the lectures introduced the field of big data and geomechanical applications of data assimilation schemes.

This summer school offered all participants a fulfilling experience of data assimilation, in both its theoretical framework and its practical applications. For instructors and students alike, it also provided an opportunity to discuss their work and exchange opinions and experience.

I would like to thank the EPSRC DARE project and Prof. Sarah Dance for the funding that enabled me to attend this summer school.

Data assimilation training at the University of Reading


by Amos Lawless

In March 2019 the Data Assimilation Research Centre at the University of Reading organised a 4-day training course in data assimilation, in collaboration with the National Centre for Earth Observation, ECMWF and the DARE project. The course was attended by 24 early-career researchers from 10 different countries, including scientists from universities, research institutes and industry.

The aim of the course was to give students a solid grounding in the theory of data assimilation methods, as well as the opportunity to apply data assimilation methods to a range of numerical models. The first day of the course saw a general introduction to data assimilation, followed by a more in-depth look at variational methods, both from a theoretical and practical point of view. A computer practical session in the afternoon gave students the opportunity to deepen their understanding by running a variational scheme on a simple numerical model. The day ended with an ice-breaker event, allowing attendees to discuss their particular research projects and their interest in data assimilation over a drink and some nibbles.

The remainder of the course looked at the theory and practice of other data assimilation methods, each supported by computer practical sessions: the ensemble Kalman filter on day 2, hybrid methods on day 3 and the particle filter on the final day. In between, students were treated to two lectures on practical applications: PhD student Jemima Tabeart spoke about her work on observation error correlations in the Met Office 1DVar data assimilation system, while research fellow Polly Smith spoke about coupled atmosphere-ocean data assimilation. At the end of day 3 a group meal was organised at the Zerodegrees microbrewery restaurant in the centre of Reading, giving further opportunity for informal discussion of the course material and how students could use the ideas in their own projects. At the end of the final day, after attendees were presented with their course certificates, the staff were also presented with a gift – a 900-piece Lego wind turbine from the Danish attendees. So, if you don’t hear from us for a while, you will know why!

All lecture notes from the course and material for the computer practicals are available to download from the course web site.

ISDA2019 in Japan


by Dr Natalie Douglas, University of Surrey and Dr Alison Fowler, University of Reading and NCEO

ISDA2019, the 7th International Symposium for Data Assimilation, was hosted in Kobe, Japan this year, from the 21st to the 24th January, at the RIKEN Center for Computational Science – home of the K computer. Attended by over 100 research scientists, the conference boasted a guest list of inspiring speakers and poster presenters from all over the globe. Topics of current relevance that prompted enthusiastic discussion included Big Data Assimilation, Uncertainty Quantification, Satellite and Coupled DA, Multi-Scale Processes and DA in Broader Applications, to name a few.

“I thoroughly recommend attending ISDA to anyone working in Data Assimilation. This was my first conference abroad, it was hugely informative and extremely well organised. Not only that, I had enormous amounts of fun getting to know and even making good friends with a lot of the key players in my field.” – Dr Natalie Douglas from the University of Surrey, UK.

“The ISDA provided a fascinating overview of the latest developments in Data Assimilation from around the world. It included a diverse range of applications from supernova astrophysics to my more familiar area of meteorology. I found the chance to spend a week with other scientists hugely beneficial to my work. After the symposium I enjoyed an extended visit to RIKEN to continue discussions on the efficient use of high-volume observations in their home-developed rapid-update-forecasting system. The aim of this state-of-the-art system is to provide advanced warnings of the most extreme rainfall events that can evolve in the matter of minutes. Each year in Japan, such events result in a multitude of deaths and wider devastation, and so such a system is sorely needed. Bringing this knowledge back to the UK may prove greatly beneficial as we prepare for the effects of a changing climate.”  – Dr Alison Fowler from the University of Reading, UK and NCEO.

From Germany to Brazil: on climate risk communication

by Javier García-Pintado

Last week, on 22-23 October 2018, around 230 scientists from the three ocean- and climate-related clusters of excellence in northern Germany met in Berlin for the joint conference on Ocean – Climate – Sustainability Research Frontiers. The participants brought lively discussions on scientific and societal action in ocean and climate research. Beyond the discussions oriented toward basic climate science and technical aspects, from a personal standpoint (perhaps because of its distance from my own work) I found most interesting a number of presentations from “The Future Ocean” cluster in Kiel, which includes scholars from politics, social science, philosophy and international law. Some of these presentations offered a window onto the connection between climate change and global and local politics in the countries most affected by rising sea levels and coastal erosion (e.g., tropical islands in the Indian Ocean, which generally rely on external aid). What these talks had in common was a call to improve the communication of climate and natural-risk science to society. Indeed, a huge component of the unpredictability in future climate projections comes from the societal component.

However, as one talk at the conference analysed, it seems that, ultimately, public opinion is mostly driven by what is shown on TV, and TV's offering is in turn mostly driven by the economic powers. Thus, as the writer Jose Luis Sampedro described more than six years ago, “public opinion” (defined in Wikipedia as the “desires, wants, and thinking of the majority of the people”) is in reality the “opinion of the media”, or the “opinion of the economic powers”. This connects clearly to the results of the Brazilian elections just yesterday and the new presidency, and thus to the very uncertain future of the management of the Amazon. Apart from the risks to biodiversity, further deforestation of the Amazon rainforest would make it impossible to cut carbon pollution and meet the aspirational target of no more than 1.5°C of global warming above pre-industrial temperatures set in the Paris climate agreement. Brazilian people (and they are not alone) seem either oblivious to the problem or convinced that it does not affect them (indeed, from a friend's personal communication last week, it appears that some people in Brazil sadly believe climate change is a European hoax to take control of their rainforest). Generally rising sea levels and increased storm-surge risks, as well as the extra energy accumulated in the Earth system in general (and the ocean in particular, boosting atmospheric convection and associated flood risks), will surely lead to further demand for online, continuously updated risk information to face emergency situations in the future city. One can wish the best for Brazil and the Amazon, which is the best for the world. In any case, let's hope that Copacabana is not swallowed by the sea before Rio is transformed into a resilient city.

Machine learning and data assimilation


by Rossella Arcucci

Imagine a world where it is possible to accurately predict the weather, climate, storms, tsunamis and other computationally intensive problems in real time from your laptop or even mobile phone – and, with access to a supercomputer, to predict at unprecedented scale and detail. This is the long-term aim of our work on Data Assimilation with Machine Learning at the Data Science Institute (Imperial College London, UK), and as such, we believe, it will be a key component of future numerical forecasting systems.

We have shown that integrating machine learning with data assimilation can increase the reliability of prediction, reducing errors by including information with an actual physical meaning from observed data. The resulting cohesion of machine learning and data assimilation is then blended into a future generation of fast and more accurate predictive models. The integration is based on the idea of using machine learning to learn from the past experience of an assimilation process, following the principle of the Bayesian approach.

Edward Norton Lorenz stated that “small causes can have larger effects” – the so-called butterfly effect. Imagine a world where it is possible to catch “small causes” in real time and predict their effects in real time as well. To know, to act! A world where science continuously learns from observation.

Figure 1. Comparison of the Lorenz system trajectories obtained by the use of Data Assimilation (DA) and by the integration of machine learning with Data assimilation (DA+NN)

Using ‘flood-excess volume’ to quantify and communicate flood mitigation schemes


by Tom Kent

  1. Background

Urban flooding is a major hazard worldwide, brought about primarily by intense rainfall and exacerbated by the built environment we live in. Leeds and Yorkshire are no strangers when it comes to the devastation wreaked by such events. The last decade alone has seen frequent flooding across the region, from the Calder Valley to the city of York, while the Boxing Day floods in 2015 inundated central Leeds with unprecedented river levels recorded along the Aire Valley. The River Aire originates in the Yorkshire Dales and flows roughly eastwards through Leeds before merging with the Ouse and Humber rivers and finally flowing into the North Sea. The Boxing Day flood resulted from record rainfall in the Aire catchment upstream of Leeds. To make matters worse, near-record rainfall in November meant that the catchment was severely saturated and prone to flooding in the event of more heavy rainfall. The ‘Leeds City Region flood review’ [1] subsequently reported the scale of the damage: “Over 4,000 homes and almost 2,000 businesses were flooded with the economic cost to the City Region being over half a billion pounds, and the subsequent rise in river levels allowed little time for communities to prepare.”

The Boxing Day floods and the lack of public awareness around the science of flooding led to the idea and development of the flood-demonstrator ‘Wetropolis’ (see Onno Bokhove’s previous DARE blog post). Wetropolis is a tabletop model of an idealised catchment that illustrates how extreme hydroclimatic events can cause a city to flood due to peaks in groundwater and river levels following random intense rainfall, and in doing so conceptualises the science of flooding in a way that is accessible to and directly engages the public. It also provides a scientific testing environment for flood modelling, control and mitigation, and data assimilation, and has inspired numerous discussions with flood practitioners and policy makers.

These discussions led us in turn to reconsider and analyse river flow data as a basis for assessing and quantifying flood events and various potential and proposed flood-mitigation measures. Such measures are generally engineering-based (e.g., storage reservoirs, defence walls) or nature-based (e.g., tree planting and peat restoration, ‘leaky’ woody-debris dams); a suite of these different measures constitutes a catchment- or city-wide flood-mitigation scheme. We aim to communicate this analysis and resulting flood-mitigation assessment in a concise and straightforward manner in order to assist decision-making for policy makers (e.g., city councils and the Environment Agency) and inform the general public.

  2. River data analysis and ‘flood-excess volume’

Rivers in the UK are monitored by a dense network of gauges that measure and record the river level (also known as water stage or depth) – typically every 15 minutes – at the gauge location. There are approximately 1500 gauging stations in total, and the flow data are collated by the Environment Agency and freely available to download. Shoothill’s GaugeMap website (http://www.gaugemap.co.uk/) provides an excellent tool for visualising this data in real time and browsing historic data in a user-friendly manner. Flood events are often characterised by their peak water level, i.e. the maximum water depth reached during the flood, and by statistical return periods. However, the flood peak conveys neither the duration nor the volume of the flood, and the meaning of a return period is often difficult for non-specialists to grasp. Here, we analyse river-level data from the Armley gauge station – located 2km upstream of Leeds city centre – and demonstrate the concept of ‘flood-excess volume’ as an alternative diagnostic for flood events.

The bottom-left panel of Figure 1 (it may help to tilt your head left!) shows the river level (h, in metres) as a function of time in days around Boxing Day 2015. The flood peaked at 5.21m overnight on the 26th/27th December, rising over 4m in just over 24 hours. Another quantity of interest in hydrology is the discharge (Q), or flow rate: the volume of water passing a location per second. This is usually not measured directly but can be determined via a rating curve, a site-specific empirical function Q = Q(h) that relates the water level to discharge. Each gauge station has its own rating curve, which is documented and updated by the Environment Agency. The rating curve for Armley is plotted here in the top-left panel (solid curve), with the dashed line denoting its linear approximation; the shaded area represents the estimated error in the relationship, which is expected to grow considerably in flood conditions (i.e., for high values of h). Applying the rating curve to the river-level data yields the discharge time series (top-right panel, called a hydrograph) for Armley. Note that the rating-curve error means that the discharge time series has some uncertainty (grey shaded zone around the solid curve). We see that the peak discharge is 330-360m3/s, around 300m3/s higher than 24 hours previously. Since discharge is the volume of water per second, the area under the discharge curve is the total volume of water. To define the flood-excess volume, we introduce a threshold height hT above which flooding occurs. For this flood event, local knowledge and photographic evidence suggested that flooding commenced when river levels exceeded 3.9m, so here we choose the threshold hT = 3.9m. This is marked as a vertical dotted line on the left panels: following it up to the rating curve, one obtains a threshold discharge QT = Q(hT) = 219.1m3/s (horizontal dotted line). The flood-excess volume (FEV) is the blue shaded area between the discharge curve and the threshold discharge QT. Put simply, this is the volume of water that caused flooding, and therefore the volume of flood water one seeks to mitigate (i.e., reduce to zero) by the cumulative effect of various flood-mitigation measures. The FEV, here around 9.34 million cubic metres, has a corresponding flood duration Tf = 32 hours, which is the time between the river level first exceeding hT and subsequently dropping below it. The rectangle represents the mean approximation to the FEV, which, in the absence of frequent flow data, can be used to estimate the FEV (blue shaded area) based on a mean water level (hm) and discharge (Qm).
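The FEV calculation itself is simple enough to sketch in a few lines. The hydrograph below is synthetic (a Gaussian-shaped flood wave with invented magnitudes), not the Armley data; only the threshold discharge QT = 219.1m3/s and the 15-minute sampling interval are taken from the text:

```python
import numpy as np

# Synthetic hydrograph: discharge sampled every 15 minutes over four days,
# with a Gaussian-shaped flood wave (illustrative values only).
dt = 15 * 60.0                           # sampling interval [s]
t = np.arange(0, 4 * 24 * 3600, dt)      # sample times [s]
Q = 100.0 + 260.0 * np.exp(-((t - 1.5 * 86400) / (0.5 * 86400)) ** 2)  # [m3/s]

QT = 219.1                               # threshold discharge Q(hT) [m3/s]
excess = np.maximum(Q - QT, 0.0)         # discharge above the threshold
FEV = np.sum(excess) * dt                # flood-excess volume [m3]
Tf = np.count_nonzero(excess) * dt / 3600.0  # flood duration [hours]

# Express the FEV as a 2-metre-deep square 'flood-excess lake'
side = np.sqrt(FEV / 2.0)                # lake side-length [m]
print(f"FEV = {FEV/1e6:.2f} Mm3, duration = {Tf:.0f} h, lake side = {side:.0f} m")
```

The same few lines applied to the real Armley discharge series (with its rating-curve uncertainty propagated through) would reproduce the 9.34Mm3 and 32-hour figures quoted above.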

  3. Using FEV in flood-mitigation assessment

Having defined FEV in this way, we are motivated by the following questions: (i) how can we articulate FEV (often many million cubic metres) in a more comprehensible manner? And (ii) what fraction of the FEV is reduced, and at what cost, by a particular flood-mitigation measure? Our simple yet powerful idea is to express the FEV as a 2-metre-deep square ‘flood-excess lake’ with side-length on the order of a kilometre. For example, we can break down the FEV for Armley as follows: 9.34Mm3 ≈ (2150² x 2)m3, which is a 2-metre-deep lake with side-length 2.15km. This is immediately easier to visualise and goes some way to conveying the magnitude of the flood. Since the depth is shallow relative to the side-length, we can view this ‘flood-excess lake’ from above as a square and ask what fraction of the lake is accounted for by the potential storage capacity of flood-mitigation measures. The result is a graphical tool that (i) contextualises the magnitude of the flood relative to the river and its valley/catchment and (ii) facilitates quick and direct assessment of the contribution and value of various mitigation measures.

Figure 2 shows the Armley FEV as a 2m-deep ‘flood-excess lake’ (not to scale). Given the size of the lake as well as the geography of the river valley concerned, one can begin to make a ballpark estimate of the contribution and effectiveness of flood-plain enhancement for flood storage and other flood-mitigation measures. Superimposed on the bird’s-eye view of the lake in figure 3 are two scenarios from our hypothetical Leeds Flood Alleviation Scheme II (FASII+) that comprise: (S1) building flood walls and using a flood-water storage site at Calverley; and (S2) building (lower) flood walls and using a flood-water storage site at Rodley.

The available flood-storage volume is estimated to be 0.75Mm3 and 1.1Mm3 at Calverley and Rodley respectively, corresponding to 8% and 12% of the FEV. The absolute cost of each measure is incorporated, as well as the value (i.e., cost per 1% of FEV mitigated), while the overall contribution in terms of volume is simply the fraction of the lake covered by each measure. It is immediately evident that both schemes provide 100% mitigation and that (S1) provides better value (£0.75M/1% against £0.762M/1%). We can also see that although storage sites offer less value than building flood walls, a larger storage site allows lower flood walls to be built which may be an important factor for planning departments. In this case, although (S2) is more expensive overall, the Rodley storage site (£1.17M/1%) is better value than Calverley storage site (£1.25M/1%) and means that flood walls are lower. It is then up to policy-makers to make the best decision based on all the available evidence and inevitable constraints. Our hypothetical FASII+ comprises 5 scenarios in total and is reported in [2].
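The ‘value’ metric behind these comparisons is just cost divided by the percentage of the FEV mitigated. The sketch below uses the storage volumes quoted above; the absolute costs are back-calculated from the quoted £M-per-1% figures and should be treated as illustrative, not the actual FASII+ costings:

```python
# Cost-effectiveness of flood-storage sites as cost per 1% of FEV mitigated.
FEV = 9.34e6  # flood-excess volume [m3]

# Each measure: (storage capacity [m3], assumed absolute cost [£M])
measures = {
    "Calverley storage": (0.75e6, 10.0),
    "Rodley storage":    (1.10e6, 13.8),
}

for name, (volume, cost) in measures.items():
    pct = 100.0 * volume / FEV   # fraction of the FEV mitigated [%]
    value = cost / pct           # cost per 1% of FEV mitigated [£M/1%]
    print(f"{name}: {pct:.0f}% of FEV at £{value:.2f}M per 1%")
```

With these assumed costs, the computed values come out near the £1.25M/1% and £1.17M/1% figures quoted in the text, showing how the larger Rodley site can be better value despite its higher absolute cost.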

The details are in some sense of secondary importance here; the take-home message is that the FEV analysis offers a protocol to optimise the assessment of mitigation schemes, including cost-effectiveness, in a comprehensible way. In particular, the graphical presentation of the FEV as partitioned flood-excess lakes facilitates quick and direct interpretation of competing schemes and scenarios, and in doing so communicates clearly the evidence needed to make rational and important decisions. Finally, we stress that FEV should be used either prior to or in tandem with more detailed hydrodynamic numerical modelling; nonetheless it offers a complementary way of classifying flood events and enables evidence-based decision-making for flood-mitigation assessment. For more information, including case studies in the UK and France, see [2,3,4]; summarised in [5].


[1] West Yorkshire Combined Authority 2016. Leeds City Region flood review report. December 2016. https://www.the-lep.com/media/2276/leeds-city-region-flood-review-report-final.pdf

[2] O. Bokhove, M. Kelmanson, T. Kent (2018a): On using flood-excess volume in flood mitigation, exemplified for the River Aire Boxing Day Flood of 2015. Subm. evidence-synthesis article: Proc. Roy. Soc. A. See also: https://eartharxiv.org/stc7r/

[3] O. Bokhove, M. Kelmanson, T. Kent, G. Piton, J.-M. Tacnet (2018b): Communicating nature-based solutions using flood-excess volume for three UK and French river floods. In prep. See also the preliminary version on: https://eartharxiv.org/87z6w/

[4] O. Bokhove, M. Kelmanson, T. Kent (2018c): Using flood-excess volume in flood mitigation to show that upscaling beaver dams for protection against extreme floods proves unrealistic. Subm. evidence-synthesis article: Proc. Roy. Soc. A. See also: https://eartharxiv.org/w9evx/

[5] ‘Using flood-excess volume to assess and communicate flood-mitigation schemes’, poster presentation for ‘Evidence-based decisions for UK Landscapes’, 17-18 September 2018, INI, Cambridge. Available here: http://www1.maths.leeds.ac.uk/~amttk/files/INI_sept2018.pdf

Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography


by Fabio L. R. Diniz    fabio.diniz@inpe.br

I attended the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, also known as the Adjoint Workshop, which took place in Aveiro, Portugal between 1st and 6th July 2018. This opportunity was given to me through funding for early-career researchers from the Engineering and Physical Sciences Research Council (EPSRC) Data Assimilation for the Resilient City (DARE) project in the UK. All recipients of this fund who were participating in the workshop for the first time were invited to attend a pre-workshop day of tutorials presenting the fundamentals of sensitivity analysis and data assimilation, geared to early-career researchers. I would like to thank the EPSRC DARE award committee and the organizers of the Adjoint Workshop for finding me worthy of this award.

Currently I am a postgraduate student at the Brazilian National Institute for Space Research (INPE) and have been visiting the Global Modeling and Assimilation Office (GMAO) of the US National Aeronautics and Space Administration (NASA) for almost a year as part of my PhD, comparing two approaches to obtaining what is known as the observation impact measure. This measure is a direct application of sensitivity analysis in data assimilation and is basically a measure of how much each observation helps to improve short-range forecasts. In meteorology, specifically in numerical weather prediction, these observations come from the global observing system, which includes in situ observations (e.g., radiosondes and surface observations) and remotely sensed observations (e.g., satellite sensors). During my visit, I have been working under the supervision of Ricardo Todling from NASA/GMAO, comparing results from two strategies for assessing the impact of observations on forecasts using the data assimilation system available at NASA/GMAO: one based on the traditional adjoint technique, the other based on ensembles. Preliminary results from this comparison were presented during the Adjoint Workshop.

The Adjoint Workshop provided a perfect environment for early-career researchers to interact with experts in the field from all around the world. Attending the workshop helped me engage in healthy discussions about my work and about data assimilation in general. The full programme, with abstracts and presentations, is available on the workshop website: https://www.morgan.edu/adjoint_workshop

Thanks to everyone who contributed to this workshop.

Investigating alternative optimisation methods for variational data assimilation

Investigating alternative optimisation methods for variational data assimilation

by Maha Kaouri

Supported by the DARE project, a few others from the University of Reading and I recently attended the week-long workshop on sensitivity analysis and data assimilation in meteorology and oceanography (a.k.a. the Adjoint Workshop) in Aveiro, Portugal.

The week consisted of 60 talks on a variety of selected topic areas, including sensitivity analysis and general theoretical data assimilation. I presented the latest results from my PhD research in this area and discussed the benefits of using globally convergent methods in variational data assimilation (VarDA) problems. Variational data assimilation combines two sources of information: a mathematical model and real data (e.g. satellite observations).
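In its simplest (3D-Var) form, this combination is expressed as the minimisation of a cost function that penalises departures from both sources of information, weighted by their respective error covariances:

```latex
J(x) = \tfrac{1}{2}\,(x - x_b)^{\top} B^{-1} (x - x_b)
     + \tfrac{1}{2}\,\big(y - H(x)\big)^{\top} R^{-1} \big(y - H(x)\big)
```

Here $x_b$ is the background (model) state, $y$ the observations, $H$ the observation operator mapping model space to observation space, and $B$ and $R$ the background and observation error covariance matrices. The minimiser of $J$ is the analysis, i.e. the best compromise between model and data.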

The overall aim of my research is to investigate the latest mathematical advances in optimisation to understand whether the solution of VarDA problems could be improved or obtained more efficiently through the use of alternative optimisation methods, whilst keeping computational cost and calculation time to a minimum. A possible application of the alternative methods would be to estimate the initial conditions for a weather forecast, where the dynamical equations include the physics of the Earth system. Weather forecasting has a short time window (the forecast is no longer useful once the weather event occurs), so it is important to investigate alternative methods that provide an optimal solution in the time available.

The VarDA problem is known in numerical optimisation as a nonlinear least-squares problem, which is solved using an iterative method – one that takes an initial guess of the solution and generates a better guess at each step of the algorithm. In VarDA the problem is solved as a series of simpler, linear least-squares problems, using a method equivalent to the Gauss-Newton optimisation method. The Gauss-Newton method is not globally convergent, in the sense that it does not guarantee convergence to a stationary point from an arbitrary initial guess. This motivates the investigation of newly developed numerical optimisation methods, such as globally convergent methods, which use safeguards (for example line searches or trust regions) to guarantee convergence from an arbitrary starting point. The use of such methods could enable us to improve the estimate of the initial conditions of a weather forecast within the limited time and computational cost available.
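To make the idea concrete, here is a minimal sketch of a Gauss-Newton iteration with an optional Levenberg-Marquardt-style damping term, applied to a toy curve-fitting problem. This is an illustration only, not the operational VarDA solver: the function names and the test problem are invented for this example, and the damping shown is just one simple safeguard of the kind globally convergent methods build on.

```python
import numpy as np

def gauss_newton(r, J, x0, damping=0.0, max_iter=50, tol=1e-10):
    """Minimise ||r(x)||^2 by (damped) Gauss-Newton steps.

    With damping = 0 this is plain Gauss-Newton; damping > 0 gives a
    Levenberg-Marquardt-style safeguard, regularising the normal
    equations (J^T J + damping*I) dx = -J^T r so the step stays
    well-defined even when J^T J is ill-conditioned.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        ri, Ji = r(x), J(x)
        dx = np.linalg.solve(Ji.T @ Ji + damping * np.eye(x.size),
                             -Ji.T @ ri)
        x = x + dx
        if np.linalg.norm(dx) < tol:  # step small enough: converged
            break
    return x

# Toy nonlinear least-squares problem: fit y = a*exp(b*t) to noiseless data.
t = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)

def residual(x):
    a, b = x
    return a * np.exp(b * t) - y

def jacobian(x):
    a, b = x
    e = np.exp(b * t)
    # Columns: d(residual)/da and d(residual)/db.
    return np.column_stack([e, a * t * e])

x_hat = gauss_newton(residual, jacobian, x0=[1.0, 0.0], damping=1e-3)
```

From the starting guess (1.0, 0.0) the iteration recovers the true parameters; from a poor enough starting point, plain Gauss-Newton can diverge, which is precisely the behaviour that globally convergent safeguards are designed to rule out.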

The conference brought together many key figures in weather forecasting as well as those new to the field such as myself, providing us with the opportunity to learn from each other during the talks and poster session. I had the advantage of presenting my talk on the first day, allowing me to spend the rest of the week receiving feedback from the attendees who were eager to discuss ideas and make suggestions for future work. The friendly atmosphere of the workshop made it easier as an early-career researcher to freely and comfortably converse with those more senior during the breaks.

I would like to thank the DARE project for funding my attendance at the workshop and the organising committee for hosting such an insightful event.