Summer school on Data Assimilation and its applications in oceanography, hydrology, risk & safety and reservoir engineering, 2019

by Haonan Ren, PhD student in Atmosphere, Oceans & Climate, University of Reading
August 14, 2019

From 22nd July to 2nd August, the Summer School on Data Assimilation and its applications in oceanography, hydrology, risk & safety and reservoir engineering was held at the Faculty of Mechanics, Polytechnic University of Timisoara, Romania. This two-week summer school has been organized every two years since 2009 and primarily targets students and researchers at an early stage of their careers, with or without previous experience in data assimilation. This sixth edition of the summer school brought together 35 participants from universities, research institutes and industry all over the world.

The goal of the summer school is to gather experts in data assimilation from different disciplines (statistics, pure mathematics, engineering, etc.) and draw on their knowledge so that the participants gain a basic understanding of data assimilation and its applications, and get a taste of the advantages of using data assimilation in different fields. The participants also work hands-on with dedicated academic and commercial software, and have extensive discussions and exchanges of ideas with the instructors and the other participants. The lectures in the first week focused on the theoretical framework of data assimilation. They began with basic concepts and derivations of the Kalman Filter (KF), including the motivation for using data assimilation in different fields. A Monte-Carlo (ensemble) formulation of the KF, the Ensemble Kalman Filter (EnKF), was then introduced, together with the techniques needed when using the EnKF in practice, such as localization and inflation.

The lectures in the rest of the first week presented another data assimilation method, the Particle Filter (PF), and introduced the general ideas of data assimilation for chaotic and dynamical systems. Each day, a morning of intensive lectures was followed by a two-hour practical session in the afternoon, in which the participants worked through exercises based on Bayes' Theorem and had the opportunity to run data assimilation schemes on simple models using different programming frameworks. The practicals were strongly connected to the lectures, so that the students could develop a better understanding of data assimilation.
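
For readers curious about what such a practical involves, here is a minimal sketch of a stochastic EnKF analysis step. This is a generic illustration rather than the school's actual exercise code: the toy state, the observation operator and the error statistics are placeholder assumptions, and no localization or inflation is applied.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_analysis(ensemble, y, H, R):
    """Stochastic EnKF analysis step.

    ensemble : (n, N) array holding N state vectors of dimension n
    y        : (p,) observation vector
    H        : (p, n) linear observation operator
    R        : (p, p) observation-error covariance
    """
    n, N = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)  # ensemble anomalies
    Pf = X @ X.T / (N - 1)                               # forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)       # Kalman gain
    # Perturb the observations so the analysis ensemble keeps the right spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return ensemble + K @ (Y - H @ ensemble)

# Toy example: a 3-variable state with only the first component observed
ens = rng.normal(0.0, 1.0, size=(3, 20))
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
analysis = enkf_analysis(ens, np.array([0.5]), H, R)
print(analysis.mean(axis=1))
```

In a full assimilation cycle, an analysis step like this alternates with forecasts of a dynamical model; for larger problems, the localization and inflation techniques covered in the lectures become essential.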

The summer school arranged lunches at a local restaurant within walking distance of the university, and the organizers booked local restaurants near the city centre of Timisoara at the end of each day, so that both the instructors and the students could unwind after an exhausting day and enjoy the local cuisine and cold beers. During the intervening weekend, the summer school organized a trip to the border region of Romania, with hiking and sightseeing of Romania's natural landscape.

After a relaxing weekend, the second week concentrated on the applications of data assimilation in different areas. The instructors started with some fundamentals of scientific computing in different programming languages, followed by a demonstration of numerical schemes for numerical models. The lectures then looked specifically at the applications of data assimilation to ocean and climate models. Along the way, the instructors also provided some background on oceanography and climate, which gave the students better insight into the models used in real-world applications. The second week also featured several lectures on the application of the Ensemble Kalman Smoother (EnKS) and other methods to reservoir engineering (oil and gas) and to decision-making problems. At the end of the final week, the lectures introduced the field of big data and the geomechanical applications of data assimilation schemes.

This summer school offered all participants a fulfilling experience of data assimilation, in both its theoretical framework and its practical applications. For instructors and students alike, it also provided an opportunity to discuss their work and exchange opinions and experiences.

I would like to thank the EPSRC DARE project and Prof. Sarah Dance for the funding that enabled me to attend this summer school.

Data assimilation training at the University of Reading

by Amos Lawless

In March 2019 the Data Assimilation Research Centre at the University of Reading organised a 4-day training course in data assimilation, in collaboration with the National Centre for Earth Observation, ECMWF and the DARE project. The course was attended by 24 early-career researchers from 10 different countries, including scientists from universities, research institutes and industry.

The aim of the course was to give students a solid grounding in the theory of data assimilation methods, as well as the opportunity to apply data assimilation methods to a range of numerical models. The first day of the course saw a general introduction to data assimilation, followed by a more in-depth look at variational methods, both from a theoretical and practical point of view. A computer practical session in the afternoon gave students the opportunity to deepen their understanding by running a variational scheme on a simple numerical model. The day ended with an ice-breaker event, allowing attendees to discuss their particular research projects and their interest in data assimilation over a drink and some nibbles.

The remainder of the course looked at the theory and practice of other data assimilation methods, each supported by computer practical sessions, including the ensemble Kalman filter on day 2, hybrid methods on day 3 and the particle filter on the final day. In between, students were treated to two lectures on practical applications: PhD student Jemima Tabeart spoke about her work on observation error correlations in the Met Office 1DVar data assimilation system, while research fellow Polly Smith spoke about coupled atmosphere-ocean data assimilation. At the end of day 3 a group meal was organised at the Zerodegrees microbrewery restaurant in the centre of Reading, giving further opportunity for informal discussion of the course material and of how students could use the ideas in their own projects. At the end of the final day, after attendees were presented with their course certificates, the staff were also presented with a gift: a 900-piece Lego wind turbine from the Danish attendees. So, if you don't hear from us for a while, you will know why!

All lecture notes from the course and material for the computer practicals are available to download from the course web site: https://research.reading.ac.uk/met-darc/ecmwf2019/

ISDA2019 in Japan

by Dr Natalie Douglas, University of Surrey and Dr Alison Fowler, University of Reading and NCEO

ISDA2019, the 7th International Symposium for Data Assimilation, was hosted in Kobe, Japan this year, from the 21st to the 24th January, at the RIKEN Center for Computational Science, home of the K-computer. Attended by over 100 research scientists, the conference boasted a guest list of inspiring speakers and poster presenters from all over the globe. Topics of current relevance that provoked enthusiastic discussion included Big Data Assimilation, Uncertainty Quantification, Satellite and Coupled DA, Multi-Scale Processes and DA in Broader Applications, to name a few.

“I thoroughly recommend attending ISDA to anyone working in Data Assimilation. This was my first conference abroad, it was hugely informative and extremely well organised. Not only that, I had enormous amounts of fun getting to know and even making good friends with a lot of the key players in my field.” – Dr Natalie Douglas from the University of Surrey, UK.

“The ISDA provided a fascinating overview of the latest developments in Data Assimilation from around the world. It included a diverse range of applications from supernova astrophysics to my more familiar area of meteorology. I found the chance to spend a week with other scientists hugely beneficial to my work. After the symposium I enjoyed an extended visit to RIKEN to continue discussions on the efficient use of high-volume observations in their home-developed rapid-update-forecasting system. The aim of this state-of-the-art system is to provide advanced warnings of the most extreme rainfall events that can evolve in the matter of minutes. Each year in Japan, such events result in a multitude of deaths and wider devastation, and so such a system is sorely needed. Bringing this knowledge back to the UK may prove greatly beneficial as we prepare for the effects of a changing climate.”  – Dr Alison Fowler from the University of Reading, UK and NCEO.

From Germany to Brazil: on climate risk communication

by Javier García-Pintado

Last week, on 22-23 October 2018, around 230 scientists from the three ocean- and climate-related clusters of excellence in northern Germany met in Berlin for the joint conference on Ocean – Climate – Sustainability Research Frontiers. The participants engaged in lively discussions on scientific and societal action in ocean and climate research. Apart from the discussions oriented more towards basic climate science and technical aspects, from a personal standpoint (perhaps because of its distance from my own work) I found most interesting a number of presentations from “The Future Ocean” cluster in Kiel, which includes scholars from politics, social science, philosophy and international law. Some of these presentations offered a window on the connection between climate change and global and local politics in the countries most affected by rising sea levels and coastal erosion (e.g., tropical islands in the Indian Ocean, which generally rely on external aid). What these talks had in common was the message that the communication of climate and natural-risk science to society needs to improve. Indeed, a huge component of the uncertainty in future climate projections comes from the societal component.

However, as analyzed in one talk at the conference, it seems that, ultimately, public opinion is mostly driven by what is shown on TV, and the TV offering is in turn mostly driven by the economic powers. Thus, as the writer José Luis Sampedro described more than six years ago, “public opinion” (defined in Wikipedia as consisting of the “desires, wants, and thinking of the majority of the people”) is in reality the “opinion of the media” or the “opinion of the economic powers”. This connects directly to the results of the Brazilian elections just yesterday and the new presidency, and thus to the resulting, very uncertain future of the management of the Amazon. Apart from the risks to biodiversity, further deforestation of the Amazon rainforest would make it impossible to cut carbon pollution enough to meet the aspirational target of no more than 1.5ºC of global warming above pre-industrial temperatures set in the Paris climate agreement. Brazilians (and they are not alone) seem either oblivious to the problem or convinced that it does not affect them (indeed, from a friend's personal communication last week, it appears that some people in Brazil sadly believe climate change is a European hoax designed to take control of their rainforest). Rising sea levels and increased storm-surge risks, together with the extra energy accumulated in the Earth system in general (and in the ocean in particular, boosting atmospheric convection and the associated flood risks), will surely lead to a growing demand for online, continuously updated risk information to face emergency situations in the future city. One can wish the best for Brazil and the Amazon, which is the best for the world. In any case, let's hope that Copacabana is not swallowed by the sea before Rio is transformed into a resilient city.

Machine learning and data assimilation

by Rossella Arcucci

Imagine a world where it is possible to accurately predict the weather, climate, storms, tsunamis and other computationally intensive problems in real time from your laptop or even your mobile phone; with access to a supercomputer, one could predict at unprecedented scale and detail. This is the long-term aim of our work on Data Assimilation with Machine Learning at the Data Science Institute (Imperial College London, UK), and as such, we believe, it will be a key component of future numerical forecasting systems.

We have shown that integrating machine learning with data assimilation can increase the reliability of predictions, reducing errors by including information with actual physical meaning from observed data. The resulting cohesion of machine learning and data assimilation is then blended into a future generation of fast and more accurate predictive models. The integration is based on the idea of using machine learning to learn from the past behaviour of an assimilation process, following the principles of the Bayesian approach.
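
As a toy illustration of this idea, the sketch below trains a small neural network on pairs of past forecasts and the corresponding analysis increments produced by an assimilation scheme, and then uses it as a cheap surrogate for the assimilation correction. This is a minimal sketch under our own simplifying assumptions (synthetic stand-in data, a tiny NumPy network), not the actual system developed at the Data Science Institute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: past forecast states and the analysis
# increments (analysis minus forecast) that data assimilation produced.
n, m = 3, 500                          # state dimension, number of past cycles
forecasts = rng.normal(size=(m, n))
increments = 0.3 * forecasts + 0.05 * rng.normal(size=(m, n))  # stand-in data

# One-hidden-layer network trained to map forecast -> analysis increment
h, lr = 16, 1e-2
W1 = rng.normal(scale=0.1, size=(n, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, n)); b2 = np.zeros(n)

for _ in range(2000):
    z = np.tanh(forecasts @ W1 + b1)   # hidden layer
    pred = z @ W2 + b2                 # predicted increments
    err = pred - increments
    # Backpropagation of the mean-squared-error loss
    gW2 = z.T @ err / m; gb2 = err.mean(axis=0)
    dz = (err @ W2.T) * (1 - z**2)
    gW1 = forecasts.T @ dz / m; gb1 = dz.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# At forecast time the trained network supplies a cheap correction,
# standing in for a full assimilation step:
x_f = rng.normal(size=n)
x_a = x_f + (np.tanh(x_f @ W1 + b1) @ W2 + b2)
print(x_a)
```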

Edward Norton Lorenz stated that “small causes can have larger effects”, the so-called butterfly effect. Imagine a world where it is possible to catch those “small causes” in real time, and to predict their effects in real time as well. To know, to act! A world where science works by continuously learning from observations.

Figure 1. Comparison of the Lorenz system trajectories obtained using data assimilation alone (DA) and using the integration of machine learning with data assimilation (DA+NN).

Using ‘flood-excess volume’ to quantify and communicate flood mitigation schemes

by Tom Kent

  1. Background

Urban flooding is a major hazard worldwide, brought about primarily by intense rainfall and exacerbated by the built environment we live in. Leeds and Yorkshire are no strangers to the devastation wreaked by such events. The last decade alone has seen frequent flooding across the region, from the Calder Valley to the city of York, while the Boxing Day floods in 2015 inundated central Leeds with unprecedented river levels recorded along the Aire Valley. The River Aire originates in the Yorkshire Dales and flows roughly eastwards through Leeds before merging with the Ouse and Humber rivers and finally flowing into the North Sea. The Boxing Day flood resulted from record rainfall in the Aire catchment upstream of Leeds. To make matters worse, near-record rainfall in November meant that the catchment was severely saturated and prone to flooding in the event of more heavy rainfall. The ‘Leeds City Region flood review’ [1] subsequently reported the scale of the damage: “Over 4,000 homes and almost 2,000 businesses were flooded with the economic cost to the City Region being over half a billion pounds, and the subsequent rise in river levels allowed little time for communities to prepare.”

The Boxing Day floods and the lack of public awareness around the science of flooding led to the idea and development of the flood-demonstrator ‘Wetropolis’ (see Onno Bokhove’s previous DARE blog post). Wetropolis is a tabletop model of an idealised catchment that illustrates how extreme hydroclimatic events can cause a city to flood due to peaks in groundwater and river levels following random intense rainfall, and in doing so conceptualises the science of flooding in a way that is accessible to and directly engages the public. It also provides a scientific testing environment for flood modelling, control and mitigation, and data assimilation, and has inspired numerous discussions with flood practitioners and policy makers.

These discussions led us in turn to reconsider and analyse river flow data as a basis for assessing and quantifying flood events and various potential and proposed flood-mitigation measures. Such measures are generally engineering-based (e.g., storage reservoirs, defence walls) or nature-based (e.g., tree planting and peat restoration, ‘leaky’ woody-debris dams); a suite of these different measures constitutes a catchment- or city-wide flood-mitigation scheme. We aim to communicate this analysis and resulting flood-mitigation assessment in a concise and straightforward manner in order to assist decision-making for policy makers (e.g., city councils and the Environment Agency) and inform the general public.

  2. River data analysis and ‘flood-excess volume’

Rivers in the UK are monitored by a dense network of gauges that measure and record the river level (also known as water stage/depth), typically every 15 minutes, at the gauge location. There are approximately 1500 gauging stations in total, and the flow data are collated by the Environment Agency and freely available to download. Shoothill’s GaugeMap website (http://www.gaugemap.co.uk/) provides an excellent tool for visualising these data in real time and browsing historic data in a user-friendly manner. Flood events are often characterised by their peak water level, i.e. the maximum water depth reached during the flood, and by statistical return periods. However, the flood peak conveys neither the duration nor the volume of the flood, and the meaning of a return period is often difficult to grasp for non-specialists. Here, we analyse river-level data from the Armley gauge station, located 2km upstream of Leeds city centre, and demonstrate the concept of ‘flood-excess volume’ as an alternative diagnostic for flood events.

The bottom-left panel of Figure 1 (it may help to tilt your head left!) shows the river level (h, in metres) as a function of time in days around Boxing Day 2015. The flood peaked at 5.21m overnight on the 26th/27th December, rising over 4m in just over 24 hours. Another quantity of interest in hydrology is the discharge (Q), or flow rate: the volume of water passing a location per second. This is usually not measured directly but can be determined via a rating curve, a site-specific empirical function Q = Q(h) that relates the water level to the discharge. Each gauge station has its own rating curve, which is documented and updated by the Environment Agency. The rating curve for Armley is plotted in the top-left panel (solid curve), with the dashed line denoting its linear approximation; the shaded area represents the estimated error in the relationship, which is expected to grow considerably in flood conditions (i.e., for high values of h). Applying the rating curve to the river-level data yields the discharge time series for Armley (top-right panel, called a hydrograph). Note that the rating-curve error means that the discharge time series has some uncertainty (grey shaded zone around the solid curve). We see that the peak discharge is 330-360m3/s, around 300m3/s higher than 24 hours previously.

Since discharge is the volume of water per second, the area under the discharge curve is the total volume of water. To define the flood-excess volume, we introduce a threshold height hT above which flooding occurs. For this flood event, local knowledge and photographic evidence suggested that flooding commenced when river levels exceeded 3.9m, so here we choose the threshold hT = 3.9m. This is marked as a vertical dotted line on the left panels: following it up to the rating curve, one obtains a threshold discharge QT = Q(hT) = 219.1m3/s (horizontal dotted line). The flood-excess volume (FEV) is the blue shaded area between the discharge curve and the threshold discharge QT. Put simply, this is the volume of water that caused flooding, and therefore the volume of flood water one seeks to mitigate (i.e., reduce to zero) by the cumulative effect of various flood-mitigation measures. The FEV, here around 9.34 million cubic metres, has a corresponding flood duration Tf = 32 hours, which is the time between the river level first exceeding hT and subsequently dropping below it. The rectangle represents the mean approximation to the FEV which, in the absence of frequent flow data, can be used to estimate the FEV (blue shaded area) from a mean water level (hm) and discharge (Qm).
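
For the technically minded, the FEV computation itself is straightforward, as the sketch below shows. The river-level series and the power-law rating curve are synthetic placeholders (the coefficient is merely tuned so that Q(3.9) ≈ 219m3/s, as at Armley), so the resulting numbers are purely illustrative.

```python
import numpy as np

# Synthetic 15-minute river-level series around an idealised flood wave (m)
dt = 15 * 60                                   # sampling interval (s)
t = np.arange(0, 4 * 24 * 3600, dt)            # four days of data
h = 2.0 + 3.2 * np.exp(-((t - 1.8e5) / 6e4) ** 2)

def rating_curve(h):
    """Placeholder power-law rating curve Q(h). A real gauge station has a
    site-specific, empirically fitted curve with quantified uncertainty."""
    return 18.9 * h ** 1.8                     # tuned so Q(3.9) ~ 219 m^3/s

hT = 3.9                                       # threshold level for flooding (m)
QT = rating_curve(hT)                          # threshold discharge (m^3/s)
Q = rating_curve(h)

excess = np.maximum(Q - QT, 0.0)               # discharge above the threshold
fev = np.trapz(excess, dx=dt)                  # flood-excess volume (m^3)
Tf = (h > hT).sum() * dt / 3600                # flood duration (hours)

print(f"FEV = {fev / 1e6:.2f} million m^3 over {Tf:.0f} h")
print(f"Side of an equivalent 2m-deep square lake: {np.sqrt(fev / 2):.0f} m")
```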

  3. Using FEV in flood-mitigation assessment

Having defined FEV in this way, we are motivated by the following questions: (i) how can we articulate FEV (which is often many million cubic metres) in a more comprehensible manner? And (ii) what fraction of the FEV is reduced, and at what cost, by a particular flood-mitigation measure? Our simple yet powerful idea is to express the FEV as a 2-metre-deep square ‘flood-excess lake’ with a side length on the order of a kilometre. For example, we can break down the FEV for Armley as follows: 9.34Mm3 ≈ (2150² x 2)m3, which is a 2-metre-deep lake with a side length of 2.15km. This is immediately easier to visualise and goes some way to conveying the magnitude of the flood. Since the depth is shallow relative to the side length, we can view this ‘flood-excess lake’ from above as a square and ask what fraction of the lake is accounted for by the potential storage capacity of flood-mitigation measures. The result is a graphical tool that (i) contextualises the magnitude of the flood relative to the river and its valley/catchment and (ii) facilitates quick and direct assessment of the contribution and value of various mitigation measures.

Figure 2 shows the Armley FEV as a 2m-deep ‘flood-excess lake’ (not to scale). Given the size of the lake as well as the geography of the river valley concerned, one can begin to make a ballpark estimate of the contribution and effectiveness of flood-plain enhancement for flood storage and of other flood-mitigation measures. Superimposed on the bird’s-eye view of the lake in Figure 3 are two scenarios from our hypothetical Leeds Flood Alleviation Scheme II (FASII+), which comprise: (S1) building flood walls and using a flood-water storage site at Calverley; and (S2) building (lower) flood walls and using a flood-water storage site at Rodley.

The available flood-storage volume is estimated to be 0.75Mm3 at Calverley and 1.1Mm3 at Rodley, corresponding to 8% and 12% of the FEV respectively. The absolute cost of each measure is incorporated, as well as its value (i.e., cost per 1% of FEV mitigated), while the overall contribution in terms of volume is simply the fraction of the lake covered by each measure. It is immediately evident that both schemes provide 100% mitigation and that (S1) provides better value (£0.75M/1% against £0.762M/1%). We can also see that although storage sites offer less value than building flood walls, a larger storage site allows lower flood walls to be built, which may be an important factor for planning departments. In this case, although (S2) is more expensive overall, the Rodley storage site (£1.17M/1%) is better value than the Calverley storage site (£1.25M/1%) and means that the flood walls are lower. It is then up to policy-makers to make the best decision based on all the available evidence and the inevitable constraints. Our hypothetical FASII+ comprises 5 scenarios in total and is reported in [2].
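
The bookkeeping behind these percentages and value figures is elementary; the sketch below reproduces the FEV fractions of the two storage sites and shows the value metric (cost per 1% of FEV mitigated) applied to a hypothetical cost, since the actual scheme costs are tabulated in [2].

```python
fev = 9.34e6                                        # Armley FEV (m^3)
storage = {"Calverley": 0.75e6, "Rodley": 1.1e6}    # storage volumes (m^3)

for site, volume in storage.items():
    print(f"{site}: {100 * volume / fev:.0f}% of FEV")   # -> 8% and 12%

def value_per_percent(cost_millions, fraction_percent):
    """Value metric used here: cost (in £M) per 1% of FEV mitigated."""
    return cost_millions / fraction_percent

# e.g. a measure costing a hypothetical £10M that mitigates 8% of the FEV:
print(f"£{value_per_percent(10.0, 8.0):.2f}M per 1% of FEV")
```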

The details are in some sense of secondary importance here; the take-home message is that FEV analysis offers a protocol for optimising the assessment of mitigation schemes, including their cost-effectiveness, in a comprehensible way. In particular, the graphical presentation of the FEV as partitioned flood-excess lakes facilitates quick and direct interpretation of competing schemes and scenarios, and in doing so clearly communicates the evidence needed to make rational and important decisions. Finally, we stress that FEV should be used either prior to or in tandem with more detailed hydrodynamic numerical modelling; nonetheless, it offers a complementary way of classifying flood events and enables evidence-based decision-making for flood-mitigation assessment. For more information, including case studies in the UK and France, see [2,3,4], summarised in [5].

References:

[1] West Yorkshire Combined Authority 2016. Leeds City Region flood review report. December 2016. https://www.the-lep.com/media/2276/leeds-city-region-flood-review-report-final.pdf

[2] O. Bokhove, M. Kelmanson, T. Kent (2018a): On using flood-excess volume in flood mitigation, exemplified for the River Aire Boxing Day Flood of 2015. Subm. evidence-synthesis article: Proc. Roy. Soc. A. See also: https://eartharxiv.org/stc7r/

[3] O. Bokhove, M. Kelmanson, T. Kent, G. Piton, J.-M. Tacnet (2018b): Communicating nature-based solutions using flood-excess volume for three UK and French river floods. In prep. See also the preliminary version on: https://eartharxiv.org/87z6w/

[4] O. Bokhove, M. Kelmanson, T. Kent (2018c): Using flood-excess volume in flood mitigation to show that upscaling beaver dams for protection against extreme floods proves unrealistic. Subm. evidence-synthesis article: Proc. Roy. Soc. A. See also: https://eartharxiv.org/w9evx/

[5] ‘Using flood-excess volume to assess and communicate flood-mitigation schemes’, poster presentation for ‘Evidence-based decisions for UK Landscapes’, 17-18 September 2018, INI, Cambridge. Available here: http://www1.maths.leeds.ac.uk/~amttk/files/INI_sept2018.pdf

Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography

by Fabio L. R. Diniz (fabio.diniz@inpe.br)

I attended the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, also known as the Adjoint Workshop, which took place in Aveiro, Portugal between 1st and 6th July 2018. This opportunity was given to me through funding for early-career researchers from the Engineering and Physical Sciences Research Council (EPSRC) Data Assimilation for the Resilient City (DARE) project in the UK. All recipients of this funding who were participating in the workshop for the first time were invited to attend the pre-workshop day of tutorials, which presented the fundamentals of sensitivity analysis and data assimilation, geared towards early-career researchers. I would like to thank the EPSRC DARE award committee and the organizers of the Adjoint Workshop for finding me worthy of this award.

Currently I am a postgraduate student at the Brazilian National Institute for Space Research (INPE) and have been visiting the Global Modeling and Assimilation Office (GMAO) of the American National Aeronautics and Space Administration (NASA) for almost a year as part of my PhD, comparing two approaches to obtaining what is known as the observation impact measure. This measure is a direct application of sensitivity analysis in data assimilation and is basically a measure of how much each observation helps to improve short-range forecasts. In meteorology, and specifically in numerical weather prediction, these observations come from the global observing system, which includes in situ observations (e.g., radiosondes and surface observations) and remotely sensed observations (e.g., satellite sensors). During my visit I have been working under the supervision of Ricardo Todling of NASA/GMAO, comparing results from two strategies for assessing the impact of observations on forecasts using the data assimilation system available at NASA/GMAO: one based on the traditional adjoint technique, the other based on ensembles. Preliminary results from this comparison were presented during the Adjoint Workshop.
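
For readers unfamiliar with the observation impact measure, the sketch below illustrates the adjoint-style calculation for a toy linear system, in the spirit of the formulation of Langland and Baker (2004). The model, operators and error covariances are illustrative assumptions with no relation to the NASA/GMAO system; the point is that in this linear setting the per-observation impacts sum exactly to the change in forecast error brought about by assimilation.

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 4, 3                        # state and observation dimensions
M = 0.9 * np.eye(n)                # toy linear forecast model
H = rng.normal(size=(p, n))        # linear observation operator
B = 0.5 * np.eye(n)                # background-error covariance
R = 0.2 * np.eye(p)                # observation-error covariance
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman gain

x_t = rng.normal(size=n)                                   # synthetic truth
x_b = x_t + rng.multivariate_normal(np.zeros(n), B)        # background
y = H @ x_t + rng.multivariate_normal(np.zeros(p), R)      # observations

d = y - H @ x_b                    # innovations
x_a = x_b + K @ d                  # analysis

# Squared forecast errors starting from the background and the analysis
e_b = np.sum((M @ (x_b - x_t)) ** 2)
e_a = np.sum((M @ (x_a - x_t)) ** 2)

# Adjoint-style sensitivity of the error change to the observations
grad = K.T @ M.T @ M @ (x_a + x_b - 2 * x_t)
impact_per_ob = d * grad           # contribution of each observation
print("summed impacts:", impact_per_ob.sum(), " e_a - e_b:", e_a - e_b)
```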

The Adjoint Workshop provided a perfect environment for early-career researchers to interact with experts in the field from all around the world. Attending the workshop helped me engage in healthy discussions about my work and about data assimilation in general. The full programme with abstracts and presentations is available at the workshop web site: https://www.morgan.edu/adjoint_workshop

Thanks to everyone who contributed to this workshop.

Investigating alternative optimisation methods for variational data assimilation

by Maha Kaouri

Supported by the DARE project, a few others from the University of Reading and I recently attended the week-long workshop on sensitivity analysis and data assimilation in meteorology and oceanography (a.k.a. the Adjoint Workshop) in Aveiro, Portugal.

The week consisted of 60 talks on a variety of selected topic areas, including sensitivity analysis and general theoretical data assimilation. I presented the latest results from my PhD research in this topic area and discussed the benefits of using globally convergent methods in variational data assimilation (VarDA) problems. Variational data assimilation combines two sources of information: a mathematical model and real data (e.g. satellite observations).

The overall aim of my research is to investigate the latest mathematical advances in optimisation, to understand whether the solution of VarDA problems could be improved or obtained more efficiently through the use of alternative optimisation methods, whilst keeping computational cost and calculation time to a minimum. A possible application of the alternative methods would be to estimate the initial conditions for a weather forecast, where the dynamical equations include the physics of the Earth system. Weather forecasting has a short time window (the forecast is no longer useful after the weather event occurs), so it is important to investigate alternative methods that provide an optimal solution in the given time.

The VarDA problem is known in numerical optimisation as a nonlinear least-squares problem, which is solved using an iterative method: one that takes an initial guess of the solution and then generates a sequence of better guesses at each step of the algorithm. In VarDA the problem is solved as a series of (simpler) linear least-squares problems, using a method equivalent to the Gauss-Newton optimisation method. The Gauss-Newton method is not globally convergent, in the sense that it does not guarantee convergence to a stationary point from an arbitrary initial guess. This is the motivation for investigating newly developed, advanced numerical optimisation methods, such as globally convergent methods, which use safeguards to guarantee convergence from an arbitrary starting point. The use of such methods could enable us to improve the estimate of the initial conditions of a weather forecast within the limited time and computational cost available.
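
To make this concrete, here is a minimal Gauss-Newton sketch for a generic nonlinear least-squares problem; the residual function and data are placeholder assumptions, not a VarDA system. Note that there is no line search or trust region here: it is precisely this missing safeguard that the globally convergent variants supply.

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-8, max_iter=50):
    """Minimise 0.5*||r(x)||^2 by solving a linearised least-squares
    subproblem at each iteration (no globalisation safeguard)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        res, jac = r(x), J(x)
        # Linear least-squares subproblem: min_dx ||jac @ dx + res||^2
        dx, *_ = np.linalg.lstsq(jac, -res, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy residuals: fit y = a*exp(b*t) to synthetic "observations"
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)

def r(params):
    a, b = params
    return a * np.exp(b * t) - y

def J(params):
    a, b = params
    return np.column_stack((np.exp(b * t), a * t * np.exp(b * t)))

print(gauss_newton(r, J, x0=[1.0, 0.0]))   # approaches [2.0, -1.5]
```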

The conference brought together many key figures in weather forecasting as well as those new to the field, such as myself, providing us with the opportunity to learn from each other during the talks and the poster session. I had the advantage of presenting my talk on the first day, which allowed me to spend the rest of the week receiving feedback from attendees who were eager to discuss ideas and make suggestions for future work. The friendly atmosphere of the workshop made it easier, as an early-career researcher, to converse freely and comfortably with more senior colleagues during the breaks.

I would like to thank the DARE project for funding my attendance at the workshop and the organising committee for hosting such an insightful event.

Accounting for Unresolved Scales Error with the Schmidt-Kalman Filter at the Adjoint Workshop

by Zak Bell

This summer I was fortunate enough to receive funding from the DARE training fund to attend the 11th Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography. This workshop, also known as the Adjoint Workshop, provides academics and students with an occasion to present their research on the inclusion of Earth observations in mathematical models. Thanks to the friendly environment of the workshop, I had an excellent opportunity to condense a portion of my research into a poster and discuss it with the other attendees.

Data assimilation is essentially a way to link theoretical models of the world to the actual world. This is achieved by finding the most likely state of a model given observations of it. For numerical weather prediction, a state will typically comprise variables such as wind, moisture and temperature at a specific time. One way to assimilate observations is through the Kalman Filter. The Kalman Filter assimilates observations one at a time and, by taking into account the errors in our models, computations and observations, determines the most probable state of the model, which we can use to better model or forecast the real world.

It goes without saying that a better understanding of the errors involved in the observations leads to a better forecast. Research into observation errors is therefore a large and ongoing area of interest. My research is on the observation error due to unresolved scales in data assimilation, which can be broadly described as the difference between what an observation actually observes and a numerical model's representation of that observation. For example, an observation taken in a sheltered street of a city will have a different value from that of a numerical model of the city that is unable to represent the spatial scales of each individual street. To utilize such observations within data assimilation, the unresolved spatial scales must be accounted for in some way. The method I chose to present in my poster was the Schmidt-Kalman Filter, which was originally developed for navigation purposes but has since been the subject of a few studies within the meteorology community on unresolved-scales error.

The Schmidt-Kalman Filter accounts for the state- and time-dependence of the error due to unresolved scales through the use of the statistics of the unresolved scales. However, to save on computational expense, the unresolved state values themselves are not estimated. My poster presented a mathematical analysis of a simple example of the Schmidt-Kalman Filter and highlighted its ability to compensate for unresolved-scales error. As one would expect, the Schmidt-Kalman Filter performs better than a Kalman Filter for the resolved scales alone, but worse than a Kalman Filter that resolves all scales. Using the feedback from the other attendees and the ideas obtained from other presentations at the workshop, I will continue to investigate the properties of the Schmidt-Kalman Filter, as well as its suitability for urban weather prediction.
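
For the curious, here is a minimal sketch of a single Schmidt-Kalman analysis step for a state partitioned into a resolved part x (which is estimated) and an unresolved part b (whose mean is assumed zero and is never updated); all of the matrices are illustrative assumptions. The key feature is that the unresolved-scale statistics enter the innovation covariance and the gain, even though b itself is not estimated.

```python
import numpy as np

def schmidt_kalman_update(x, Pxx, Pxb, Pbb, y, Hx, Hb, R):
    """One Schmidt-Kalman analysis step.

    x   : estimate of the resolved state
    Pxx : resolved-state error covariance
    Pxb : cross-covariance between resolved and unresolved parts
    Pbb : unresolved-scale error covariance (never updated here)
    y   : observations, with y = Hx x + Hb b + noise of covariance R
    """
    # The innovation covariance includes the unresolved-scale statistics
    S = (Hx @ Pxx @ Hx.T + Hx @ Pxb @ Hb.T
         + Hb @ Pxb.T @ Hx.T + Hb @ Pbb @ Hb.T + R)
    K = (Pxx @ Hx.T + Pxb @ Hb.T) @ np.linalg.inv(S)  # gain for x only
    x_a = x + K @ (y - Hx @ x)         # b has zero mean, so no Hb term
    Pxx_a = Pxx - K @ (Hx @ Pxx + Hb @ Pxb.T)
    Pxb_a = Pxb - K @ (Hx @ Pxb + Hb @ Pbb)
    return x_a, Pxx_a, Pxb_a           # Pbb is deliberately left unchanged

# Toy numbers: two resolved variables, one unresolved, one observation
x = np.array([1.0, 0.5])
Pxx, Pxb, Pbb = np.eye(2), 0.1 * np.ones((2, 1)), np.array([[0.3]])
Hx, Hb, R = np.array([[1.0, 0.0]]), np.array([[1.0]]), np.array([[0.2]])
print(schmidt_kalman_update(x, Pxx, Pxb, Pbb, np.array([1.3]), Hx, Hb, R))
```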

Working with other scientists in Data Assimilation

by Luca Cantarello

Luca Cantarello is a PhD student at the University of Leeds. He received funding from the DARE training fund to attend the Data Assimilation tutorials at the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, 1-6 July 2018, Aveiro, Portugal. Here he writes about his experience.

Since I started my PhD project at the University of Leeds as a NERC DTP student a few months ago, I have been reflecting on the importance of not feeling too alone in doing science, exactly as in everyday life. The risk of feeling isolated while doing research can apply to all PhD students, but it may be particularly relevant in cases like mine, as very few people at my university work on data assimilation.

In this sense, joining last week's 11th Adjoint Workshop on sensitivity analysis and data assimilation in meteorology and oceanography in Aveiro was an excellent opportunity, and I am very grateful to the University of Reading and the DARE project, whose funding enabled me to take part.

In Aveiro I enjoyed the company and the support of a vast community of scientists, all willing to share their findings and discuss problems and needs with their peers. There was an impressive synergy in the room among the many researchers who had attended the same workshop several times in the past, even though it is held only every second or third year.

The photograph shows the hotel where the Adjoint Workshop was held.

The workshop was an important training opportunity for me, as I am still in the process of learning, but it was also an occasion to renew my motivation with new stimuli and ideas before getting to the heart of my PhD in the coming two years.

During the poster session I took part in, I received useful feedback and comments on my project (supervised by Onno Bokhove and Steve Tobias at the University of Leeds and by Gordon Inverarity at the Met Office), in which I am trying to understand how satellite observations at different spatial scales impact a data assimilation scheme. I will bring back to Leeds all the hints and suggestions I have collected, hoping to attend the next Adjoint Workshop in a few years' time and to be able to tell people about the progress I have achieved in the meantime.