I attended the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, also known as the Adjoint Workshop, which took place in Aveiro, Portugal between 1st and 6th July 2018. This opportunity was given to me thanks to funding for early career researchers from the Engineering and Physical Sciences Research Council (EPSRC) Data Assimilation for the Resilient City (DARE) project in the UK. All recipients of this fund who were participating in the workshop for the first time were invited to attend the pre-workshop day of tutorials, which presented sensitivity analysis and data assimilation fundamentals geared to early career researchers. I would like to thank the EPSRC DARE award committee and the organizers of the Adjoint Workshop for finding me worthy of this award.
I am currently a postgraduate student at the Brazilian National Institute for Space Research (INPE) and have been visiting the Global Modeling and Assimilation Office (GMAO) of the US National Aeronautics and Space Administration (NASA) for almost a year as part of my PhD, comparing two approaches to obtaining what is known as the observation impact measure. This measure is a direct application of sensitivity analysis in data assimilation and is essentially a measure of how much each observation helps to improve short-range forecasts. In meteorology, specifically in numerical weather prediction, these observations come from the global observing system, which includes in situ observations (e.g., radiosondes and surface observations) and remotely sensed observations (e.g., satellite sensors). During my visit, I have been working under the supervision of Ricardo Todling of NASA/GMAO, comparing results from two strategies for assessing the impact of observations on forecasts using the data assimilation system available at NASA/GMAO: one based on the traditional adjoint technique, the other based on ensembles. Preliminary results from this comparison were presented during the Adjoint Workshop.
The Adjoint Workshop provided a perfect environment for early career researchers to interact with experts in the field from all around the world. Attending the workshop helped me engage in healthy discussions about my work and about data assimilation in general. The full programme with abstracts and presentations is available at the workshop web site: https://www.morgan.edu/adjoint_workshop
Thanks to everyone who contributed to this workshop.
Supported by the DARE project, I and a few others from the University of Reading recently attended the weeklong workshop on sensitivity analysis and data assimilation in meteorology and oceanography (a.k.a. the Adjoint workshop) in Aveiro, Portugal.
The week consisted of 60 talks on a variety of selected topic areas including sensitivity analysis and general theoretical data assimilation. I presented the latest results from my PhD research in this topic area and discussed the benefits of using globally convergent methods in variational data assimilation (VarDA) problems. Variational data assimilation combines two sources of information, a mathematical model and real data (e.g. satellite observations).
The overall aim of my research is to investigate the latest mathematical advances in optimisation to understand whether the solution of VarDA problems could be improved or obtained more efficiently through the use of alternative optimisation methods, whilst keeping computational cost and calculation time to a minimum. A possible application of the alternative methods would be to estimate the initial conditions for a weather forecast where the dynamical equations in this case include the physics of the earth system. Weather forecasting has a short time window (the forecast will no longer be useful after the weather event occurs) and so it is important to investigate alternative methods that provide an optimal solution in the given time.
The VarDA problem is known in numerical optimisation as a nonlinear least-squares problem which is solved using an iterative method – a method which takes an initial guess of the solution and then generates a sequence of better guesses at each step of the algorithm. The problem is solved in VarDA as a series of linear least-squares (simpler) problems using a method equivalent to the Gauss-Newton optimisation method. The Gauss-Newton method is not globally convergent in the sense that the method does not guarantee convergence to a stationary point given any initial guess. This is the motivation behind the investigation of newly developed, advanced numerical optimisation methods such as globally convergent methods which use safeguards to guarantee convergence from an arbitrary starting point. The use of such methods could enable us to obtain an improvement on the estimate of the initial conditions of a weather forecast within the limited time and computational cost available.
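The Gauss-Newton structure described above can be sketched on a toy problem. The Python below is a minimal, hypothetical illustration (a one-parameter curve fit with synthetic data, not an operational VarDA formulation): each iteration linearises the residual and solves a simpler, linear least-squares problem, exactly the pattern used in incremental variational data assimilation.

```python
import numpy as np

# Minimal Gauss-Newton sketch on a toy nonlinear least-squares problem:
# fit the parameter a in the model y = exp(a * x). Illustrative only.

def residual(a, x, y):
    return y - np.exp(a * x)

def jacobian(a, x):
    # derivative of the residual with respect to a, one row per data point
    return (-x * np.exp(a * x)).reshape(-1, 1)

def gauss_newton(a0, x, y, n_iter=20):
    a = np.array([a0], dtype=float)
    for _ in range(n_iter):
        r = residual(a, x, y)
        J = jacobian(a, x)
        # each step solves a linear (simpler) least-squares problem
        da, *_ = np.linalg.lstsq(J, -r, rcond=None)
        a = a + da
    return float(a[0])

x = np.linspace(0.0, 1.0, 50)
y = np.exp(0.5 * x)                 # synthetic data with true a = 0.5
print(gauss_newton(0.0, x, y))      # converges to roughly 0.5
```

Note that convergence here relies on a reasonable starting guess; the globally convergent methods discussed above add safeguards (e.g. line searches or trust regions) so that convergence is guaranteed from an arbitrary starting point.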
The conference brought together many key figures in weather forecasting as well as those new to the field such as myself, providing us with the opportunity to learn from each other during the talks and poster session. I had the advantage of presenting my talk on the first day, allowing me to spend the rest of the week receiving feedback from the attendees who were eager to discuss ideas and make suggestions for future work. The friendly atmosphere of the workshop made it easier as an early-career researcher to freely and comfortably converse with those more senior during the breaks.
I would like to thank the DARE project for funding my attendance at the workshop and the organising committee for hosting such an insightful event.
Luca Cantarello is a PhD student at the University of Leeds. He received funding from the DARE training fund to attend the Data Assimilation tutorials at the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, 1-6 July 2018, Aveiro, Portugal. Here he writes about his experience.
Since I started my PhD project at the University of Leeds as a NERC DTP student a few months ago, I have been reflecting on the importance of not feeling too alone in doing science, just as in everyday life. The risk of feeling isolated while doing research applies to all PhD students, but it may be particularly relevant in cases like mine, as very few people at my university work on Data Assimilation.
In this sense, joining last week's 11th Adjoint Workshop on sensitivity analysis and Data Assimilation in Meteorology and Oceanography in Aveiro was an excellent opportunity, and I am very grateful to the University of Reading and the DARE project, whose funding enabled me to attend.
In Aveiro I enjoyed the company and support of a vast community of scientists, all willing to share their findings and discuss problems and needs with their peers. There was an impressive synergy in the room among the many researchers who had attended the same workshop several times in the past, even though it is held only every second or third year.
The photograph is of the hotel where the adjoint workshop was held.
The workshop was an important training opportunity for me, as I am still in the process of learning, but also an occasion to revive my motivation with new stimuli and ideas before getting to the heart of my PhD in the coming two years.
During the poster session I took part in, I received useful feedback and comments about my project (supervised by Onno Bokhove and Steve Tobias at the University of Leeds and by Gordon Inverarity at the Met Office), in which I am trying to understand how satellite observations at different spatial scales affect a Data Assimilation scheme. I will bring back to Leeds all the hints and suggestions I have collected, hoping to attend the next Adjoint Workshop in a few years and to be able to tell people about the progress I have achieved in the meantime.
Jemima M. Tabeart is a PhD student at the University of Reading in the Mathematics of Planet Earth Centre for Doctoral Training. She received funding from the DARE training fund to attend the Data Assimilation tutorials at the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, 1-6 July 2018, Aveiro, Portugal. Here she writes about her research work.
In order to produce the best weather forecast possible, we want to make use of all available sources of information. This means combining observations of the world around us at the current time with a computer model that can fill in the gaps where we have no observations, by using known laws of physics to evolve observations from the past. This combination process is called data assimilation, and our two data sources (the model and observations) are weighted by our confidence in how accurate they are. This means that knowledge about errors in our observations is really important for getting good weather forecasts. This is especially true where we expect errors between different observations to be related, or correlated.
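As a toy illustration of this weighting, consider a single scalar quantity, say the temperature at one location. A minimal Python sketch (the numbers are invented for illustration) combines the model's background value and an observation in inverse proportion to their error variances:

```python
def analysis(background, obs, var_b, var_o):
    """Combine a model background value and an observation, weighting
    each by our confidence in it (smaller error variance = more weight)."""
    gain = var_b / (var_b + var_o)
    return background + gain * (obs - background)

# Model says 10.0 degrees, observation says 12.0 degrees.
# Equal confidence in both: the analysis splits the difference.
print(analysis(10.0, 12.0, var_b=1.0, var_o=1.0))   # 11.0

# A much more accurate observation pulls the analysis towards it.
print(analysis(10.0, 12.0, var_b=1.0, var_o=0.25))  # about 11.6
```

In a real system the scalars become large vectors and the variances become covariance matrices, which is why knowledge of error correlations between observations matters so much.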
Caption: An image of the satellite MetOp-B which hosts IASI (Infrared Atmospheric Sounding Interferometer) – an instrument that I have been using as an example to test new mathematical techniques to allow correlated errors to be used inexpensively in the Met Office system. Credit: ESA AOES Medialab MetOp-B image.
Why do such errors occur? No observation is perfect: there might be biases (e.g. a thermometer that measures everything 0.5℃ too hot); we might not be measuring the variables used in the numerical model, and converting observations introduces an error (as is the case with satellite observations); and we might be using high-density observations that can detect phenomena our model cannot (e.g. intense localised rainstorms might not show up if our model can only represent features larger than 5 km). Including these observation error correlations means we can use observation data more intelligently and even extract extra information, leading to improvements in forecasts.
However, these observation error correlations cannot be calculated directly – we instead have to estimate them. Including these estimates in our computations is very expensive, so we need to find ways of including this useful error information in a way that is cheap enough to produce new forecasts every 6 hours! I research mathematical techniques to adapt error information estimates for use in real-world systems.
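One family of techniques, given here as a hypothetical sketch rather than the actual operational implementation, "reconditions" an estimated covariance matrix: eigenvalues that are too small (which make the computations slow and unstable) are raised to a floor chosen from a target condition number, while the overall correlation structure is retained.

```python
import numpy as np

def recondition(C, target_cond):
    """Minimum-eigenvalue reconditioning of a covariance matrix:
    raise the smallest eigenvalues so that the ratio of largest to
    smallest eigenvalue does not exceed target_cond."""
    vals, vecs = np.linalg.eigh(C)      # C is symmetric, so eigh applies
    floor = vals.max() / target_cond
    vals = np.maximum(vals, floor)      # lift only the tiny eigenvalues
    return vecs @ np.diag(vals) @ vecs.T

# An ill-conditioned 2x2 example: two strongly correlated errors.
C = np.array([[1.0, 0.999],
              [0.999, 1.0]])            # condition number ~ 2000
C_new = recondition(C, target_cond=100.0)
print(np.linalg.cond(C_new))            # now about 100
```

The difficulty, as discussed below, is that changing the matrix in this way alters other parts of the system too, and without "true" values it is hard to say whether those changes are good, bad or just different.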
Caption: Error correlation information for IASI instrument. Dark colours indicate stronger relationships between errors for different channels of the instrument – often strong relationships occur between variables that measure similar things. We want to keep this structure, but change the values in a way that makes sure our computer system still runs quickly.
At the workshop I’ll be presenting new work that tests some of these methods using the Met Office system. Although we can improve the time required for our computations, using different error correlation information alters other parts of the system too! As we don’t know “true” values, it’s hard to know whether these changes are good, bad or just different. I’m looking forward to talking with scientists from other organisations who understand this data and can provide insight into what these differences mean. Additionally, as these methods are already being used to produce forecasts at meteorological centres internationally, discussions about the decision process and impact of different methods are bound to be illuminating!
Takuya Kurihana has received funding from the DARE training fund to attend Data Assimilation tutorials at the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, 1-6 July 2018, Aveiro, Portugal. Here he writes about himself and his research.
What if we could more accurately predict what atmospheric phenomena will happen in the next minute, hour and day using the limited information currently available? This scientific question has inspired me to be involved in research since my undergraduate studies. I am Takuya Kurihana, a Meteorology MS student at the University of Tsukuba under the supervision of Dr. Hiroshi L. Tanaka, and an incoming Computer Science PhD student at the University of Chicago. My current research focuses on 1. how to improve the accuracy of weather forecasting ("predictability"), and 2. how to make use of a massive amount of dense meteorological data for data assimilation. In developing a new application for the second goal, I am now researching the impact of using as many atmospheric observations as possible on daily-scale weather forecasts.
Regarding the improvement of predictability, as explained in a previous article by Zak Bell, Making the Most of Uncertain Urban Observations, data assimilation plays an imperative role in numerical weather prediction because the longer we run a numerical weather forecasting model, the larger the forecast error grows. This is because of uncertainty in the initial conditions: even if we use the most precise model available, this tendency would not change much. However, applying data assimilation methods can minimise the error by incorporating observations into the optimisation process; Fig. 1 shows an example experiment demonstrating the advantage of data assimilation. We therefore have to gather a variety of denser observation data from a horizontally and vertically wider range of points in real operations. Besides land observations (Figure 2), sondes, and buoys, recent satellite observations (Figure 3), which provide much richer and denser information, have been utilised in operational data assimilation processes.
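The kind of experiment shown in Fig. 1 can be reproduced with a minimal sketch. The Python below is my own hypothetical illustration using a crude nudging scheme rather than a full data assimilation method: a "truth" Lorenz-63 trajectory, a free forecast started from slightly wrong initial conditions, and a forecast that is repeatedly pulled towards noisy observations of x. The free run diverges, while the nudged run tracks the truth.

```python
import numpy as np

# Lorenz-63 toy system: a free forecast from slightly wrong initial
# conditions diverges from the truth, while crudely "assimilating"
# noisy observations of x keeps the forecast close. Illustrative
# nudging sketch, not an operational assimilation scheme.

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(s, dt=0.01):
    # fourth-order Runge-Kutta time step
    k1 = lorenz63(s)
    k2 = lorenz63(s + 0.5 * dt * k1)
    k3 = lorenz63(s + 0.5 * dt * k2)
    k4 = lorenz63(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
truth = np.array([1.0, 1.0, 1.0])
free = truth + 1e-3        # forecast without assimilation
nudged = truth + 1e-3      # forecast nudged towards observations

free_err, nudged_err = [], []
for t in range(2000):
    truth, free, nudged = step(truth), step(free), step(nudged)
    obs = truth[0] + rng.normal(0.0, 0.1)    # noisy observation of x only
    nudged[0] += 0.2 * (obs - nudged[0])     # pull forecast towards obs
    if t >= 1500:                            # measure errors at the end
        free_err.append(np.linalg.norm(free - truth))
        nudged_err.append(np.linalg.norm(nudged - truth))

print(np.mean(free_err), np.mean(nudged_err))  # free error is far larger
```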
The spatially condensed satellite data, however, cause a problem for current data assimilation methods: previous research has shown that overly dense data can actually degrade the quality of assimilation products. Simply put, we have to leave out a large proportion of these data ("thinning"), even while the technology of meteorological satellites is advancing. Moreover, there are resource limitations on preparing the forecast, since we cannot afford to compute endlessly and the performance and size of computers are constrained. In order to use a larger proportion of these data without reducing assimilation quality, a spatial interpolation procedure known as super-observation (SO) has been developed. As one SO system, I have proposed a new algorithm that can deal with a massive amount of satellite data efficiently and quickly within a cloud-resolving model (Nonhydrostatic ICosahedral Atmospheric Model; NICAM) grid coordinate system. The algorithm primarily aims to reduce the "do/for loop" iterations needed to find the nearest model grid location, which also allows us to skip computations involving a complex observation operator.
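As a rough sketch of the idea (my own illustrative Python, not the actual NICAM implementation, which uses an icosahedral grid), a super-observation step can replace the per-observation nearest-grid-point search loop with vectorised bin arithmetic, averaging all observations that fall in the same grid cell:

```python
import numpy as np

# Illustrative super-observation (SO) sketch: map each observation to a
# regular lat-lon grid cell arithmetically (no per-observation search
# loop) and average the observations within each cell. The grid spacing
# and coordinates are simplified for illustration.

def super_obs(lats, lons, values, dlat=1.0, dlon=1.0):
    i = np.floor((lats + 90.0) / dlat).astype(int)     # latitude bin
    j = np.floor((lons + 180.0) / dlon).astype(int)    # longitude bin
    cell = i * int(round(360.0 / dlon)) + j            # flat cell index
    order = np.argsort(cell)
    cell, vals = cell[order], np.asarray(values, float)[order]
    uniq, start = np.unique(cell, return_index=True)
    sums = np.add.reduceat(vals, start)                # per-cell sums
    counts = np.diff(np.append(start, len(vals)))      # per-cell counts
    return uniq, sums / counts                         # one SO per cell

lats = np.array([0.1, 0.2, 10.5])
lons = np.array([0.1, 0.2, 20.5])
obs = np.array([1.0, 3.0, 5.0])
cells, so = super_obs(lats, lons, obs)
print(so)   # first two observations share a cell: [2.0, 5.0]
```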
Which is better, thinning or SO? Although this may be a controversial question among meteorologists, I would like to present one example at the Workshop on Sensitivity Analysis and Data Assimilation in Portugal. While the new application needs to be tested in further numerical experiments during my master's research project, I believe we should consider a more efficient usage of this meteorological "Big Data" in the near future. Through attending the workshop, I would like to discuss my application and its effect on data assimilation, as well as receive fruitful advice from cutting-edge researchers.
Figure 1. These time series of Lorenz-63 trajectories show how a small difference between two initial conditions eventually leads to completely different behaviour. Blue shows no data assimilation from 200 time steps, and red shows the trajectory with data assimilation. Demo by Takuya Kurihana.
Figure 2. This map shows the sparse distribution of land observation points.
Figure 3. This map displays the dense information distribution provided by satellite observation. The upper image shows the observation distribution of a polar-orbiting satellite; the lower one, that of a geostationary satellite.
The DARE team went on a field trip last month! It was a well planned and executed trip – as you would expect from a group of mathematicians. It was also a very interesting trip for us, since most of us have only ever used data (e.g. for improving forecasts), not collected it. Even better, the Tewkesbury area has become something of a benchmark for testing new data assimilation methods, ideas, tools, observations, etc., and so many of us have worked with the LisFlood numerical model (developed by a team led by Prof. Paul Bates at the University of Bristol) over the Tewkesbury domain. We have seen the rivers run in the model outputs, watched the rivers Avon and Severn go out of their banks in our plots, and investigated various SAR images of the area, but we had never been there. We generally do not need to visit an area when working with the models; however, now that there was a chance to do so, it was no surprise that many of us were keen to go. And we did go like 'd' A-team:
However, we had a more important reason for visiting too – we were going to the Tewkesbury area to collect metadata from a number of river cameras located near Tewkesbury town. These river cameras are high-definition webcams owned and serviced by Farson Digital Ltd at various locations around the UK. We had recently discovered that six such cameras are within the LisFlood model domain and had captured the November 2012 floods in the area. With permission from Farson Digital Ltd, we obtained hourly daylight images of the floods from 21st November 2012 to 5th December 2012. Hence, the aim of our trip was to obtain accurate (with errors of no more than a few centimetres) positional information (i.e. latitude, longitude, height) for the cameras themselves, as well as for a number of markers in the images of each camera. We need this information to extract water extents and water depths from these images as accurately as possible using image processing tools (which we are currently working on).
To take these measurements we borrowed some tools from the Department of Geography at the University of Reading. We used a differential GPS tool (GNSS) to measure very accurately (to the order of a few centimetres) the position of a given point in 3D space, that is, its latitude, longitude, and height above sea level. However, it had to be used on the ground (so it could not measure remote or high points, such as the building corners where some cameras were mounted) and not too close to buildings or large trees. To measure remote and high points we used a total station, which allowed us to shoot a laser beam at the desired point to measure its 3D position in space.
We had planned to visit all six cameras within the space of two days, the 16th and 17th of April. However, despite our best plans and fantastic organisation skills, we were too ambitious with our time and had to drop the camera furthest from our base – the Bewdley camera (see the map with camera positions in figure 2). Thus, on our first day, we took measurements at the Wyre Piddle, Evesham, and Digglis Lock cameras, spotting ourselves live on the Farson Digital Ltd site.
We returned to our base – the Tewkesbury Park Hotel – to be joined by the Ensemble team from Lancaster University. The Ensemble project is led by Prof. Gordon Blair and, like DARE, is funded by the EPSRC Senior Fellowship in Digital Technology for Living with Environmental Change. It was very interesting to meet the Ensemble project team and learn in more depth about their work, future interests, and the scope for collaboration.
On our second day, the DARE team visited the Tewkesbury camera while the Ensemble team learned more about the purpose of the data collection and the November 2012 floods in the area. Then we all jointly measured a large number of points at Strensham Lock. In 2012 we would all have been totally submerged in water in this picture, since the flood waters completely swallowed the island on which the house is standing, flooding the building along with it.
Our grand finale was a meeting with the director of Farson Digital Ltd, Glyn Howells, as well as a number of the stakeholders who had commissioned the cameras we visited. It was very interesting for us to learn how the network of river cameras was born from the need of a variety of river users – fishermen, campers, boaters, etc. – to know and understand the current state of the river. We also learned how these cameras have become invaluable assets to many stakeholders for various reasons – greatly reducing the number of river-condition-related phone enquiries, monitoring river bank and bridge conditions, and so on.
Now, a month later, we have downloaded and processed the data we collected at these stations. In figure 7 we have plotted the data points we took at the Tewkesbury site, owned by both the Environment Agency and Tewkesbury Marina (both of which we greatly thank for their support and assistance before and during our trip, especially Steve Edgar from the EA and Simon Amos and Bruno from Tewkesbury Marina). In the figure, the red dots are the pre-2016 and current camera positions, and the black dots are all the other measurements we took using both the total station and GNSS tools, plotted against the Environment Agency lidar data with 1 m horizontal resolution.
We are currently working on extracting the water extent from these images, which we will then use to produce water depth observations. Our final aim is to see how much forecast improvement such a rich source of observations offers, in particular before the rising limb of the flood.
We are very thankful to Glyn Howells and the various stakeholders for permitting us to use the images, allowing us to take the necessary measurements, assisting us on site, and joining us at the workshop!
This year started by attending the final Maths Foresees general assembly, which showcased the diverse research and outreach activities funded by the network since its launch. The assembly took place in Leeds between the 8th and 10th of January 2018 and also included updates from the Environmental Modelling in Industry study group held in 2017. Nearly a year ago now, I also took part in this study group and joined the challenge posed by SWECO (presented by James Franklin) on hydraulic modelling of collection networks for civil engineering.
It was Gavin Ester (UCL), our group leader (seen writing in the figure above), who gave the update at the assembly on the findings of our group in his presentation "Hydraulic modelling of collection networks for civil engineering". You can also read my original blog article about the challenge, "Sewer network challenge at MathsForesees study group 2017".
The three days of the final assembly were full of interesting talks (many of which you can find on the event page), with a number of breakout groups each day discussing issues in flood control, urban meteorology, and future funding strategies. Dr Sarah Dance and I from the DARE team attended the general assembly and gave a joint presentation about the use of data assimilation in urban environments, from understanding observation errors to improving flood forecasts, including a call for pilot projects. You can find our presentations here and here.
Over the course of these three days we saw many interesting presentations on flood forecasting, decision making using uncertain forecasts, theory development of dune formation, multi-scale modelling for urban weather, modelling wave dynamics, and much more. Sara Lombardo (Loughborough University) presented an overview of her findings on 'Outreach project: Giant waves in the ocean: from sea monsters to science', which generated a lively discussion among most of the participants. Through her outreach work, Sara uncovered the importance of engaging school children in scientific subjects right from the early years, while they are in primary school, to keep children's interest in science alive throughout their school years; this avoids the majority of children concluding, by the beginning of secondary school, that they are not good enough to do mathematics or other STEM subjects.
The discussion that followed Sara's presentation highlighted the importance of developing and using outreach tools in schools and local communities to bridge the gap between academics and the public, allowing the general public to experience the science. One such outreach tool is the flood demonstrator Wetropolis, developed by Prof. Onno Bokhove (University of Leeds), a new version of which was also showcased at the final general assembly; see the Tweet below by Dr A. Chen.
The Maths Foresees network was established in May 2015 under the EPSRC Living with Environmental Change (LWEC) umbrella to forge strong links between researchers in the applied mathematics and environmental science communities and end-users of environmental research. At the final assembly it was evident that such links are very valuable for academia and industry alike. Much more needs to be done to allow such collaborations to flourish; as Andy Moores from the Environment Agency put it in his presentation "A view from an EA Research Perspective", there needs to be a restaurant, a nourishing environment, for a relationship to blossom and be sustained.
Through the energetic discussion that followed Andy Moores' talk, it was obvious that everyone present had benefited from taking part in the Maths Foresees network. The network has provided very fruitful ground where academia and industry can meet to discuss their problems and exchange ideas, allowing both sides to take advantage of each other's experience, knowledge, and tools to solve real-world problems. It was felt very strongly that networks such as Maths Foresees, which provide this nourishing middle ground, are necessary to sustain and further collaborations between academia, industry, and the local community.
The DARE team organised a workshop on data science for high impact weather and flood prediction, held by the river at the lovely University of Reading Greenlands Campus in Henley-on-Thames, 20-22 Nov 2017. The workshop objectives were to enable discussion and exchange of expertise at the boundary between digital technology, data science and environmental hazard modelling, including
Data assimilation and data science for flood forecasting and risk planning
Data assimilation and data science for high impact weather forecasting
Smart decision making using environmental data
The meeting was attended by over 30 participants from 5 different countries. We had some great presentations (to be made available on this webpage) and discussions. We came up with some recommendations to help promote and deliver research and business applications in the digital technology-environmental hazard area. We plan to write a meeting report detailing these recommendations, which we hope will be published in a peer-reviewed international journal. Watch this space!
This trilateral workshop between the UK, Uruguay, and Brazil brought together researchers from a wide range of disciplines including mathematicians, computer scientists, meteorologists, statisticians, biologists, geoscientists, oceanographers and physicists. Most attendees were early career researchers, and much of the workshop was spent in working groups with the aim of discussing future research topics concerning adaptation, mitigation and resilience in the context of climate change. Each group created a set of open problems suitable for future international collaborative projects on the timescale of 2-4 years.
Our working group title was Modeling and Computer Architecture. This broad topic allowed both discussions of general areas of interest, such as model reduction and accounting for correlated observation and model errors within a data assimilation framework, and more specific problems relating to landslides and hydrology. These discussions were narrowed down into multiple project proposals, with open questions providing concrete suggestions for avenues of interest which will hopefully be taken forward.
As well as having the opportunity to discuss important research questions, finding out more about the specific issues faced in Brazil was extremely interesting. Our Brazilian hosts were very generous with their time, telling us all about the use of renewables, the impact of flooding in terms of disease and how landslide prediction needs are different to those in the USA and Europe. As conferences and papers can sometimes feel very Northern hemisphere dominated, it was valuable to get a perspective on NWP and climate challenges in the Southern hemisphere.
After the conference, a few of the UK party were lucky enough to spend a few days on holiday in Rio. This was a great way to round off our first trip to the Southern hemisphere and a week of great scientific and cultural discussion.
In the middle of September scientists from all round the world converged on a holiday resort in Florianopolis, Brazil for the Seventh World Meteorological Organization Symposium on Data Assimilation. This symposium takes place roughly every four years and brings together data assimilation scientists from operational weather and ocean forecasting centres, research institutes and universities. With 75 talks and four poster sessions, there was a lot of science to fit in to the four and a half days spent there.
The first day began with presentations of current plans by various operational centres, and both similarities and differences became apparent. It is clear that many centres are moving towards data assimilation schemes that are a mixture of variational and ensemble methods, but the best way of doing this is far from certain. This was apparent from just the first two talks, in which the Met Office and Meteo-France set out their different strategies for the next few years. For anyone who thought that science always provides clear-cut answers, here was an example of where the jury is still out! Many other talks covered similar topics, including the best way to get information from small samples of ensemble forecasts in large systems.
In a short blog post such as this, it is impossible to discuss the wide range of topics that were discussed in the rest of the week, ranging from theoretical aspects of data assimilation to practical implementations. Subjects included challenges for data assimilation at convective scales in the atmosphere, ocean data assimilation, assimilation of new observation types (including winds from radar observations of insects, lightning and radiances from new satellite instruments) and measuring the impact of observations. Several talks proposed development of new, advanced data assimilation methods – particle filters, Gaussian filtering and a hierarchical Bayes filter were all covered. Of particular interest was a presentation on data assimilation using neural networks, which achieved comparable results to an ensemble Kalman filter at a small fraction of the computational cost. This led to a long discussion at the end of the day as to whether neural networks may be a way forward for data assimilation. The final session on the last day covered a variety of different applications of data assimilation, including assimilation of soil moisture, atmospheric composition measurements and volcanic ash concentration, as well as application to coupled atmosphere-ocean models and to carbon flux inversion.
Outside the scientific programme the coffee breaks (with mountains of Brazilian cheese bread provided!) and the social events, such as the caipirinha tasting evening and the conference dinner, as well as the fact of having all meals together, provided ample opportunity for discussion with old acquaintances and new. I came home excited about the wide range of work being done on data assimilation throughout the world and enthusiastic to continue tackling some of the challenges in our research in Reading.
The full programme with abstracts is available at the conference web site, where presentation slides will also be eventually uploaded: