wCROWN: Workshop on Crowdsourced data in Numerical Weather Prediction

by Sarah Dance

On 4-5 December 2018, the Danish Meteorological Institute (DMI) hosted a workshop on crowdsourced data in numerical weather prediction (NWP), attended by Joanne Waller and Sarah Dance from the DARE project. DMI hosted the workshop with two aims: 1) to gather experts on crowdsourced data for NWP and start a network of people working on the subject, and 2) to produce a white paper directing the research community towards best practices and guidelines on the subject.

Presenters from the University of Washington (Seattle), the University of Reading and several operational weather centres, including the Met Office (UK), the German Weather Service (DWD), Meteo France, ECMWF, KNMI and EUMETNET, gave status reports on their research into using crowdsourced data, opportunistic data and citizen science. We discussed the issues arising in the use of such data and agreed to write a workshop report together to feed into EUMETNET activities. We also enjoyed a fascinating tour of the DMI operational forecasting centre.

Machine learning and data assimilation

by Rossella Arcucci

Imagine a world where it is possible to accurately predict the weather, climate, storms, tsunamis and other computationally intensive problems in real time from your laptop or even your mobile phone – and, with access to a supercomputer, to predict at unprecedented scale and detail. This is the long-term aim of our work on data assimilation with machine learning at the Data Science Institute (Imperial College London, UK), and we believe it will be a key component of future numerical forecasting systems.

We have shown that integrating machine learning with data assimilation can increase the reliability of prediction, reducing errors by including information with real physical meaning from observed data. The resulting cohesion of machine learning and data assimilation is then blended into a future generation of fast and more accurate predictive models. The integration is based on the idea of using machine learning to learn from the past experience of an assimilation process, following the principles of the Bayesian approach.
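As a minimal illustration of this idea (a toy sketch under simplifying assumptions, not our actual system), one can run a simple assimilation cycle on the Lorenz-63 model, record how the assimilation corrected each background forecast, and then fit a regression – standing in for the neural network – that learns those corrections from past experience:

```python
import numpy as np

# Toy sketch: Lorenz-63 as the "truth", a scalar-gain analysis as the DA step,
# and a linear least-squares fit standing in for the neural network that
# learns the past behaviour of the assimilation.

def lorenz63(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 model."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

rng = np.random.default_rng(0)
R = 1.0          # observation error variance (all three variables observed)
B = 2.0          # assumed background error variance
K = B / (B + R)  # scalar Kalman-type gain for this toy set-up

# 1) Run a DA cycle and record (background, innovation) -> analysis increment pairs.
truth = np.array([1.0, 1.0, 1.0])
background = truth + rng.normal(0, 1, 3)
features, increments = [], []
for _ in range(2000):
    truth = lorenz63(truth)
    background = lorenz63(background)
    obs = truth + rng.normal(0, np.sqrt(R), 3)
    innovation = obs - background
    increment = K * innovation           # the "DA experience" to be learned
    features.append(np.concatenate([background, innovation]))
    increments.append(increment)
    background = background + increment  # analysis becomes the next background

# 2) "Machine learning" step: fit a map from (background, innovation) to the
#    analysis increment, i.e. learn the past behaviour of the assimilation.
X = np.array(features)
Y = np.array(increments)
coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)

# 3) At prediction time the learned map can replace the explicit DA solve.
def learned_correction(background, innovation):
    return np.concatenate([background, innovation]) @ coeffs
```

In this toy setting the learned map simply reproduces the behaviour of the assimilation step; in our work the same principle is applied with neural networks and full data assimilation schemes.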

Edward Norton Lorenz stated that “small causes can have larger effects”, the so-called butterfly effect. Imagine a world where it is possible to catch those “small causes” in real time and to predict their effects in real time as well. To know, in order to act! A world where science works by continuously learning from observations.

Figure 1. Comparison of the Lorenz system trajectories obtained using data assimilation (DA) and using the integration of machine learning with data assimilation (DA+NN).

Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography

by Fabio L. R. Diniz    fabio.diniz@inpe.br

I attended the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, also known as the Adjoint Workshop, which took place in Aveiro, Portugal, between 1 and 6 July 2018. This opportunity was given to me thanks to funding for early career researchers from the Engineering and Physical Sciences Research Council (EPSRC) Data Assimilation for the Resilient City (DARE) project in the UK. All recipients of this funding who were participating in the workshop for the first time were invited to attend the pre-workshop day of tutorials, presenting sensitivity analysis and data assimilation fundamentals geared towards early career researchers. I would like to thank the EPSRC DARE award committee and the organizers of the Adjoint Workshop for finding me worthy of this award.

Currently I am a postgraduate student at the Brazilian National Institute for Space Research (INPE) and have been visiting the Global Modeling and Assimilation Office (GMAO) of the US National Aeronautics and Space Administration (NASA) for almost a year as part of my PhD, comparing two approaches to obtaining what is known as the observation impact measure. This measure is a direct application of sensitivity in data assimilation and is essentially a measure of how much each observation helps to improve short-range forecasts. In meteorology, specifically in numerical weather prediction, these observations come from the global observing system, which includes a number of in situ observations (e.g., radiosondes and surface observations) and remotely sensed observations (e.g., satellite sensors). During my visit I have been working under the supervision of Ricardo Todling of NASA/GMAO, comparing results from two strategies for assessing the impact of observations on forecasts using the data assimilation system available at NASA/GMAO: one based on the traditional adjoint technique, the other based on ensembles. Preliminary results from this comparison were presented during the Adjoint Workshop.
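For readers unfamiliar with the adjoint approach, a commonly cited way of writing the adjoint-based observation impact (in the spirit of Langland and Baker, 2004; the notation here is generic, not specific to the GMAO system) is sketched below:

```latex
% Forecast error measured in a chosen norm C (e.g. a dry energy norm):
%   e(x_f) = (x_f - x_t)^T C (x_f - x_t)
% Adjoint-based estimate of the change in forecast error due to assimilating
% the observations y, with innovation d = y - H(x_b), gain matrix K and
% adjoint of the forecast model M^T; x_f^a and x_f^b are the forecasts from
% the analysis and from the background respectively, and x_t the verifying truth:
\delta e \;\approx\; \mathbf{d}^{\mathrm{T}}\,\mathbf{K}^{\mathrm{T}}\,\mathbf{M}^{\mathrm{T}}\,
\mathbf{C}\,\bigl[(\mathbf{x}_f^{a}-\mathbf{x}_t) + (\mathbf{x}_f^{b}-\mathbf{x}_t)\bigr]
```

Ensemble-based strategies aim to estimate the same quantity without an adjoint model, using ensemble statistics instead; comparing the two is exactly the kind of exercise described above.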

The Adjoint Workshop provided a perfect environment for early career researchers to interact with experts in the field from all around the world. Attending the workshop helped me engage in healthy discussions about my work and about data assimilation in general. The full programme, with abstracts and presentations, is available on the workshop website: https://www.morgan.edu/adjoint_workshop

Thanks to everyone who contributed to this workshop.

Investigating alternative optimisation methods for variational data assimilation

by Maha Kaouri

Supported by the DARE project, a few others from the University of Reading and I recently attended the week-long workshop on sensitivity analysis and data assimilation in meteorology and oceanography (a.k.a. the Adjoint Workshop) in Aveiro, Portugal.

The week consisted of 60 talks on a variety of selected topic areas, including sensitivity analysis and general theoretical data assimilation. I presented the latest results from my PhD research in this topic area and discussed the benefits of using globally convergent methods in variational data assimilation (VarDA) problems. Variational data assimilation combines two sources of information: a mathematical model and real data (e.g. satellite observations).
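In its standard form (generic notation, not tied to any particular operational system), the VarDA problem is to minimise a cost function that penalises the distance from both sources of information:

```latex
% x_b is the background (model) state, y the observations, H the observation
% operator, and B and R the background and observation error covariance matrices.
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathrm{T}}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```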

The overall aim of my research is to investigate the latest mathematical advances in optimisation to understand whether the solution of VarDA problems could be improved or obtained more efficiently through the use of alternative optimisation methods, whilst keeping computational cost and calculation time to a minimum. A possible application of the alternative methods would be to estimate the initial conditions for a weather forecast where the dynamical equations in this case include the physics of the earth system. Weather forecasting has a short time window (the forecast will no longer be useful after the weather event occurs) and so it is important to investigate alternative methods that provide an optimal solution in the given time.

The VarDA problem is known in numerical optimisation as a nonlinear least-squares problem which is solved using an iterative method – a method which takes an initial guess of the solution and then generates a sequence of better guesses at each step of the algorithm. The problem is solved in VarDA as a series of linear least-squares (simpler) problems using a method equivalent to the Gauss-Newton optimisation method. The Gauss-Newton method is not globally convergent in the sense that the method does not guarantee convergence to a stationary point given any initial guess. This is the motivation behind the investigation of newly developed, advanced numerical optimisation methods such as globally convergent methods which use safeguards to guarantee convergence from an arbitrary starting point. The use of such methods could enable us to obtain an improvement on the estimate of the initial conditions of a weather forecast within the limited time and computational cost available.
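To make the distinction concrete, here is a small illustrative sketch (a toy nonlinear least-squares problem, not the VarDA system itself) of plain Gauss-Newton versus a safeguarded version with a backtracking line search, which is one simple ingredient used by globally convergent methods:

```python
import numpy as np

# Minimise f(x) = 0.5 * ||r(x)||^2 for a toy nonlinear residual r.
# Plain Gauss-Newton takes the full step; the safeguarded version adds a
# backtracking (Armijo) line search so the cost cannot increase at any iteration.

def residual(x):
    # toy nonlinear residual r: R^2 -> R^3
    return np.array([x[0] - 1.0,
                     10.0 * (x[1] - x[0] ** 2),
                     np.sin(x[0] * x[1])])

def jacobian(x):
    return np.array([[1.0, 0.0],
                     [-20.0 * x[0], 10.0],
                     [x[1] * np.cos(x[0] * x[1]), x[0] * np.cos(x[0] * x[1])]])

def gauss_newton(x0, safeguard=True, max_iter=50, tol=1e-10):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        grad = J.T @ r
        if np.linalg.norm(grad) < tol:
            break
        # Gauss-Newton step: solve the linearised least-squares problem.
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        t = 1.0
        if safeguard:
            # Backtracking line search: shrink the step until the cost decreases.
            f0 = 0.5 * r @ r
            while 0.5 * np.sum(residual(x + t * step) ** 2) > f0 + 1e-4 * t * grad @ step:
                t *= 0.5
                if t < 1e-12:
                    break
        x = x + t * step
    return x

print(gauss_newton([-1.2, 1.0]))
```

With the safeguard switched on, the cost is guaranteed not to increase at any iteration, whatever the starting guess; plain Gauss-Newton offers no such guarantee.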

The conference brought together many key figures in weather forecasting as well as those new to the field such as myself, providing us with the opportunity to learn from each other during the talks and poster session. I had the advantage of presenting my talk on the first day, allowing me to spend the rest of the week receiving feedback from the attendees who were eager to discuss ideas and make suggestions for future work. The friendly atmosphere of the workshop made it easier as an early-career researcher to freely and comfortably converse with those more senior during the breaks.

I would like to thank the DARE project for funding my attendance at the workshop and the organising committee for hosting such an insightful event.

Accounting for Unresolved Scales Error with the Schmidt-Kalman Filter at the Adjoint Workshop

by Zak Bell

This summer I was fortunate enough to receive funding from the DARE training fund to attend the 11th workshop on sensitivity analysis and data assimilation in meteorology and oceanography. This workshop, also known as the Adjoint Workshop, provides academics and students with an occasion to present their research on the inclusion of Earth observations in mathematical models. Thanks to the friendly environment of the workshop, I had an excellent opportunity to condense a portion of my research into a poster and discuss it with other attendees.

Data assimilation is essentially a way to link theoretical models of the world to the actual world. This is achieved by finding the most likely state of a model given observations of it. A state for numerical weather prediction will typically comprise variables such as wind, moisture and temperature at a specific time. One way to assimilate observations is through the Kalman filter. The Kalman filter assimilates observations sequentially and, by taking account of the errors in our models, computations and observations, determines the most probable state of our model, which we can then use to better model or forecast the real world.

It goes without saying that a better understanding of the errors involved in the observations would lead to a better forecast. Therefore, research into observation errors is a large and ongoing area of interest. My research is on observation error due to unresolved scales in data assimilation, which can be broadly described as the difference between what an observation actually observes and a numerical model’s representation of that observation. For example, an observation taken in a sheltered street of a city will have a different value from that of a numerical model of the city that cannot represent the spatial scales of individual streets. To utilize such observations within data assimilation, the unresolved spatial scales must be accounted for in some way. The method I chose to present on my poster was the Schmidt-Kalman filter, which was originally developed for navigation purposes but has since been the subject of a few studies on unresolved-scales error within the meteorology community.

The Schmidt-Kalman filter accounts for the state- and time-dependence of the error due to unresolved scales by using the statistics of the unresolved scales. However, to save on computational expense, the unresolved state values themselves are disregarded. My poster presented a mathematical analysis of a simple example of the Schmidt-Kalman filter and highlighted its ability to compensate for unresolved-scales error. The Schmidt-Kalman filter performs better than a Kalman filter for the resolved scales only, but worse than a Kalman filter that resolves all scales, which is to be expected. Using the feedback from other attendees and ideas obtained from other presentations at the workshop, I will continue to investigate the properties of the Schmidt-Kalman filter as well as its suitability for urban weather prediction.
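For the interested reader, a minimal sketch of the Schmidt-Kalman (“consider”) analysis update is given below, assuming a linear observation operator split into resolved (Hx) and unresolved (Hb) parts; the numbers in the usage example are purely illustrative.

```python
import numpy as np

# Schmidt-Kalman ("consider") update for y = Hx @ x + Hb @ b + noise, where x is
# the resolved state (updated) and b the unresolved part (its statistics are
# "considered" but its value is never updated).

def schmidt_kalman_update(x, Pxx, Pxb, Pbb, y, Hx, Hb, R):
    """One analysis step: update x and the resolved/cross covariances only."""
    Pbx = Pxb.T
    # Innovation covariance includes the contribution of the unresolved scales.
    S = Hx @ Pxx @ Hx.T + Hx @ Pxb @ Hb.T + Hb @ Pbx @ Hx.T + Hb @ Pbb @ Hb.T + R
    # Gain for the resolved state only (no gain is applied to b).
    Kx = (Pxx @ Hx.T + Pxb @ Hb.T) @ np.linalg.inv(S)
    innovation = y - Hx @ x                  # unresolved mean assumed zero
    x_a = x + Kx @ innovation
    Pxx_a = Pxx - Kx @ (Hx @ Pxx + Hb @ Pbx)
    Pxb_a = Pxb - Kx @ (Hx @ Pxb + Hb @ Pbb)
    # Pbb is left unchanged: the unresolved scales are never estimated.
    return x_a, Pxx_a, Pxb_a, Pbb

# Tiny usage example with scalar resolved/unresolved parts.
x = np.array([280.0]); Pxx = np.array([[1.0]])
Pxb = np.array([[0.3]]); Pbb = np.array([[0.5]])
Hx = np.array([[1.0]]); Hb = np.array([[1.0]]); R = np.array([[0.2]])
y = np.array([281.0])
print(schmidt_kalman_update(x, Pxx, Pxb, Pbb, y, Hx, Hb, R))
```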

Working with other scientists in Data Assimilation

by Luca Cantarello

Luca Cantarello is a PhD student at the University of Leeds. He received funding from the DARE training fund to attend the data assimilation tutorials at the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, 1-6 July 2018, Aveiro, Portugal. Here he writes about his experience.

Since I started my PhD project at the University of Leeds as a NERC DTP student a few months ago, I have been reflecting on the importance of not feeling too alone in doing science, exactly as in everyday life. The risk of feeling isolated while doing research applies to all PhD students, but it may be particularly relevant in cases like mine, as very few people at my university work on data assimilation.

In this sense, joining the 11th Adjoint Workshop on sensitivity analysis and data assimilation in meteorology and oceanography in Aveiro last week was an excellent opportunity, and I am very grateful to the University of Reading and the DARE project, whose funding enabled me to attend.

In Aveiro I enjoyed the company and the support of a vast community of scientists, all willing to share their findings and discuss problems and needs with their peers. There was an impressive synergy in the room among the many researchers who had attended the same workshop several times in the past, despite it being held only every second or third year.

 

The photograph is of the hotel where the adjoint workshop was held.

The workshop was an important training opportunity for me, as I am still in the process of learning, but it was also an occasion to revive my motivation with new stimuli and ideas before getting to the heart of my PhD in the coming two years.

During the poster session I took part in, I got useful feedback and comments about my project (supervised by Onno Bokhove and Steve Tobias at the University of Leeds and by Gordon Inverarity at the Met Office), in which I am trying to understand how satellite observations at different spatial scales affect a data assimilation scheme. I will bring back to Leeds all the hints and suggestions I have collected, hoping to attend the next Adjoint Workshop in a few years and to be able to tell people about the progress I have achieved in the meantime.

 

Producing the best weather forecasts by using all available sources of information

Jemima M. Tabeart is a PhD student at the University of Reading in the Mathematics of Planet Earth Centre for Doctoral Training. She received funding from the DARE training fund to attend the data assimilation tutorials at the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, 1-6 July 2018, Aveiro, Portugal. Here she writes about her research.

In order to produce the best weather forecast possible, we want to make use of all available sources of information. This means combining observations of the world around us at the current time with a computer model that can fill in the gaps where we have no observations, by using known laws of physics to evolve observations from the past. This combination process is called data assimilation, and our two data sources (the model and observations) are weighted by our confidence in how accurate they are. This means that knowledge about errors in our observations is really important for getting good weather forecasts. This is especially true where we expect errors between different observations to be related, or correlated.

 
Caption: An image of the satellite MetOp-B which hosts IASI (Infrared Atmospheric Sounding Interferometer) – an instrument that I have been using as an example to test new mathematical techniques to allow correlated errors to be used inexpensively in the Met Office system.  Credit: ESA AOES Medialab MetOp-B image.

Why do such errors occur? No observation will be perfect: there might be biases (e.g. a thermometer that measures everything 0.5℃ too hot), we might not be measuring variables that are used in the numerical model, so converting observations introduces an error (this is the case with satellite observations), and we might be using high-density observations that can detect phenomena that our model cannot (e.g. intense localised rainstorms might not show up if our model can only represent features larger than 5 km). Including observation error correlations means we can use observation data more intelligently and even extract extra information, leading to improvements in forecasts.

However, these observation error correlations cannot be calculated directly – we instead have to estimate them. Including these estimates in our computations is very expensive, so we need to find ways of including this useful error information in a way that is cheap enough to produce new forecasts every 6 hours! I research mathematical techniques to adapt error information estimates for use in real-world systems.
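As a rough illustration of what “adapting the estimates” can mean in practice, here is a sketch of two reconditioning approaches commonly discussed in the literature (ridge regularisation and a minimum-eigenvalue method); the target condition number and the toy matrix below are illustrative choices, not operational settings.

```python
import numpy as np

# Two widely discussed ways to "recondition" an estimated observation error
# covariance matrix so that it can be used cheaply and stably:
#  - ridge regularisation: add a small multiple of the identity;
#  - minimum-eigenvalue method: raise the smallest eigenvalues to a floor.

def ridge_recondition(R, target_cond):
    evals = np.linalg.eigvalsh(R)
    lam_max, lam_min = evals[-1], evals[0]
    # Choose delta so that (lam_max + delta) / (lam_min + delta) = target_cond.
    delta = max((lam_max - target_cond * lam_min) / (target_cond - 1.0), 0.0)
    return R + delta * np.eye(R.shape[0])

def min_eig_recondition(R, target_cond):
    evals, evecs = np.linalg.eigh(R)
    floor = evals[-1] / target_cond
    evals_new = np.maximum(evals, floor)
    return (evecs * evals_new) @ evecs.T

# Toy correlated observation error covariance (AR(1)-type correlations).
n = 5
c = 0.9 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
print(np.linalg.cond(c),
      np.linalg.cond(ridge_recondition(c, 10.0)),
      np.linalg.cond(min_eig_recondition(c, 10.0)))
```

Both methods keep the overall correlation structure while reducing the condition number, so the resulting matrix is cheaper and more stable to use in the assimilation.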


Caption: Error correlation information for IASI instrument. Dark colours indicate stronger relationships between errors for different channels of the instrument – often strong relationships occur between variables that measure similar things. We want to keep this structure, but change the values in a way that makes sure our computer system still runs quickly.

At the workshop I’ll be presenting new work that tests some of these methods using the Met Office system. Although we can improve the time required for our computations, using different error correlation information alters other parts of the system too! As we don’t know “true” values, it’s hard to know whether these changes are good, bad or just different. I’m looking forward to talking with scientists from other organisations who understand this data and can provide insight into what these differences mean. Additionally, as these methods are already being used to produce forecasts at meteorological centres internationally, discussions about the decision process and impact of different methods are bound to be illuminating!

Coping with large numbers of observations

Takuya Kurihana has received funding from the DARE training fund to attend Data Assimilation tutorials at the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, 1-6 July 2018, Aveiro, Portugal. Here he writes about himself and his research. 

 

What if we could more accurately predict what atmospheric phenomena will happen in the next minute, hour and day using the limited information we currently have? This scientific question has inspired me to be involved in research since I was an undergraduate student. I am Takuya Kurihana, a Meteorology MS student at the University of Tsukuba under the supervision of Dr Hiroshi L. Tanaka, and an incoming Computer Science PhD student at the University of Chicago. My current research focuses on 1. how to improve the accuracy of weather forecasting (“predictability”), and 2. how to make use of a massive amount of dense meteorological data for data assimilation. By developing a new application for the second purpose, I am now researching the impact on daily-scale weather forecasts of using as many atmospheric observations as possible.

 

Regarding the improvement of predictability, as a previous article by Zak Bell, Making the Most of Uncertain Urban Observations, explained, data assimilation plays an imperative role in numerical weather prediction, because the longer we run a numerical weather forecasting model, the larger the forecast error grows. This is because of uncertainty in the initial conditions: even with the most precise model, this tendency would not fundamentally change. However, applying data assimilation methods can minimize the error by incorporating observations into the optimization process; Fig. 1 is an example experiment showing an advantage of data assimilation. Therefore, in real operations we have to gather a variety of denser observational data from a wider range of points, both horizontally and vertically. In addition to land observations (Figure 2) [1], sondes, and buoys, recent satellite observations (Figures 3 and 4) [2, 3], which provide much richer and denser information, have been utilized in operational data assimilation.

 

The spatially condensed satellite data, however, cause one problem for current data assimilation methods. The issue is that, according to previous research, overly dense data can actually degrade the quality of assimilation products. Simply put, we have to leave out a large proportion of these data (“thinning”), even while the technology of meteorological satellites is advancing. Moreover, there are several resource limitations in preparing the forecast, since we cannot afford to compute endlessly and the performance and size of the computer are constrained. In order to use a larger proportion of these data without reducing assimilation quality, a spatial-averaging procedure, the so-called super-observation (SO) procedure, has been developed. As one SO system, I proposed a new algorithm that can deal with a massive amount of satellite big data efficiently and quickly within the grid coordinates of a cloud-resolving model (the Nonhydrostatic ICosahedral Atmospheric Model; NICAM). The algorithm primarily targets reducing the “do/for loop” iterations needed to find the nearest model grid location, which also allows the computation by a complex observation operator to be skipped.
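As a rough illustration of the idea (not the actual NICAM implementation), a super-observation step can be written so that every observation is assigned to its grid cell in one vectorised pass and then averaged per cell, avoiding an explicit do/for loop search for the nearest grid point; a regular latitude-longitude grid is assumed here purely for simplicity.

```python
import numpy as np

# Super-observation sketch: average all observations that fall in the same
# model grid cell. A regular lat-lon grid is assumed; NICAM's icosahedral grid
# would need its own nearest-cell index function.

def superob(obs_lat, obs_lon, obs_val, dlat=1.0, dlon=1.0):
    # Map each observation to a flat grid-cell index in one vectorised pass.
    i = np.floor((obs_lat + 90.0) / dlat).astype(int)
    j = np.floor((obs_lon % 360.0) / dlon).astype(int)
    ncols = int(360.0 / dlon)
    cell = i * ncols + j
    # Sum values and counts per cell, then divide: the super-observations.
    sums = np.bincount(cell, weights=obs_val)
    counts = np.bincount(cell)
    occupied = counts > 0
    return np.flatnonzero(occupied), sums[occupied] / counts[occupied]

# Toy usage: one million synthetic radiances reduced to one value per cell.
rng = np.random.default_rng(1)
lat = rng.uniform(-90, 90, 1_000_000)
lon = rng.uniform(0, 360, 1_000_000)
val = 250.0 + rng.normal(0, 5, lat.size)
cells, so_values = superob(lat, lon, val)
print(cells.size, "super-observations from", lat.size, "raw observations")
```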

 

Which is better, thinning or SO? Although this is a controversial discussion among meteorologists, I would like to give one example at the Workshop on Sensitivity Analysis and Data Assimilation in Portugal. While the new application still needs to be tested in further numerical experiments as part of my master’s research project, I believe we should consider a more efficient usage of these meteorological “big data” in the near future. Through attending the workshop, I would like to discuss my application and its effect on data assimilation, as well as receive fruitful advice from cutting-edge researchers.

 

Figure 1. These time series of trajectories show how a small difference between two initial conditions eventually ends up in completely different behaviour. Blue: no data assimilation after 200 time steps; red: Lorenz-63 trajectory with data assimilation. Demo by Takuya Kurihana.

Figure 2. This map shows the sparse locations of land observation points.

 

Figure 3. Map of the polar-orbiting constellation coverage from one GDAS cycle for 3 polar configurations (taken from Boukabara et al. 2016)

 

Figure 4. Location of all AMVs used in the data assimilation for the UK Met Office model in 2013 (Source: UK Met Office, http://www.eumetrain.org/data/4/438/navmenu.php?tab=2&page=2.0.0).

 

 

[1] https://www.dwd.de/EN/research/weatherforecasting/num_modelling/02_data_assimilation/data_assimilation_node.html

[2] S.-A. Boukabara, K. Garrett, K. V. Kumar, “Potential Gaps in the Satellite Observing System Coverage: Assessment of Impact on NOAA’s Numerical Weather Prediction Overall Skills” (2016). Mon. Wea. Rev., 144, 2547–2563, https://doi.org/10.1175/MWR-D-16-0013.1.

[3] http://www.eumetrain.org/data/4/438/navmenu.php?tab=2&page=2.0.0

 

 

Making the Most of Uncertain Urban Observations

by Zak Bell

I started my PhD at the University of Reading in September 2017 under the supervision of Sarah Dance and Joanne Waller. My project concerns methods of data assimilation that compensate for the uncertainty associated with urban observations. Data assimilation is a method of combining mathematical models with real observations to improve the accuracy of the model, and it is used extensively in numerical weather prediction (NWP). This is achieved by obtaining the best possible initial conditions for the model’s variables, also known as the state, through consideration of the uncertainty of the observations as well as of the model itself. Urban observations from inexpensive datasets are not yet fully utilised in NWP models, and this provides the motivation for my research project.

This is a solar-powered cellular weather station, an example of an instrument able to record weather observations in an urban environment [i].

To assimilate urban observations we must first understand the error associated with them. Observation error comprises the measurement error due to the instruments making the observations and what is known as representation error. The representation error arises from the discrepancy between the modelled representation of an observation and what is actually observed, and can be divided into three parts: pre-processing error, observation-operator error and error due to unresolved scales. Pre-processing error is the result of imperfections in the selection and preparation of the observations, and observation-operator error is associated with the ability to map the model variables to their observation counterparts. The final part of the representation error, the error due to unresolved scales, arises because NWP models cannot capture all atmospheric scales, and it is the motivation for my project.
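In standard (generic) notation, this decomposition can be written as follows, where y is the observation, x_t the true state and H the observation operator:

```latex
% Total observation error split into measurement and representation parts;
% the representation part is further split into pre-processing,
% observation-operator and unresolved-scales contributions.
y = H(x_t) + \varepsilon, \qquad
\varepsilon = \underbrace{\varepsilon^{m}}_{\text{measurement}}
            + \underbrace{\varepsilon^{pre} + \varepsilon^{H} + \varepsilon^{u}}_{\text{representation error}}
```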

Error due to unresolved scales is a consequence of how a domain is represented by a mathematical model. A typical way for a model to represent a domain is to turn it into a discrete number of grid points at which the model variables are evaluated. For example, consider the grid below as a representation of a mesoscale domain. The two stars represent the positions of two weather stations observing temperature, and a cold front is shown moving through the domain. As we can see, one observing instrument lies within the cold front and one outside it. Since both instruments are within the same grid cell, the model does not have a high enough resolution to capture this process. This results in a scale-mismatch error, which must be properly compensated for in the data assimilation process if these observations are to be used. In the context of urban environments, the scale-mismatch error is due to the buildings surrounding the place where the observation is taken. For instance, an observation taken in a sheltered street would produce a different value from one taken on top of a skyscraper.

The standard approach for dealing with scale-mismatch error is to include it as part of the observation error covariance matrix. However, there are data assimilation methods that take explicit account of both resolved and unresolved scales. An example of such a method is the Schmidt-Kalman filter [ii], an adaptation of the Kalman filter able to consider the influence of processes not resolved by the model. My project is concerned with finding other suitable methods to deal with unresolved scales for the assimilation of urban observations in NWP models. From this, I hope to determine the best data assimilation method for utilising uncertain urban observations in urban weather prediction.

[i] This image is taken from http://www.weathershop.com/cellular_weather_station.html

[ii] https://en.wikipedia.org/wiki/Schmidt-Kalman_filter

 

Overview of the final Maths Foresees general assembly or why we need the restaurant

This year started with attending the final Maths Foresees general assembly, which showcased the diverse research and outreach activities funded by the network since its launch. The assembly took place in Leeds on 8-10 January 2018 and also included updates from the Environmental Modelling in Industry study group held in 2017. Nearly a year ago now, I also took part in this study group and joined the challenge posed by SWECO (presented by James Franklin) on hydraulic modelling of collection networks for civil engineering.

Part of the Sweco team working on the sewer problem at Maths Foresees 2017 study group event

It was Gavin Ester (UCL), our group leader, seen writing in the figure above, who gave the update at the assembly on the findings of our group in his presentation “Hydraulic modelling of collection networks for civil engineering”. You can also read my original blog article about the challenge, “Sewer network challenge at MathsForesees study group 2017”.

The three days of the final assembly were full of interesting talks (many of which you can find on the event page), with breakout groups each day discussing flood control, urban meteorology, and future funding strategies. Dr Sarah Dance and I from the DARE team attended the general assembly and gave a joint presentation about the use of data assimilation in urban environments, from understanding observation errors to improving flood forecasts, including a call for pilot projects. You can find our presentations here and here.

Over the course of these three days we saw many interesting presentations on flood forecasting, decision making using uncertain forecasts, theory development of dune formation, multi-scale modelling for urban weather, modelling of wave dynamics and much more. Sara Lombardo (Loughborough University) presented an overview of her findings on “Outreach project: Giant waves in the ocean: from sea monsters to science”, which generated a heated discussion among most of the participants. Through her outreach work, Sara uncovered the importance of engaging school children in scientific subjects right from the early years, while they are still in primary school, in order to keep children’s interest in science alive throughout their school years, and thus to avoid the majority of children starting secondary school thinking that they are not good enough to do mathematics or other STEM subjects.

 

The postcard from the joint outreach project by NUSTEM, MathsForesees and EPSRC, given to children who participated in the outreach projects at selected schools, inviting their parents to come and see their child’s work and activities in the outreach project.

The discussion that followed Sara’s presentation highlighted the importance of developing and using outreach tools in schools and local communities to bridge the gap between academics and the public, allowing the general public to experience the science. One such outreach tool is the flood demonstrator Wetropolis, developed by Prof. Onno Bokhove (University of Leeds), a new version of which was also showcased at the final general assembly; see the tweet below by Dr A. Chen.

The Maths Foresees network was established in May 2015 under the EPSRC Living with Environmental Change (LWEC) umbrella to forge strong links between researchers in the applied mathematics and environmental science communities and end-users of environmental research. At the final assembly it was evident that such links are very valuable for academics and industries alike. Much more needs to be done to allow such collaboration to flourish; as Andy Moores from the Environment Agency said in his presentation “A view from an EA Research Perspective”, there needs to be a restaurant, a nourishing environment, for a relationship to blossom and be sustained.

The energetic discussion that followed Andy Moores’ talk made it obvious that everyone present had benefited from taking part in the Maths Foresees network. The network has provided very fruitful ground where academia and industry can meet to discuss their problems and exchange ideas, allowing both sides to take advantage of each other’s experience, knowledge, and tools to solve real-world problems. It was felt very strongly that networks such as Maths Foresees, which provide this nourishing middle ground, are necessary to sustain and further collaborations between academia, industry, and local communities.

 


The featured image

Artists and academics from MathsForesees worked with young children to produce artwork relating to non-linear waves. Image taken from the @MathsForesees Twitter page.