Investigating the uncertainty of weather radar data

This post describes the work of Master's student Vasiliki Kouroupaki, carried out in collaboration with the UK Met Office.

[Image: the UK weather radar network]

In numerical weather prediction, nowcast, hindcast and forecast models can be improved through data assimilation. Data assimilation is the technique that combines observations with output from a previous short-range forecast (the background) to produce an optimal estimate of the state of the atmosphere (the analysis).
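To make this weighting concrete, here is a toy single-variable sketch (an illustration of the principle only, not the operational scheme): the analysis is a variance-weighted combination of the background and the observation, so the source with the smaller error variance receives the larger weight.

```python
# Toy scalar data assimilation update (illustrative values, assumed).
sigma_b2 = 1.0   # background error variance
sigma_o2 = 0.25  # observation error variance
x_b = 14.2       # background value (e.g. temperature in degrees C)
y = 15.0         # observed value

k = sigma_b2 / (sigma_b2 + sigma_o2)  # optimal weight on the innovation y - x_b
x_a = x_b + k * (y - x_b)             # analysis: closer to the more accurate source
sigma_a2 = (1 - k) * sigma_b2         # analysis error variance

print(f"analysis = {x_a:.2f}, analysis error variance = {sigma_a2:.2f}")
```

Here the observation is four times more accurate than the background, so the analysis (14.84) sits much closer to the observation, and its error variance (0.2) is smaller than that of either input.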

Radar reflectivity observations are assimilated by the Met Office in order to provide up-to-date information about rainfall in the initial conditions for UK weather forecasts. In the assimilation, observations are assigned weights according to their error statistics. Depending on the kind of observation, different factors and processes can give rise to errors, and for an optimal analysis these errors must be correctly specified. However, because the true errors are not known, their statistics need to be estimated.

In this work, the uncertainties of radar reflectivity observations assimilated into the Met Office UKV model are examined using a diagnostic technique. Data come from the operational UKV model (hourly cycling 4D-Var) or from trial experiments cycled four times per day. The diagnostic is based on combinations of observation-minus-background, observation-minus-analysis and background-minus-analysis differences.

The results show that observation error variances are higher for winter (1 Dec 2017 to 18 Jan 2018) than for summer (16 Jul to 16 Aug 2018) and that they increase for higher reflectivity values. Further investigations classified the data by beam elevation and by radar ID. These showed that the error variance was greater for beam elevations between 0.5 and 1.0 degrees and between 3.0 and 4.0 degrees. Also, error statistics for different radars were positively correlated with the mean reflectivity observed by each radar.
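A widely used diagnostic built from exactly these differences is that of Desroziers et al. (2005), in which the mean product of the observation-minus-analysis and observation-minus-background differences estimates the observation error variance. A minimal sketch with synthetic stand-in data (the study itself uses innovation statistics from the UKV assimilation):

```python
import numpy as np

# d_ob: observation-minus-background, d_oa: observation-minus-analysis,
# pooled over many assimilation cycles. Synthetic stand-ins are used here.
rng = np.random.default_rng(42)
d_ob = rng.normal(0.0, 2.0, size=10_000)
d_oa = 0.2 * d_ob + rng.normal(0.0, 0.5, size=10_000)

# Desroziers et al. (2005): if the assimilation's error statistics are
# consistent, E[d_oa * d_ob] equals the observation error variance.
sigma_o2_est = np.mean(d_oa * d_ob)
print(f"estimated observation error variance: {sigma_o2_est:.2f}")
```

The same statistic can be computed separately for subsets of the data, which is how classifications by reflectivity value, beam elevation or radar ID, as above, yield per-class error estimates.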

Further investigation of observation error statistics in the assimilation could improve the initial conditions and thereby operational forecasts for convective rainfall events.

Operational meteorology for the public good, or for profit?

Most countries have a national weather service, funded by the government. A key role is to provide a public weather service, publishing forecasts to help with everyday business and personal decisions, as well as providing weather warnings for hazardous events. Large sums of money are invested in research and development of forecasting systems, in supercomputing resources and in observing networks. For example, in 2018-19 the UK Met Office invested £55 million in satellite programmes. International cooperation between weather services means that weather data obtained by one country are usually distributed to others through the World Meteorological Organization (WMO) Global Telecommunication System (GTS) in near real time, and for the common good. Is this all about to change?

In the future there is likely to be an increasing need for smart, local forecasts for the safety of autonomous vehicles (e.g. to allow the vehicle to respond to rain, snow and ice). Such vehicles also provide an observing platform able to take local measurements of weather that could be used to improve forecasts. But who owns the data (the driver, the car owner, the car manufacturer…), and can it be distributed for the common good? Can the data be trusted? What about privacy concerns?

[Image: IBM weather infographic]

Across the observation sector, access to space is getting less expensive. For example, depending on the specifications, a nanosatellite can be built and placed in orbit for 0.5 million euros. Furthermore, industry is beginning to run its own numerical weather prediction models (e.g. IBM Weather). This means that a growing number of companies are investing in Earth observation and numerical weather prediction, and wanting financial returns on their investments.

Do we need a new paradigm for weather prediction?

wCROWN: Workshop on Crowdsourced data in Numerical Weather Prediction

by Sarah Dance

On 4-5 December 2018, the Danish Meteorological Institute (DMI) hosted a workshop on crowdsourced data in numerical weather prediction (NWP), attended by Joanne Waller and Sarah Dance from the DARE project. DMI hosted the workshop with two aims: 1) to gather experts on crowdsourced data for NWP and start a network of people working on the subject, and 2) to produce a white paper directing the research community towards best practices and guidelines on the subject.

Presenters from the University of Washington (Seattle), the University of Reading and several operational weather centres including the Met Office (UK), the German Weather Service (DWD), Meteo France, ECMWF, KNMI and EUMETNET gave us status reports on their research into using crowdsourced data, opportunistic data and citizen science. We discussed the issues arising in the use of such data and agreed to write a workshop report together to feed into EUMETNET activities. We also enjoyed a fascinating tour of the DMI operational forecasting centre.

Flooding from Intense Rainfall

Several members of the DARE team were involved in the  NERC Flooding from Intense Rainfall (FFIR) programme open event, held at the Royal Society in London on 27 November 2018.

Dr Linda Speight, the FFIR Policy and Impact Officer, wrote this overview of the event.

Over 3 million households in the UK are at risk of surface water flooding, and this number is set to rise in the future. Surface water flood events happen quickly and affect small areas; the surrounding region may not see any rainfall at all. This makes them difficult to forecast.

Through the NERC-funded Flooding from Intense Rainfall (FFIR) programme, meteorologists, hydrologists, scientists, consultants and operational experts are working together to reduce the risks of damage and loss of life caused by surface water and flash floods.

The research draws on everything from historic newspaper archives to drones and high-speed computers. It has identified places vulnerable to flash flooding, developed new techniques for monitoring rivers during flood events, improved weather forecasts for intense rainfall and demonstrated the potential for real-time simulation of floods in urban areas. Importantly, the five-year programme has helped improve communication between the hydrology and meteorology research communities. This will have lasting benefits into the future.

At the programme showcase event at the Royal Society in November 2018 there was a hands-on opportunity to engage with the challenges of flooding from intense rainfall. Alongside presentations and an expert panel debate, participants could immerse themselves in a virtual-reality simulation of a flash flood, watch convective rainfall develop on a giant forecast globe and share their thoughts on the modelling and communication chains that underpin flood forecasting.

A short video about the programme is available here.


Or you can find out more details at http://blogs.reading.ac.uk/flooding/

Dr Sam Illingworth from Manchester Metropolitan University responded to the event with poetry:

 

After the Flood

 

When I thought of floods

I thought of the heavens breaking forth

In biblical proportions.

Forty days and forty nights of rain.

I thought of Boxing Day 2015;

The pain in my left hand as I scooped

Dirty water out of my in-law’s outhouse

Using nothing

More than a gravy boat and lashings

Of dampened Christmas spirit.

 

When I thought of floods

I thought about days of sustained rainfall.

It never even crossed my mind that surface water flooding

Or thunderstorms could decimate the land

In hours;

Not days.

 

When I thought of floods

I thought about rain gauges and sandbags;

I didn’t think about how convective events form,

How soil moisture could be used to forecast flow,

Or how our future of flood defence

Could ever be bound to our arid past.

To my great shame I did not even consider:

The conditioning of least-squares problems in variational data assimilation.

 

When I thought of floods

I thought of observations;

Of closing the floodgates after the horse had bolted.

Observations that masked an inevitable inability

To adapt to our environments.

I thought of shattered communities,

Broken apart not just by the unrelenting force of the rising waters

But by the isolation and helplessness of being

Told to sit and wait in silence

For the cavalry to arrive.

 

But now….

Now when I think of floods

I think of our improved knowledge of catchment susceptibility,

And how this will help decision makers

Identify locations at risk of flooding.

I think of being able to forecast a flood event in real time,

And how this will enable better decision making and communication.

 

But most of all I think about people.

Of end-to-end-forecasting, knowledge sharing, and upstream engagement.

I think about how flood chronologies can

Provide a powerful data set

To develop storylines around flood histories;

Histories which can be used to engage local communities.

And how these communities can not only learn

To be resilient,

But can help to build the resilience

That we need;

To stop us all

From being washed away.


Our first DARE workshop

by Sarah Dance

[Image: workshop participants]

The DARE team organised a workshop on data science for high-impact weather and flood prediction, held by the river at the lovely University of Reading Greenlands Campus in Henley-on-Thames, 20-22 November 2017. The workshop objectives were to enable discussion and exchange of expertise at the boundary between digital technology, data science and environmental hazard modelling, including:

  • Data assimilation and data science for flood forecasting and risk planning
  • Data assimilation and data science for high impact weather forecasting
  • Smart decision making using environmental data

The meeting was attended by over 30 participants from 5 different countries. We had some great presentations (to be made available on this webpage) and discussion. We came up with some recommendations to help promote and deliver research and business applications in the digital technology and environmental hazard area. We plan to write a meeting report detailing these recommendations, which we hope will be published in a peer-reviewed international journal. Watch this space!


What’s in a number?

By Nancy Nichols

Should you care about the numerical accuracy of your computer? After all, most machines now retain about 16 digits of accuracy, but usually only about 3-4 figures of accuracy are needed for most applications; so what's the worry? To answer, consider that there have been a number of spectacular disasters due to numerical rounding error. One of the best known is the failure of a Patriot missile to track and intercept an Iraqi Scud missile in Dhahran, Saudi Arabia, on February 25, 1991, resulting in the deaths of 28 American soldiers.

The failure was ultimately attributable to poor handling of rounding errors. The computer doing the tracking calculations had an internal clock whose values were truncated when converted to floating-point arithmetic, with a relative error of about 2⁻²⁰. The clock had run for 100 hours, so the calculated elapsed time was too long by 2⁻²⁰ × 100 hours ≈ 0.3433 seconds, during which time a Scud would be expected to travel more than half a kilometre.


(See The Patriot Missile Failure)
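The arithmetic is easy to reconstruct. A sketch, assuming the commonly reported detail that the 0.1-second tick was stored in 24-bit fixed point (chopped after 23 fractional bits):

```python
# 0.1 has no finite binary expansion, so storing it in fixed point chops the tail.
tenth_exact = 0.1
tenth_chopped = int(0.1 * 2**23) / 2**23      # value actually held in the register

per_tick_error = tenth_exact - tenth_chopped  # ~9.5e-8 s per tick: a relative
                                              # error of about 2**-20
ticks = 100 * 60 * 60 * 10                    # 100 hours of 0.1 s clock ticks
print(f"accumulated drift: {per_tick_error * ticks:.4f} s")  # ~0.3433 s
```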

The same problem arises in other algorithms that accumulate and magnify small round-off errors due to the finite (inexact) representation of numbers in the computer. Algorithms of this kind are referred to as 'unstable' methods. Many numerical schemes for solving differential equations have been shown to magnify small numerical errors. It is known, for example, that L.F. Richardson's original attempts at numerical weather forecasting were essentially scuppered due to the unstable methods used to compute the atmospheric flow. Much time and effort have since been invested in developing and carefully coding methods for solving algebraic and differential equations so as to guarantee stability, and excellent software is publicly available. Academics and operational weather forecasting centres in the UK have been at the forefront of this research.
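A classic small illustration of instability (a textbook example, not Richardson's scheme) is the recurrence I_n = 1 - n*I_{n-1} for the integrals I_n of x^n * e^(x-1) over [0, 1]. Run forward, it multiplies the initial rounding error by n!; run backward, the same relation shrinks any error at every step:

```python
import math

# Unstable: forward recurrence from the exact I_0 = 1 - 1/e. The rounding
# error in I_0 is multiplied by n at each step, i.e. by n! overall.
I = 1.0 - 1.0 / math.e
for n in range(1, 21):
    I = 1.0 - n * I
print(f"forward  (unstable): I_20 = {I:.3e}")  # wildly wrong in size and sign

# Stable: backward recurrence I_{n-1} = (1 - I_n) / n from a crude guess
# for I_30. The guess's error is divided by n at each step and dies away.
J = 0.0
for n in range(30, 20, -1):
    J = (1.0 - J) / n
print(f"backward (stable):   I_20 = {J:.6f}")  # ~0.045545, essentially exact
```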

Even with stable algorithms, however, it may not be possible to compute an accurate solution to a given problem.   The reason is that the solution may be sensitive to small errors  –  that is, a small error in the data describing the problem causes large changes in the solution.  Such problems are called ‘ill-conditioned’.   Even entering the data of a problem into a computer  –  for example, the initial conditions for a differential equation or the matrix elements of an eigenvalue problem  –   must introduce small numerical errors in the data.  If the problem is ill-conditioned, these then lead to large changes in the computed solution, which no method can prevent.

So how do you know if your problem is sensitive to small perturbations in the data? Careful analysis can reveal the issue, but for some classes of problems there are measures of the sensitivity, or the 'conditioning', of the problem that can be used. For example, it can be shown that small perturbations in a matrix can lead to large relative changes in the inverse of the matrix if the 'condition number' of the matrix is large. The condition number is measured as the product of the norm of the matrix and the norm of its inverse. Similarly, small changes in the elements of a matrix will cause its eigenvalues to have large errors if the condition number of the matrix of eigenvectors is large. Of course, determining a condition number is implicitly a problem in itself, but accurate computational methods for estimating condition numbers are available.
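For matrices this check is a one-liner in standard software. A minimal sketch, using a hypothetical nearly singular 2 x 2 example:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])  # nearly singular, hence badly conditioned

# Condition number: the norm of the matrix times the norm of its inverse.
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
print(f"kappa(A) = {kappa:.2e}")             # ~4e4: relative errors in the data
print(np.isclose(kappa, np.linalg.cond(A)))  # can be magnified ~40000-fold
```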

An example of an ill-conditioned matrix is the covariance matrix associated with a Gaussian distribution. A figure in [1] shows the condition number of a covariance matrix obtained by taking samples from a Gaussian correlation function at 500 points, using a step size of 0.1, for varying length-scales. The condition number increases rapidly to 10⁷ for a length-scale of only L = 0.2 and, for length-scales larger than 0.28, the condition number is larger than the computer precision and cannot even be calculated accurately.
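The figure is not reproduced here, but the experiment can be repeated in a few lines, assuming the Gaussian correlation function takes the common form c(r) = exp(-r^2 / (2L^2)):

```python
import numpy as np

x = np.arange(500) * 0.1            # 500 points, step size 0.1
r2 = (x[:, None] - x[None, :])**2   # squared pairwise distances

for L in (0.1, 0.2, 0.3):
    C = np.exp(-r2 / (2.0 * L**2))  # correlation matrix for length-scale L
    print(f"L = {L}: cond(C) = {np.linalg.cond(C):.1e}")
# The condition number grows explosively with L; by L ~ 0.3 the reported
# value is itself unreliable, because it exceeds double precision.
```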

This result is surprising and very significant for numerical weather prediction (NWP), as the inverses of covariance matrices are used to weight the uncertainty in the model forecast and in the observations used in the analysis phase of weather prediction. The analysis is achieved by the process of data assimilation, which combines a forecast from a computational model of the atmosphere with physical observations obtained from in situ and remote sensing instruments. If the weighting matrices are ill-conditioned, then the assimilation problem also becomes ill-conditioned, making it difficult to get an accurate analysis and subsequently a good forecast [2]. Furthermore, the worse the conditioning of the assimilation problem, the more time it takes to do the analysis. This matters because the forecast needs to be produced in real time, so the analysis must be done as quickly as possible.

One way to deal with an ill-conditioned system is to rearrange the problem so as to reduce the conditioning whilst retaining the same solution. A technique for achieving this is to 'precondition' the problem using a transformation of variables. This is used routinely in operational NWP centres, with the aim of ensuring that the uncertainties in the transformed variables all have a variance of one [1][2]. A table in [1] shows the effect of the length-scale of the error correlations in a data assimilation system on the number of iterations it takes to solve the problem, with and without preconditioning: the conditioning of the problem is improved and the work needed to solve it is significantly reduced. So checking and controlling the conditioning of a computational problem is always important!
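As a minimal numerical sketch of this idea (a simplified construction, not the operational transform): the Hessian of the variational problem is S = B^-1 + H^T R^-1 H, and transforming to variables v = B^-1/2 dx replaces it with S_pc = I + B^1/2 H^T R^-1 H B^1/2, whose eigenvalues are all at least one.

```python
import numpy as np

# Background error covariance B: Gaussian correlations on a small 1-D grid,
# plus a tiny diagonal so that B can be inverted at all.
n, L, dx = 100, 0.3, 0.1
x = np.arange(n) * dx
B = np.exp(-(x[:, None] - x[None, :])**2 / (2 * L**2)) + 1e-6 * np.eye(n)

H = np.eye(n)[::5]          # observe every 5th grid point
Rinv = np.eye(H.shape[0])   # unit observation error variances

S = np.linalg.inv(B) + H.T @ Rinv @ H            # unpreconditioned Hessian

w, V = np.linalg.eigh(B)                         # symmetric square root of B
B_half = (V * np.sqrt(np.maximum(w, 0.0))) @ V.T
S_pc = np.eye(n) + B_half @ H.T @ Rinv @ H @ B_half

print(f"cond(S)    = {np.linalg.cond(S):.1e}")    # large: inherits B's conditioning
print(f"cond(S_pc) = {np.linalg.cond(S_pc):.1e}")  # modest: eigenvalues >= 1
```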

[1] S.A. Haben. 2011. Conditioning and Preconditioning of the Minimisation Problem in Variational Data Assimilation. PhD thesis, Department of Mathematics and Statistics, University of Reading.

[2]  S.A. Haben, A.S. Lawless and N.K. Nichols.  2011. Conditioning of incremental variational data assimilation, with application to the Met Office system, Tellus, 63A, 782–792. (doi:10.1111/j.1600-0870.2011.00527.x)

WMO Symposium on Data Assimilation

by Amos Lawless

In the middle of September, scientists from all round the world converged on a holiday resort in Florianópolis, Brazil, for the Seventh World Meteorological Organization Symposium on Data Assimilation. This symposium takes place roughly every four years and brings together data assimilation scientists from operational weather and ocean forecasting centres, research institutes and universities. With 75 talks and four poster sessions, there was a lot of science to fit into the four and a half days spent there.


The first day began with presentations of current plans by various operational centres, and both similarities and differences became apparent. It is clear that many centres are moving towards data assimilation schemes that are a mixture of variational and ensemble methods, but the best way of doing this is far from certain. This was apparent from just the first two talks, in which the Met Office and Meteo-France set out their different strategies for the next few years. For anyone who thought that science always provides clear-cut answers, here was an example where the jury is still out! Many other talks covered similar topics, including the best way to get information from small samples of ensemble forecasts in large systems.


In a short blog post such as this, it is impossible to do justice to the wide range of topics covered in the rest of the week, ranging from theoretical aspects of data assimilation to practical implementations. Subjects included challenges for data assimilation at convective scales in the atmosphere, ocean data assimilation, assimilation of new observation types (including winds from radar observations of insects, lightning and radiances from new satellite instruments) and measuring the impact of observations. Several talks proposed new, advanced data assimilation methods: particle filters, Gaussian filtering and a hierarchical Bayes filter were all covered. Of particular interest was a presentation on data assimilation using neural networks, which achieved results comparable to an ensemble Kalman filter at a small fraction of the computational cost. This led to a long discussion at the end of the day as to whether neural networks may be a way forward for data assimilation. The final session on the last day covered a variety of different applications of data assimilation, including assimilation of soil moisture, atmospheric composition measurements and volcanic ash concentration, as well as application to coupled atmosphere-ocean models and to carbon flux inversion.


Outside the scientific programme the coffee breaks (with mountains of Brazilian cheese bread provided!) and the social events, such as the caipirinha tasting evening and the conference dinner, as well as the fact of having all meals together, provided ample opportunity for discussion with old acquaintances and new. I came home excited about the wide range of work being done on data assimilation throughout the world and enthusiastic to continue tackling some of the challenges in our research in Reading.

The full programme with abstracts is available at the conference website, where presentation slides will also eventually be uploaded:

http://www.cptec.inpe.br/das2017/

Serving society with better weather and climate information.

by Sarah Dance

I have just come back from the European Meteorological Society 2017 conference in Dublin, where I was co-convenor of a session on data assimilation. The conference theme was Serving Society with Better Weather and Climate Information. A key challenge for the meteorological communities is how best to harness the wealth of data now available, both observational and modelled, to generate and effectively communicate relevant, tailored and timely information, ensuring the highest-quality support for users' decision-making. The conference produced some highlight videos that sum up the activities better than I could!

Can cars provide high quality temperature observations for use in weather forecasting?

By Diego de Pablos

I am an undergraduate student at the University of Reading who has recently finished a UROP (Undergraduate Research Opportunities Programme) placement; the project was funded by the University and carried out in partnership with the Met Office. Since I am currently taking the Environmental Physics course in the Meteorology department, this project interested me for two reasons: first, I plan to do a PhD at Reading and wanted a feel for that experience, and secondly, the research topic seemed to have the potential to improve weather forecasting and road safety overall. The project consisted of taking a first look at the temperature observations from the built-in thermometer of a car, and comparing them with UKV model surface temperatures and observations from nearby WOW sites [1].

Although the use of vehicles in weather forecasting has been studied before [2], in most cases advanced thermometers were installed on the vehicles to obtain the observations, or other parameters were used (e.g. the state of the antilock brakes or windshield wipers). This project aimed to assess the potential of the native ambient air temperature sensor that most modern cars (less than ten years old) already have, with a view to making these observations available for predicting the road state in the near future.

A series of days of temperature observations registered by a car's built-in thermometer were studied. The observed temperatures were extracted using an OBD dongle connected to the car's engine management system via the standard OBD port installed behind the steering wheel. The dongle sends this information to the driver's phone via Bluetooth. In the phone app, the observations and other parameters available from the dongle are decoded, and are later sent to a selected URL via a 3G/4G connection. The data are then stored in MetDB, the observation database used by the Met Office, and made available for forecasting.
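The app itself is not shown here, but the first step of this pipeline can be sketched with the open-source python-obd library (a hypothetical minimal reader; whether the sensor is exposed depends on the car and dongle):

```python
import obd  # pip install obd; talks OBD-II through a Bluetooth/USB dongle

connection = obd.OBD()  # auto-detects and connects to an attached adapter

# Standard OBD-II PID 0x46; the misspelling AMBIANT is the library's own.
cmd = obd.commands.AMBIANT_AIR_TEMP
response = connection.query(cmd)

if not response.is_null():
    print(f"ambient air temperature: {response.value}")  # e.g. 21 degC
```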


The trial showed a need for further testing of the thermometers, as it was suggested that the sensor readings could be biased with height and speed. However, the potential availability of data by sheer quantity alone is outstanding: around 20 million cars would be available to take part in data collection in the UK.

All in all, using car sensors for weather forecasting seems to have potential and will be studied thoroughly in the near future, hopefully tying its advances to those of car technology.

References:

[1] Weather Observations Website, Met Office. https://wow.metoffice.gov.uk/. Accessed 10 August 2017.

[2] William P. Mahoney III and James M. O'Sullivan. 2013. Realizing the Potential of Vehicle-Based Observations. Bulletin of the American Meteorological Society, 94(7), 1007-1018. (doi:10.1175/BAMS-D-12-00044.1)

UFMRM WG Webinar: “DARE to use CCTV images to improve urban flood forecasts”

It is difficult to predict urban floods accurately; there are many sources of error in urban flood forecasts, arising from unknown model physics, computational limits, input data accuracy and so on. However, many sources of model and input error can be reduced through the use of data assimilation methods: mathematical techniques that combine model predictions with observations to produce a more accurate forecast.

In this talk I will motivate and introduce the idea of using CCTV images as a new and valuable additional source of information in cities for improving urban flood predictions through data assimilation methods. This work is part of the Data Assimilation for REsilient City (DARE) project.

You can see the whole presentation on YouTube here or view slides here.