DARE pilot project with participants: Professor Lee Chapman, University of Birmingham; Sytse Koopmans; Gert-Jan Steeneveld, Wageningen University & Research (WUR)
The research will be verified against ground-truth observations from the Birmingham Urban Climate Lab (BUCL) observational network.
The work task aims are:
Setting up a basic model run with WRF with meteorological boundary conditions from ECMWF
Remapping land use from the CORINE land cover inventory (100 m) to the USGS classification used by WRF
Calculating the land use fraction from Landsat 8 satellite imagery
Deriving urban morphology indicators with the NUDAPT tool
For the project we completed the model infrastructure, and a basic run without data assimilation was conducted accordingly. The main effort in the last year was to create a geographical dataset that satisfies the 200 m resolution. Before applying data assimilation it is important to describe the land use and urban characteristics as well as possible.
The default available land use datasets were too coarse for a 200 m run. We have remapped the 100 m CORINE land use dataset to a format which can be read by WRF.
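As an illustration, the remapping can be sketched as a simple class lookup applied to the raster. The small mapping table below is our own illustrative choice covering only a handful of classes, not the full correspondence used in the project:

```python
import numpy as np

# Illustrative (not the project's actual) mapping from a few CORINE land
# cover codes to USGS 24-category classes as used by WRF.
corine_to_usgs = {
    111: 1,   # continuous urban fabric      -> urban and built-up land
    112: 1,   # discontinuous urban fabric   -> urban and built-up land
    211: 2,   # non-irrigated arable land    -> dryland cropland and pasture
    311: 11,  # broad-leaved forest          -> deciduous broadleaf forest
    312: 14,  # coniferous forest            -> evergreen needleleaf forest
    511: 16,  # water courses                -> water bodies
}

def remap(corine_grid):
    """Remap a 2D array of CORINE codes to USGS classes (0 = unmapped)."""
    out = np.zeros_like(corine_grid)
    for c_code, u_code in corine_to_usgs.items():
        out[corine_grid == c_code] = u_code
    return out

grid = np.array([[111, 211], [311, 511]])
print(remap(grid))
```

In practice the remapped raster would then be written in the binary tile format that WRF's geogrid program reads, with a matching index file describing the projection and resolution.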
The urban morphology has been improved by applying the National Urban Database and Access Portal Tool (NUDAPT) (Ching et al., 2009*). With this step we have advanced the representation of urban morphology in the WRF model compared to previous studies.
NUDAPT has been tested against more elementary representations in WRF in a study for a Chinese city, and its performance looks promising.
Image 1: Mean building height, Birmingham
Image 2: Land use fraction, Birmingham
*Ching J, Brown M, McPherson T, Burian S, Chen F, Cionco R, Hanna A, Hultgren T, Sailor D, Taha H, Williams D. 2009. National Urban Database and Access Portal Tool, NUDAPT. Bulletin of the American Meteorological Society 90(8): 1157–1168.
by Dr Rob Thompson and Prof Anthony Illingworth, Dept of Meteorology, University of Reading
The UK operational radar network has the potential to deliver real-time rainfall estimates every five minutes with a resolution of 1 km² over most of the populated areas of the UK. If these rain rates were accurate then such data would have a major impact on the ability to predict short-term ‘flash’ flooding events so that mitigating action can be taken. However, at present, for flood warnings, the accuracy is deemed insufficient, so the radar rainfall estimates from each radar are continuously adjusted using recent observations from ground-based rain gauges.
The UK radar network has recently been renewed and upgraded to dual polarisation, resulting in much improved data quality. In this study we will compare the radar signal obtained every five minutes from the operational Dean Hill radar with the rain rate at the ground some 20 km from the radar and just 400 m below the radar beam, and use this to validate and improve the retrieval algorithms that convert the radar return signal, or ‘reflectivity’, into a rain rate at the ground.
In the preparatory work for this DARE pilot project we have been comparing the radar reflectivity observed every five minutes with the scanning Dean Hill radar 20 km distant from Chilbolton, where we have five different high-resolution rain gauges. The radar pulse samples a volume 300 m by 300 m by 600 m and is at a height of 400 m above the gauges (see image). One of these gauges measures the size distribution of the rain drops, and so once we know the sizes and number of the drops we can calculate the reflectivity we would expect the radar to measure. Over the past two years we have found close agreement, but there appears to be a slow drift in the radar calibration of about 60%. In collaboration with the Met Office we are trying to find the source of this drift. However, once we correct for this drift, we find that with the new radar and its improved data quality there is a close correspondence between the rain from the radar and that observed at the ground. This performance appears to be much better than that obtained before the radars were upgraded.
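To illustrate the calculation, here is a minimal sketch of computing the expected reflectivity from a drop size distribution, using an assumed Marshall-Palmer exponential DSD rather than the Chilbolton disdrometer data, and then inverting the classic empirical Marshall-Palmer Z-R relation as a stand-in for a retrieval algorithm:

```python
import numpy as np

# Reflectivity factor from a drop size distribution (DSD):
#   Z = sum N(D) * D^6 * dD   [mm^6 m^-3], with D in mm, N(D) in mm^-1 m^-3.
# Illustrative Marshall-Palmer exponential DSD for R = 5 mm/h rain.
D = np.arange(0.1, 6.0, 0.1)        # drop diameters (mm), 0.1 mm bins
R = 5.0                             # assumed rain rate (mm/h)
N0 = 8000.0                         # Marshall-Palmer intercept (mm^-1 m^-3)
Lam = 4.1 * R**-0.21                # Marshall-Palmer slope (mm^-1)
N = N0 * np.exp(-Lam * D)           # drop concentration per size bin
Z = np.sum(N * D**6) * 0.1          # integrate over the 0.1 mm bins
dBZ = 10.0 * np.log10(Z)
# Invert the empirical Z-R relation Z = 200 R^1.6 as a radar
# "retrieval" of rain rate from reflectivity alone.
R_retrieved = (Z / 200.0) ** (1.0 / 1.6)
print(f"Z = {dBZ:.1f} dBZ, retrieved R = {R_retrieved:.1f} mm/h")
```

The retrieved rain rate comes out close to, but not exactly equal to, the rate assumed when building the DSD, which is precisely why gauge-based validation of the Z-R conversion matters.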
This blog describes the work of Masters student Vasiliki Kouroupaki, carried out in collaboration with the UK Met Office.
In numerical weather prediction, nowcast, hindcast and forecast models can be improved through data assimilation. Data assimilation is the technique which combines observations with output from a previous short-range forecast (background) to produce an optimal estimate of the state of the atmosphere (analysis).
Radar reflectivity observations are assimilated by the Met Office in order to provide up-to-date information about rainfall in the initial conditions for UK weather forecasts. In assimilation, observations are assigned weights according to their error statistics. Depending on the kind of observation, there are different factors or processes which can result in errors. In order to have an optimal analysis these errors must be correctly specified. However, because the true errors are not known, their statistics need to be estimated. In this work, the uncertainties of radar reflectivity observations assimilated into the Met Office UKV model are examined using a diagnostic technique. Data come from the operational UKV model with hourly cycling 4D-Var, or from trial experiments cycling four times per day. The diagnostic is based on combinations of observation-minus-background, observation-minus-analysis and background-minus-analysis differences. The results show that observation error variances are higher for winter (1 Dec 2017 - 18 Jan 2018) than for summer (16 Jul - 16 Aug 2018) and that they increase for higher reflectivity values. Further investigations classified the data by beam elevation and by radar ID. These showed that for beam elevations between 0.5-1.0 and 3.0-4.0 degrees the error variance had greater values. Also, error statistics for different radars were positively correlated with the mean reflectivity observed by each radar.
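The diagnostic can be illustrated with a toy scalar assimilation (a sketch with synthetic numbers, not the UKV setup): when the analysis uses the optimal gain, the mean product of observation-minus-analysis and observation-minus-background differences recovers the observation error variance.

```python
import numpy as np

# Toy scalar illustration of the O-B / O-A diagnostic:
#   E[(y - x_a)(y - x_b)] ~ R  (observation error variance),
# provided the analysis gain uses the correct error statistics.
rng = np.random.default_rng(0)
n = 200_000
B, R = 1.0, 2.0                                     # background / obs error variances
truth = rng.standard_normal(n)                      # true states
x_b = truth + np.sqrt(B) * rng.standard_normal(n)   # backgrounds
y = truth + np.sqrt(R) * rng.standard_normal(n)     # observations
K = B / (B + R)                                     # optimal Kalman gain
x_a = x_b + K * (y - x_b)                           # analyses
R_est = np.mean((y - x_a) * (y - x_b))              # diagnostic estimate
print(f"true R = {R}, estimated R = {R_est:.2f}")
```

In the scalar case the identity follows directly: y - x_a = (1 - K)(y - x_b), and with the optimal gain (1 - K)(B + R) = R. Applied to real radar data, binning these products by reflectivity value, beam elevation or radar ID yields the variance estimates discussed above.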
Further investigation of observation error statistics in the assimilation could improve the initial conditions and thereby operational forecasts for convective rainfall events.
Most countries have a national weather service, funded by the government. A key role is to provide a public weather service, publishing forecasts to help make everyday business and personal decisions, as well as providing weather warnings for hazardous events. Large sums of money are invested in research and development of forecasting systems, in supercomputing resources and in observing networks. For example, in 2018-19 the UK Met Office invested £55 million in satellite programmes. International cooperation between weather services means that weather data obtained by one country are usually distributed to others through the World Meteorological Organisation (WMO) Global Telecommunication System (GTS) in near real time, and for the common good. Is this all about to change?
In the future there is likely to be an increasing need for smart, local forecasts for the safety of autonomous vehicles (e.g. to allow the vehicle to respond to rain, snow, ice etc). Such vehicles also provide an observing platform able to take local measurements of weather that could be used to improve forecasts. But who owns the data (the driver, the car owner, the car manufacturer…) and can it be distributed for the common good? Can the data be trusted? What about privacy concerns?
Across the observation sector, access to space is getting less expensive. For example, depending on the specifications, a nanosatellite can be built and placed in orbit for 0.5 million euros. Furthermore, industry is beginning to run its own numerical weather prediction models (e.g. IBM Weather). This means that there are a growing number of companies investing in Earth observation and numerical weather prediction, and wanting a financial return on their investments.
We have come across an illustrative video from a network of security cameras capturing a flash flood in Ellicott City, Maryland, US on Sunday 27 May 2018. The video is a collage of 12 cameras all located on or near Main Street in Ellicott City. The information from these videos would have been useful at the time of the flooding.
The videos clearly show how the Patapsco River and two of its four tributaries (the Tiber and Hudson rivers) rapidly swell and overflow. We see how this results in Main Street also becoming a fast-flowing river, with water washing away cars, destroying buildings, and accumulating debris. The flood lasted only four hours but caused catastrophic damage to local infrastructure and residents’ property, and claimed the life of a National Guardsman [1,4].
The cameras were installed by a local property owner, Ron Peters, and can be seen in this YouTube video:
The flooding event
A storm released nearly two months’ worth of rain, over 9 inches (23 cm), in just two hours (3 to 5 pm local time), which swept away several roads and cars and brought more than 10 feet (3.0 m) of rapidly moving water down Main Street in Old Ellicott City. The old city is a heavily urbanised area set in a valley next to the Patapsco River and its four tributaries. Due to the urban landscape the rainfall has nowhere to go except to run down the valley to the main river.
This was the second 1-in-1000 flood event within two years in Ellicott City. Both the 2016 and 2018 events claimed lives and caused millions of dollars in damage [1,4]. However, flooding is nothing new to this city. The city officials are looking into introducing green areas in the city to allow the rain water to be absorbed into the ground, reducing the surface runoff.
New flood alert system
Associate Professor Nirmalya Roy and his group from the University of Maryland, Baltimore County (UMBC) are using a network of temperature and liquid sensors and have produced a new flood warning system for Ellicott City [2,4]. They are also working on feeding local flood-related information from social media such as Twitter into the flood alert algorithm, which warns the public through loudspeakers in the city [3,4].
It is clear that the videos captured by these security cameras provide a rich source of water information about the rivers and Main Street. Information such as water levels and surface velocities can be extracted from these videos and used as part of an existing flood warning system or an independent one. Further, the videos also capture additional information on the flood and the damage caused that is valuable to rescue teams, insurance companies etc.
EUMETNET is a grouping of 31 European National Meteorological Services which provides a framework for collaboration between its members in the meteorological and hydrological fields. You can find out more about EUMETNET and its missions here.
On 12-13 March 2019 EUMETNET held a Crowd Sourcing Workshop at the Met Office in Exeter, UK, which a number of us from DARE attended remotely. The workshop was attended by a large number of the EUMETNET members, both in person and remotely. The topics covered included:
description of the Met Office observation network (WOW) and their move to a cloud based data system;
the existing crowd sourcing platforms across Europe (see the list below);
summaries of other recent crowd sourcing meetings/workshops;
data related issues such as data format, quality control, storage, data sources and legal issues.
From the various talks it was clear that there is a great opportunity for crowdsourced data to contribute to meteorological forecast accuracy, in particular to nowcasting and more timely warnings, since observations in the established crowd sourcing platforms have higher spatial resolution. However, the collected data types and their quality control varied greatly between the various crowdsourcing platforms discussed. For example, AEMET concentrates on collecting singular atmospheric observations which are characterised by being local, rare, of significant intensity and with the capacity to cause high social impact, while the Met Office Weather Observation Website (WOW) system accepts all types of observations from various types of sources.
A distinction was also made between crowdsourced data and data of opportunity, such as smartphone pressure measurements, car temperature data and aircraft data. This led to an important discussion on the collection of such data, which often requires collaboration with industry, e.g. car manufacturers, mobile network providers etc. Should companies release such opportunity data for mutual benefit through improved forecasts and warning systems? A number of companies already release their data; however, these are currently exceptions rather than the norm. There is no coherent agreement on how this would be achieved uniformly in practice, which remains an open question for debate.
List of various crowd sourcing platforms discussed at the workshop:
Last week, on 22-23 October 2018, around 230 scientists from the three ocean- and climate-related clusters of excellence in northern Germany met in Berlin for the joint conference on Ocean – Climate – Sustainability Research Frontiers. The participants brought lively discussions on scientific and societal action towards ocean and climate research. Apart from the discussions oriented more toward basic climate science and technical aspects, from a personal standpoint (perhaps because of its distance from my own work) I found most interesting a number of presentations from “The Future Ocean” cluster in Kiel, which includes scholars from politics, social science, philosophy and international law. Some of these presentations offered a window on the connection between climate change and global and local politics in the countries most affected by rising sea levels and coastal erosion (e.g. tropical islands in the Indian Ocean, which generally rely on external aid). Common to these talks was the message that the communication of climate and natural-risk science to society needs improving. Indeed, a huge component of the uncertainty in future climate projections comes from the societal component.
However, as analysed in one of the conference talks, it seems that, ultimately, public opinion is mostly driven by what is shown on TV, and the TV offering is in turn mostly driven by economic powers. Thus, as the writer Jose Luis Sampedro described more than six years ago, “public opinion” (defined in Wikipedia as consisting of the “desires, wants, and thinking of the majority of the people”) is in reality the “opinion of the media” or the “opinion of the economic powers”. This clearly connects to the results of the Brazilian elections just yesterday and the new presidency, and so to the resulting very uncertain future of Amazon management. Apart from the risks to biodiversity, further deforestation of the Amazon rainforest would make it impossible to cut carbon pollution and to meet the aspirational target of no more than 1.5°C global warming above pre-industrial temperatures set in the Paris climate agreement. Brazilian people (and they are not alone) seem either oblivious to the problem or convinced that they are not affected by it (indeed, from a friend’s personal communication last week, it appears that some people in Brazil sadly believe climate change is a European hoax to take control of their rainforest). Generally, rising sea levels and increased storm-surge risks, as well as the extra energy accumulated in the Earth system in general (and the ocean in particular, boosting atmospheric convection and the associated flood risks), will surely lead to a further demand for online, continuously updated risk information to face emergency situations in the future city. One can wish the best for Brazil and the Amazon, which is the best for the world. In any case, let’s hope that Copacabana is not swallowed by the sea before Rio is transformed into a resilient city.
Imagine a world where it is possible to accurately predict the weather, climate, storms, tsunamis and other computationally intensive problems in real time from your laptop or even mobile phone, or, given access to a supercomputer, to predict at unprecedented scale and detail. This is the long-term aim of our work on Data Assimilation with Machine Learning at the Data Science Institute (Imperial College London, UK) and, as such, we believe it will be a key component of future numerical forecasting systems.
We have shown that integrating machine learning with data assimilation can increase the reliability of prediction, reducing errors by including information with an actual physical meaning from observed data. The resulting cohesion of machine learning and data assimilation is then blended into a future generation of fast and more accurate predictive models. This integration is based on the idea of using machine learning to learn from the past experience of an assimilation process, following the principles of the Bayesian approach.
Edward Norton Lorenz stated that “small causes can have larger effects”, the so-called butterfly effect. Imagine a world where it is possible to catch “small causes” in real time and predict their effects in real time as well. To know, to act! A world where science works by continuously learning from observation.
Figure 1. Comparison of the Lorenz system trajectories obtained by the use of Data Assimilation (DA) and by the integration of machine learning with Data assimilation (DA+NN)
The DARE team went on a field trip last month! It was a well planned and executed trip, as you would expect from a group of mathematicians. It was also a very interesting trip for us, since most of us have only ever used data (e.g. for improving forecasts), not collected it. Even better, the Tewkesbury area has become a sort of benchmark for testing new data assimilation methods, ideas, tools, observations, etc., and so many of us have worked with the LisFlood numerical model (developed by a team led by Prof. Paul Bates at the University of Bristol) over the Tewkesbury domain. We have seen the rivers run in the model outputs, watched the rivers Avon and Severn go out of their banks in our plots, and investigated various SAR images of the area, but we have never been to the area itself. We generally do not need to visit an area when working with the models; however, now that there was a chance to do so, it was no surprise that many of us were keen to go. And we did go, like ‘the’ A-team:
However, we had a more important reason for visiting too: we were going to the Tewkesbury area to collect metadata from a number of river cameras located near Tewkesbury town. These river cameras are high-definition webcams owned and serviced by Farson Digital Ltd in various locations over the UK. We had recently discovered that six of these cameras are within the LisFlood model domain and had captured the November 2012 floods in the area. With permission from Farson Digital Ltd, we obtained hourly daylight images of the floods from 21 November 2012 to 5 December 2012. Hence, the aim of our trip was to obtain accurate (with errors of no more than a few centimetres) positional information (i.e. latitude, longitude, height) for the cameras themselves as well as for a number of markers in the images from each of the cameras. We need this information to extract water extents and water depths as accurately as possible from these images using image processing tools (which we are currently working on).
To take these measurements we borrowed some tools from the Department of Geography at the University of Reading. We used a differential GPS tool (GNSS) to measure very accurately (to the order of a few centimetres) the position of a given point in 3D space, that is, its latitude, longitude, and height above sea level. However, it had to be used on the ground (e.g. it could not measure remote or high points such as building corners where some cameras were mounted) and not too close to buildings or large trees. To measure remote and high points we used a Total Station, which allowed us to shoot a laser beam at the desired point to measure its 3D position in space.
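For illustration, reducing a total station shot (two angles plus a slope distance) to local 3D offsets from the instrument is simple trigonometry. The conventions and numbers below are assumed for the sketch, not taken from the instrument we used:

```python
import math

def total_station_offset(h_angle_deg, v_angle_deg, slope_dist_m):
    """Convert a total-station shot into local east/north/up offsets.
    Assumed conventions: horizontal angle measured clockwise from north,
    vertical angle measured upward from the horizontal."""
    h = math.radians(h_angle_deg)
    v = math.radians(v_angle_deg)
    horiz = slope_dist_m * math.cos(v)   # horizontal distance component
    d_up = slope_dist_m * math.sin(v)    # height difference
    d_east = horiz * math.sin(h)
    d_north = horiz * math.cos(h)
    return d_east, d_north, d_up

# e.g. a hypothetical camera bracket 40 m away, 30 deg east of north,
# 15 deg above level:
print(total_station_offset(30.0, 15.0, 40.0))
```

Adding these offsets to a GNSS-surveyed instrument position gives absolute coordinates for points the GNSS receiver cannot reach.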
We had planned to visit all six cameras within the space of two days, the 16th and 17th of April; however, despite our best plans and fantastic organisation skills we were too ambitious with our time and had to drop the camera furthest from our base, the Bewdley camera (see the map with camera positions in figure 2). Thus, on our first day, we took measurements at the Wyre Piddle, Evesham, and Diglis Lock cameras, spotting ourselves live on the Farson Digital Ltd site.
We returned to our base, the Tewkesbury Park Hotel, to be joined by the Ensemble team from Lancaster University. The Ensemble project is led by Prof. Gordon Blair and, like DARE, is funded by the EPSRC Senior Fellowship in Digital Technology for Living with Environmental Change. It was very interesting to meet the Ensemble project team and learn in more depth about their work, future interests, and the scope for collaboration.
On our second day, the DARE team visited the Tewkesbury camera while the Ensemble team learned more about the purpose of the data collection and the November 2012 floods in the area. Then we all jointly measured a large number of points at Strensham Lock. In 2012 we would all have been totally submerged in water in this picture, since the flood waters completely swallowed the island on which the house is standing, flooding the building along with it.
Our grand finale was a meeting with the director of Farson Digital Ltd, Glyn Howells, as well as a number of the stakeholders who have commissioned the cameras we visited. It was very interesting for us to learn how the network of river cameras was born from the need to know and understand the current state of the river for a variety of river users: fishermen, campers, boaters, etc. Also, how these cameras have become invaluable assets to many stakeholders for various reasons: greatly reducing the number of river-condition-related phone enquiries, monitoring river bank and bridge conditions, and so on.
Now, a month later, we have downloaded and processed the data we collected from these stations. In figure 7 we have plotted the data points we took at the Tewkesbury site, owned both by the Environment Agency and Tewkesbury Marina (both of which we greatly thank for their support and assistance before and during our trip, especially Steve Edgar from the EA and Simon Amos and Bruno from Tewkesbury Marina). In the figure, the red dots are the camera positions (pre-2016 and current), and the black dots are all the other measurements we took using both the Total Station and GNSS tools, plotted against the Environment Agency lidar data with 1 m horizontal resolution.
We are currently working on extracting the water extent from these images, which we will then use to produce water depth observations. Our final aim is to see how much forecast improvement such a rich source of observations offers, in particular before the rising limb of the flood.
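A minimal sketch of the idea on a synthetic 1D cross-section (the real processing, which we are still developing, works on 2D imagery and lidar): take the shoreline elevation of the observed water extent as the water surface level and subtract the lidar ground heights to get depths.

```python
import numpy as np

# Toy 1D river cross-section: derive water depth from an observed water
# extent (which cells are wet) plus lidar ground elevations. The water
# surface elevation is taken as the highest ground height among wet cells
# (the shoreline); depths follow by subtraction. All numbers are synthetic.
dem = np.array([12.0, 10.5, 9.0, 8.2, 8.0, 8.5, 9.5, 11.0])  # ground (m)
wet = np.array([0, 0, 1, 1, 1, 1, 1, 0], dtype=bool)          # from imagery
level = dem[wet].max()                  # shoreline elevation -> water level
depth = np.where(wet, level - dem, 0.0)
print(f"estimated water level: {level} m")
print("depths (m):", depth)
```

With a flat water surface this shoreline trick turns a binary extent into quantitative depths, which is what makes camera-derived extents usable as assimilation observations.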
We are very thankful to Glyn Howells and the various stakeholders for permitting us to use the images, allowing us to take the necessary measurements, assisting us on site, and joining us at the workshop!
The DARE team organised a workshop on data science for high impact weather and flood prediction, held by the river at the lovely University of Reading Greenlands Campus in Henley-on-Thames, 20-22 Nov 2017. The workshop objectives were to enable discussion and exchange of expertise at the boundary between digital technology, data science and environmental hazard modelling, including
Data assimilation and data science for flood forecasting and risk planning
Data assimilation and data science for high impact weather forecasting
Smart decision making using environmental data
The meeting was attended by over 30 participants from 5 different countries. We had some great presentations (to be made available on this webpage) and discussions. We came up with some recommendations to help promote and deliver research and business applications in the digital technology-environmental hazard area. We plan to write a meeting report detailing these recommendations, which we hope will be published in a peer-reviewed international journal. Watch this space!