Machine Learning for Landmark Detection in Biomedical Applications

I am Rémy Vandaele, a post-doc who joined the DARE project in January 2020. I hold a PhD in Computer Science from the University of Liège in Belgium, and my research focuses on machine learning for computer vision. In this blog post, I summarize the work of my PhD thesis, entitled "Machine Learning for Landmark Detection in Biomedical Applications".

Figure 1: 25 landmarks annotated on an image of a zebrafish larva

The starting point of my thesis was the development and comparison of automated landmark detection methods. In this work, a landmark corresponds to a specific anatomical location in the image of a body (cf. Fig. 1). It is defined by a type (tip of the nose, left corner of the left eye, …) and by its coordinates. Detecting a landmark in a new image means finding its coordinates, given its type. I focused on landmark detection for two biomedical applications:

– Morphometry: biologists need to annotate landmarks on high-resolution microscopy images of different types of bodies to understand the impact of their experiments (e.g. determining whether the injection of a drug has affected the size of a fish). Manual annotation is a long, tedious and error-prone process that limits the size of the experiments, so automation is sorely needed in this area.

– CT-CBCT registration: in radiotherapy, a treatment plan (beam positioning and dosage) for all the sessions to come is computed on an initial, high-resolution CT scan. At each session, the patient lies on the table and needs to be positioned in accordance with this initial CT scan. This is done by acquiring a CBCT scan, which is then registered to the CT scan using the deformation computed between corresponding, manually annotated landmarks (see Fig. 2). Landmark positioning on 3D CT scans is a slow process that could be sped up through automation.

Figure 2: How to register images using landmark detection

My work on this topic can be divided into three parts:

  1. I developed a 2D method based on binary image patch classification using a Random Forest [1] algorithm (see the sketch after this list). In this work [2], [3], [4], I showed (1) the relevance of multi-resolution patch descriptors for increasing the accuracy of the landmark detection algorithm, and (2) that the landmark search zone could be reduced, greatly increasing detection speed without losing accuracy. These algorithms were implemented in Cytomine [5], [6], an open-source client-server application for the analysis and sharing of large (bio)images.
  2. I extended this method to 3D and showed that it could be used for CT-to-CBCT image registration [7].
  3. I studied the impact of state-of-the-art post-processing methods [8] (methods that take the relative positions of the detected landmarks into account to refine their locations) in the context of biomedical applications, where the numbers of images and landmarks are significantly lower than in the face-detection applications where landmark detection is widely used. In this context, I showed that these methods brought little to no increase in detection accuracy, and I proposed a new post-processing method suited to landmark detection in biomedical applications.
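
To make part 1 concrete, here is a minimal sketch of the binary patch-classification idea, assuming scikit-learn and scikit-image: patches around the annotated landmark are positives, randomly sampled patches elsewhere are negatives, an extremely randomized trees classifier [1] is trained on multi-resolution patch descriptors, and detection scans a new image (or a restricted search zone) for the highest-scoring pixel. The window size, scales, sampling strategy and function names are illustrative choices, not the implementation of [2]-[4].

```python
# Minimal sketch (not the thesis implementation) of landmark detection by
# binary patch classification with Extremely Randomized Trees.
import numpy as np
from skimage.transform import rescale
from sklearn.ensemble import ExtraTreesClassifier

HALF = 8                    # half-width of the patch window (illustrative)
SCALES = [1.0, 0.5, 0.25]   # resolutions used for the multi-resolution descriptor

def patch_descriptor(image, x, y):
    """Concatenate fixed-size pixel patches extracted around (x, y) at several resolutions."""
    feats = []
    for s in SCALES:
        im = rescale(image, s, anti_aliasing=True)
        xs = int(np.clip(x * s, HALF, im.shape[1] - HALF))
        ys = int(np.clip(y * s, HALF, im.shape[0] - HALF))
        feats.append(im[ys - HALF:ys + HALF, xs - HALF:xs + HALF].ravel())
    return np.concatenate(feats)

def train_detector(images, landmarks, n_negatives=200, seed=0):
    """images: list of 2D grayscale arrays; landmarks: list of (x, y) positions of one landmark type."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for im, (lx, ly) in zip(images, landmarks):
        # positive patch at the landmark (in practice, positives are usually
        # sampled within a small radius around the annotation)
        X.append(patch_descriptor(im, lx, ly)); y.append(1)
        for _ in range(n_negatives):                 # random negative patches
            nx = rng.integers(HALF, im.shape[1] - HALF)
            ny = rng.integers(HALF, im.shape[0] - HALF)
            X.append(patch_descriptor(im, nx, ny)); y.append(0)
    return ExtraTreesClassifier(n_estimators=100, random_state=seed).fit(np.array(X), y)

def detect(classifier, image, step=4):
    """Scan the image (or a restricted search zone) and return the highest-scoring pixel."""
    best_score, best_xy = -1.0, None
    for yy in range(HALF, image.shape[0] - HALF, step):
        for xx in range(HALF, image.shape[1] - HALF, step):
            score = classifier.predict_proba([patch_descriptor(image, xx, yy)])[0, 1]
            if score > best_score:
                best_score, best_xy = score, (xx, yy)
    return best_xy
```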

Future work will consider deep learning classification and regression models in combination with data augmentation approaches. This research is currently being pursued at the University of Liège.

References

[1] Geurts, Pierre, et al.  “Extremely randomized trees.” Machine learning 63.1 (2006): 3-42.

[2] Vandaele, Rémy, et al. “Automatic cephalometric x-ray landmark detection challenge 2014: A tree-based algorithm.” ISBI (2014).

[3] Wang, Ching-Wei, et al. “Evaluation and comparison of anatomical landmark detection methods for cephalometric x-ray images: a grand challenge.” IEEE transactions on medical imaging 34.9 (2015): 1890-1900.

[4] Vandaele, Rémy, et al. “Landmark detection in 2D bioimages for geometric morphometrics: a multi-resolution tree-based approach.” Scientific reports 8.1 (2018): 1-13.

[5] Marée, Raphaël, et al. “Collaborative analysis of multi-gigapixel imaging data using Cytomine.” Bioinformatics 32.9 (2016): 1395-1401.

[6] Rubens, Ulysse, et al. “BIAFLOWS: A collaborative framework to benchmark bioimage analysis workflows.” bioRxiv (2019): 707489.

[7] Vandaele, Rémy, et al. “Automated landmark detection for rigid registration between the simulation CT and the treatment CBCT.” Acta Stereologica (2015).

[8] Vandaele, Rémy. "Machine Learning for Landmark Detection in Biomedical Applications." Diss. Université de Liège, Liège, Belgium, 2018.

Data-assimilation of crowdsourced weather stations for urban heat and water studies in Birmingham as a testbed

DARE pilot project with participants: Professor Lee Chapman, University of Birmingham; Sytse Koopmans and Gert-Jan Steeneveld, Wageningen University & Research (WUR)

The model results will be verified against ground-truth observations from the Birmingham Urban Climate Lab (BUCL) observational network.

The aims of this work task are:

  • Setting up a basic model run with WRF, with meteorological boundary conditions from ECMWF
  • Remapping land use from the CORINE land cover inventory (100 m) to the USGS classification used by WRF
  • Calculating land-use fractions from Landsat 8 satellite imagery
  • Deriving urban morphology indicators with the NUDAPT tool

For the project we completed the model infrastructure without data assimilation, and a basic run without assimilation was conducted accordingly. The main effort over the last year was to create a geographical dataset that supports the 200-metre resolution. Before applying data assimilation, it is important to describe the land use and urban characteristics as well as possible.

  • The land-use datasets available by default were too coarse for a 200 m run. We have remapped the 100 m CORINE land-use dataset to a format that can be read by WRF (a sketch of such a remapping is given after this list).
  • The urban morphology has been improved by applying the National Urban Database and Access Portal Tool (NUDAPT) (Ching et al., 2009*). With this step we have improved the representation of urban morphology in the WRF model compared with previous studies.
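
For illustration, the sketch below shows one way such a remapping could be done in Python: read the CORINE raster, translate grid codes to USGS categories with a lookup table, and aggregate the 100 m cells to the 200 m model grid by majority vote. The handful of class pairs shown and the aggregation rule are assumptions for demonstration; the project's full CORINE-to-USGS table and the WRF geogrid preparation are not reproduced here.

```python
# Illustrative sketch of remapping a CORINE land-cover raster (100 m) to USGS
# categories for a 200 m WRF grid. The partial class mapping is an assumption
# for demonstration only.
import numpy as np
import rasterio  # assumed available for reading the CORINE GeoTIFF

# CORINE grid code -> USGS 24-category code (examples only, not the full table)
CORINE_TO_USGS = {
    1: 1,    # Continuous urban fabric    -> Urban and built-up land
    2: 1,    # Discontinuous urban fabric -> Urban and built-up land
    12: 2,   # Non-irrigated arable land  -> Dryland cropland and pasture
    18: 7,   # Pastures                   -> Grassland
    23: 11,  # Broad-leaved forest        -> Deciduous broadleaf forest
    24: 14,  # Coniferous forest          -> Evergreen needleleaf forest
    41: 16,  # Water bodies               -> Water bodies
}

def remap_corine(path, fill_value=7):
    """Read a CORINE raster and return an array of USGS class codes."""
    with rasterio.open(path) as src:
        corine = src.read(1)
    usgs = np.full(corine.shape, fill_value, dtype=np.uint8)
    for c_code, u_code in CORINE_TO_USGS.items():
        usgs[corine == c_code] = u_code
    return usgs

def dominant_class(usgs, block=2):
    """Aggregate 100 m cells to the 200 m model grid by majority vote."""
    h = (usgs.shape[0] // block) * block
    w = (usgs.shape[1] // block) * block
    tiles = usgs[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(h // block, w // block, block * block)
    counts = np.apply_along_axis(np.bincount, 2, tiles, minlength=25)
    return counts.argmax(axis=2)  # most frequent USGS class per 200 m cell
```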

NUDAPT has been tested against more elementary representations in WRF in a study of a Chinese city, and its performance looks promising.

Image 1: Mean building height, Birmingham


Image 2: Land use fraction Birmingham

*Ching J, Brown M, McPherson T, Burian S, Chen F, Cionco R, Hanna A, Hultgren T, Sailor D, Taha H, Williams D. 2009. National Urban Database and Access Portal Tool, NUDAPT. Bulletin of the American Meteorological Society 90( 8): 1157– 1168.

A Toolkit for Community-Based Flood Monitoring

by Elizabeth Lewis, Geoff Parkin and Tessa Gough, Newcastle University

This DARE pilot project addresses the Digital Technology/Living with Environmental Change interface by supporting communities at risk of flooding in using emerging technology to play an active role in collecting and sharing data, aiding flood response and sensitising communities to climatic change through an enhanced understanding of their environment.

The project is engaging with potential citizen scientists to use Private Automated Weather Stations (PAWS) to gather rain data and share those data on online platforms, where they can be accessed by the public, forecasters, flood managers and researchers. Engagement activities are being run to promote citizen science participation and to better understand the motivations for, and barriers to, rain data collection.

Workshops have been hosted in Newcastle museums over the school holidays to share information on the risks of flash flooding, natural flood attenuation, checking rain data online, operating a weather station and interpreting rain gauge data. Games illustrating the key concepts were available, along with posters, demonstrations and discussion with participating climate scientists from Newcastle University. Over 800 people interacted with the games and demonstrations over 4 days. The cost of weather stations was cited as a reason for not collecting rain data, along with not knowing it was possible. The workshops raised awareness among people who had not previously considered participating. Participants were directed to a website (https://research.ncl.ac.uk/cspaws/) and support email address that we have developed for this project to support citizen scientists with any issues around installing a weather station and contributing their data to the Met Office Weather Observations Website (https://wow.metoffice.gov.uk/).

A presentation was given to 42 teachers in the North East to encourage schools to get involved with the project. A workshop will be held with 25 children aged 9-13 at a local school to look in detail at how weather stations work and at the impacts of wild weather globally and locally. Further workshops with the schools are planned.

Images in order from left to right: Promotional Poster; Elizabeth Lewis at a workshop in a museum in Newcastle; A game at the workshops

Merging SAR-derived flood footprints with flood hazard maps for improved urban flood mapping

Contributors: David Mason (University of Reading), John Bevington (JBA), Sarah Dance (University of Reading), Beatriz Revilla-Romero (JBA), Richard Smith (JBA), Sanita Vetra-Carvalho (University of Reading), Hannah Cloke (University of Reading).

This DARE pilot project is investigating a method of improving the accuracy of rapid post-event flood mapping in urban areas by merging pre-computed flood return period (FRP) maps with satellite synthetic aperture radar (SAR)-derived flood inundation maps. SAR sensors have the potential to detect flooding through cloud during both day- and night-time. The inputs are JBA’s Flood Foresight dynamic flood inundation extent and depth maps (updated every 3 hours) and a high-resolution SAR image sequence. The SAR returns are used only in rural areas, including those adjacent to the urban areas, so that there is no need to take the radar shadow and layover caused by buildings in urban areas into account. Also, rural SAR water-level observations should be able to correct errors in the model water elevations, because the JBA model assumes that all flooding is fluvial. On the other hand, it is an advantage to use the model’s FRP maps in urban areas, because these capture where low-lying urban areas are protected from flooding.

The project developed a method for detecting flooding in urban areas by merging near real-time SAR flood extents with model-derived FRP maps. The SAR flood detection is based on the fact that water generally appears dark in a SAR image. Urban areas that are protected (e.g. by embankments) have high return periods in the FRP maps, and their effective heights are correspondingly increased. The SAR water levels found in the rural areas are interpolated over the urban areas to take into account the fall-off of levels down the reach. The model waterline heights are similarly interpolated. The interpolated height maps from SAR and model are combined into a single map, which is used to estimate whether an urban pixel is flooded (a simplified sketch of this final step is given below). The method was tested on urban flooding in West London in February 2014 (see image 3) and in Tewkesbury in July 2007. It was compared with a previously developed method that used SAR returns in both the rural and urban areas. The present method, using SAR returns solely in rural areas, gave an average flood detection accuracy of 94% and a false positive rate of 9% in the urban areas, and was more accurate than the previous method. A journal paper is in preparation.
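
Below is a minimal, illustrative sketch of that final merging step under simplifying assumptions: an urban pixel is flagged as flooded if a weighted combination of the interpolated SAR and model water-surface heights exceeds the ground elevation, raised by a fixed amount where the FRP map marks the pixel as protected. The array names, the fixed raise and the simple weighting are assumptions for illustration, not the published algorithm.

```python
# Illustrative merging step: flag urban pixels as flooded by comparing a
# combined water-surface height with an "effective" ground height that is
# raised where the FRP map indicates protection (e.g. embankments).
import numpy as np

def flag_flooded_urban(dem, sar_wl, model_wl, protected, urban,
                       protection_raise=2.0, w_sar=0.5):
    """
    dem        : ground elevation per pixel (m)
    sar_wl     : SAR water levels interpolated from rural areas into the urban area (m)
    model_wl   : model (Flood Foresight) waterline heights interpolated likewise (m)
    protected  : boolean mask of pixels with high return period in the FRP map
    urban      : boolean mask of urban pixels
    protection_raise : assumed raise of the effective height for protected pixels (m)
    w_sar      : assumed weight given to the SAR-derived heights in the merge
    """
    effective = dem + np.where(protected, protection_raise, 0.0)
    water = w_sar * sar_wl + (1.0 - w_sar) * model_wl
    return urban & (water > effective)

# toy usage on a 3x3 patch of pixels
dem = np.array([[10.0, 10.2, 10.4]] * 3)
sar = np.full((3, 3), 10.3)
mod = np.full((3, 3), 10.5)
prot = np.zeros((3, 3), bool)
urb = np.ones((3, 3), bool)
print(flag_flooded_urban(dem, sar, mod, prot, urb))
```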


Images: Urban flooding in West London in February 2014

Controlling and mitigating urban flooding with DA

by Prof Onno Bokhove and Tom Kent (PDRA), University of Leeds
(The University of Leeds is a collaborator on the DARE project).

Motivated by the Boxing Day 2015 floods in Yorkshire (involving the Aire and Calder Rivers), we aim (i) to explore strategies of dynamic flood control and mitigation, and (ii) to assess and communicate flood-mitigation schemes in a concise and straightforward manner in order to assist decision-making for policy makers and inform the general public. To achieve our objectives, we are developing idealised observing system simulation experiments (OSSEs) using novel numerical models based on the Wetropolis flood demonstrator. Wetropolis is a physical model that provides a scientific testing environment for flood modelling, control and mitigation, and data assimilation, and has inspired numerous discussions with flood practitioners, policy makers and the public. Such discussions led us to revisit and refine a procedure that offers both a complementary diagnostic for classifying flood events (from gauge data and/or simulations) and a protocol to optimise the assessment of mitigation schemes via comprehensible cost-effectiveness analyses.

We have developed a protocol that revisits the concept of flood-excess volume (FEV). It is often difficult to grasp how much water is responsible for the damage caused by an extreme flood event, and how much of this floodwater can be mitigated by certain mitigation measures. Our protocol not only quantifies the magnitude of a flood but also establishes the cost-effectiveness of a suite of ‘grey’ engineering-based measures and ‘green’ nature-based solutions. Using river-level gauge data and mitigation schemes from the UK and French rivers, we demonstrate objectively the effectiveness of measures that can help stakeholders make decisions based on both technical and environmental criteria. The protocol should form a preliminary analysis, to be conducted prior to more detailed hydraulic modelling studies. In collaboration with colleagues from Univ. Grenoble, our work has been published in an international journal and further disseminated at numerous meetings and conferences. To date, it has contributed to the EU-funded NAIAD project through our colleagues in France and we are exploring future impact studies internationally. In our recently submitted article, a basic numerical model of Wetropolis is used to determine the relevant time and length scales prior to its construction as a physical model. We are developing the hydrodynamic modelling further, both mathematically and numerically, in order to conduct idealised experiments in flood control and mitigation.

Image: ‘FEV concept’

Presentations

  • Bokhove, O., Kelmanson, M. A., Kent, T., Piton, G., & Tacnet, J. M.: Using flood-excess volume to assess and communicate flood-mitigation schemes. EGU general assembly, Vienna, April 2019 (oral). Available online.
  • Bokhove, O., Kent, T., de Poot, H., & Zweers, W.: Wetropolis: models for education and water-management of floods and droughts. EGU general assembly, Vienna, April 2019 (poster). Available online.
  • Kent, T., Cantarello, L., Inverarity, G., Tobias, S.M., Bokhove, O. (2019): Idealized forecast-assimilation experiments and their relevance for convective-scale Numerical Weather Prediction. EGU general assembly, Vienna, April 2019 (oral). Available online.
  • Bokhove, O., Kelmanson, M. A., Kent, T., Piton, G., & Tacnet, J. M.: Public empowerment in flood mitigation, Flood & Coast conference, Telford, June 2019 (oral).
  • Bokhove participated in the ‘Landscape decisions’ program at the Isaac Newton Institute, Cambridge (July/August 2019). Web: https://www.newton.ac.uk/event/ebc

Investigation of the ability of the renewed UK operational weather radar network to provide accurate real-time rainfall estimates for improved flood warnings.

by Dr Rob Thompson and Prof Anthony Illingworth, Dept of Meteorology, University of Reading

The UK operational radar network has the potential to deliver real-time rainfall estimates every five minutes at a resolution of 1 km² over most of the populated areas of the UK. If these rain rates were accurate, such data would have a major impact on the ability to predict short-term ‘flash’ flooding events so that mitigating action can be taken. However, at present the accuracy is deemed insufficient for flood warnings, so the radar rainfall estimates from each radar are continuously adjusted using recent observations from ground-based rain gauges.

The UK radar network has recently been renewed and upgraded to dual polarisation, resulting in much improved data quality. In this study we will compare the radar signal obtained every five minutes from the operational Dean Hill radar with the rain rate at the ground some 20 km from the radar and just 400 m below the radar beam, and use this to validate and improve the retrieval algorithms that convert the radar return signal, or ‘reflectivity’, into a rain rate at the ground.

In the preparatory work for this DARE pilot project we have been comparing the radar reflectivity observed every five minutes by the scanning Dean Hill radar, 20 km distant from Chilbolton, where we have five different high-resolution rain gauges. The radar pulse samples a volume of 300 m by 300 m by 600 m at a height of 400 m above the gauges (see image). One of these gauges measures the size distribution of the rain drops, so once we know the sizes and number of the drops we can calculate the radar reflectivity we would expect the radar to observe (a sketch of this calculation is given below). Over the past two years we find close agreement, but there appears to be a slow drift in the radar calibration of about 60%. In collaboration with the Met Office we are trying to find the source of this drift. However, once we correct for this drift, we find that with the new radar and its improved data quality there is a close correspondence between the rain estimated from the radar and that observed at the ground. This performance appears to be much better than was obtained before the radars were upgraded.
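
The sketch below illustrates the calculation referred to above: the reflectivity factor Z expected from a measured drop size distribution (the sum of N(D) D^6 over the drop diameters), and a conversion from Z to rain rate using the classic Marshall-Palmer relation Z = 200 R^1.6. The exponential drop size distribution used here is invented for illustration, and the operational dual-polarisation retrieval is considerably more sophisticated.

```python
# Sketch of the consistency check: reflectivity expected from a disdrometer
# drop size distribution, and rain rate from the Marshall-Palmer Z-R relation.
import numpy as np

def reflectivity_from_dsd(diam_mm, n_per_mm_per_m3, d_diam_mm):
    """Z = sum N(D) D^6 dD, in mm^6 m^-3; returns (Z, dBZ)."""
    z = np.sum(n_per_mm_per_m3 * diam_mm**6 * d_diam_mm)
    return z, 10.0 * np.log10(z)

def rain_rate_marshall_palmer(z):
    """Invert Z = 200 R^1.6 to get the rain rate R in mm/h."""
    return (z / 200.0) ** (1.0 / 1.6)

# illustrative exponential DSD: N(D) = N0 exp(-Lambda D), values assumed
d = np.arange(0.1, 6.0, 0.1)          # drop diameters (mm)
n = 8000.0 * np.exp(-2.3 * d)         # concentration (mm^-1 m^-3)
z, dbz = reflectivity_from_dsd(d, n, 0.1)
print(f"Z = {z:.0f} mm^6 m^-3 ({dbz:.1f} dBZ), R = {rain_rate_marshall_palmer(z):.1f} mm/h")
```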


Image: Multiple rain gauges

Operational meteorology for the public good, or for profit?

Most countries have a national weather service, funded by the government. A key role is to provide a public weather service, publishing forecasts to help with everyday business and personal decisions, as well as providing weather warnings for hazardous events. Large sums of money are invested in research and development of forecasting systems, in supercomputing resources and in observing networks. For example, in 2018-19 the UK Met Office invested £55 million in satellite programmes. International cooperation between weather services means that weather data obtained by one country are usually distributed to others through the World Meteorological Organisation (WMO) Global Telecommunication System (GTS) in near real time, and for the common good. Is this all about to change?

In the future there is likely to be an increasing need for smart, local forecasts for the safety of autonomous vehicles (e.g. to allow the vehicle to respond to rain, snow, ice, etc.). Such vehicles also provide an observing platform able to take local measurements of weather that could be used to improve forecasts. But who owns the data (the driver, the car owner, the car manufacturer, …) and can they be distributed for the common good? Can the data be trusted? What about privacy concerns?

IBM weather infographic

Across the observation sector, access to space is getting less expensive. For example, depending on the specifications, a nanosatellite can be built and placed in orbit for 0.5 million euros. Furthermore, industry is beginning to run its own numerical weather prediction models (e.g., IBM weather). This means that there is a growing number of companies investing in Earth observation and numerical weather prediction, and wanting financial returns on their investments.

Do we need a new paradigm for weather prediction?

Machine learning and data assimilation

by Rossella Arcucci

Imagine a world where it is possible to accurately predict the weather, climate, storms, tsunamis and other computationally intensive problems in real time from your laptop or even mobile phone – and, with access to a supercomputer, to predict at unprecedented scale and detail. This is the long-term aim of our work on data assimilation with machine learning at the Data Science Institute (Imperial College London, UK), and we believe it will be a key component of future numerical forecasting systems.

We have shown that integrating machine learning with data assimilation can increase the reliability of prediction, reducing errors by including physically meaningful information from observed data. The resulting combination of machine learning and data assimilation can then be blended into a future generation of fast and more accurate predictive models. The integration is based on the idea of using machine learning to learn from the past experience of an assimilation process, following the principles of the Bayesian approach.

Edward Norton Lorenz stated that “small causes can have larger effects”, the so-called butterfly effect. Imagine a world where it is possible to catch those “small causes” in real time and to predict their effects in real time as well. To know, in order to act! A world where science continuously learns from observation.
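
As an illustration of the sensitivity Lorenz was describing (and of the system behind Figure 1), the sketch below integrates the Lorenz-63 equations with the standard parameters and shows two trajectories, started from nearly identical states, drifting apart. It is only a minimal demonstration assuming NumPy and SciPy, not the DA+NN scheme itself.

```python
# Minimal Lorenz-63 sketch with standard parameters (sigma=10, rho=28, beta=8/3).
# Two runs from almost identical initial conditions diverge -- the "small causes,
# larger effects" that data assimilation keeps correcting with observations.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span, t_eval = (0.0, 20.0), np.linspace(0.0, 20.0, 2000)
truth = solve_ivp(lorenz63, t_span, [1.0, 1.0, 1.0], t_eval=t_eval)
perturbed = solve_ivp(lorenz63, t_span, [1.0 + 1e-6, 1.0, 1.0], t_eval=t_eval)

# the trajectories separate despite a 1e-6 difference in the initial state
gap = np.linalg.norm(truth.y - perturbed.y, axis=0)
print(f"separation after 20 time units: {gap[-1]:.2f}")
```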

Figure 1. Comparison of the Lorenz system trajectories obtained by the use of Data Assimilation (DA) and by the integration of machine learning with Data assimilation (DA+NN)

Using ‘flood-excess volume’ to quantify and communicate flood mitigation schemes

by Tom Kent

  1. Background

Urban flooding is a major hazard worldwide, brought about primarily by intense rainfall and exacerbated by the built environment we live in. Leeds and Yorkshire are no strangers when it comes to the devastation wreaked by such events. The last decade alone has seen frequent flooding across the region, from the Calder Valley to the city of York, while the Boxing Day floods in 2015 inundated central Leeds with unprecedented river levels recorded along the Aire Valley. The River Aire originates in the Yorkshire Dales and flows roughly eastwards through Leeds before merging with the Ouse and Humber rivers and finally flowing into the North Sea. The Boxing Day flood resulted from record rainfall in the Aire catchment upstream of Leeds. To make matters worse, near-record rainfall in November meant that the catchment was severely saturated and prone to flooding in the event of more heavy rainfall. The ‘Leeds City Region flood review’ [1] subsequently reported the scale of the damage: “Over 4,000 homes and almost 2,000 businesses were flooded with the economic cost to the City Region being over half a billion pounds, and the subsequent rise in river levels allowed little time for communities to prepare.”

The Boxing Day floods and the lack of public awareness around the science of flooding led to the idea and development of the flood-demonstrator ‘Wetropolis’ (see Onno Bokhove’s previous DARE blog post). Wetropolis is a tabletop model of an idealised catchment that illustrates how extreme hydroclimatic events can cause a city to flood due to peaks in groundwater and river levels following random intense rainfall, and in doing so conceptualises the science of flooding in a way that is accessible to and directly engages the public. It also provides a scientific testing environment for flood modelling, control and mitigation, and data assimilation, and has inspired numerous discussions with flood practitioners and policy makers.

These discussions led us in turn to reconsider and analyse river flow data as a basis for assessing and quantifying flood events and various potential and proposed flood-mitigation measures. Such measures are generally engineering-based (e.g., storage reservoirs, defence walls) or nature-based (e.g., tree planting and peat restoration, ‘leaky’ woody-debris dams); a suite of these different measures constitutes a catchment- or city-wide flood-mitigation scheme. We aim to communicate this analysis and resulting flood-mitigation assessment in a concise and straightforward manner in order to assist decision-making for policy makers (e.g., city councils and the Environment Agency) and inform the general public.

  2. River data analysis and ‘flood-excess volume’

Rivers in the UK are monitored by a dense network of gauges that measure and record the river level (also known as water stage/depth) – typically every 15 minutes – at the gauge location. There are approximately 1500 gauging stations in total, and the flow data are collated by the Environment Agency and freely available to download. Shoothill’s GaugeMap website (http://www.gaugemap.co.uk/) provides an excellent tool for visualising these data in real time and browsing historic data in a user-friendly manner. Flood events are often characterised by their peak water level, i.e. the maximum water depth reached during the flood, and by statistical return periods. However, the flood peak conveys neither the duration nor the volume of the flood, and the meaning of return period is often difficult to grasp for non-specialists. Here, we analyse river-level data from the Armley gauge station – located 2km upstream from Leeds city centre – and demonstrate the concept of ‘flood-excess volume’ as an alternative diagnostic for flood events.

The bottom-left panel of Figure 1 (it may help to tilt your head left!) shows the river level (h, in metres) as a function of time in days around Boxing Day 2015. The flood peaked at 5.21m overnight on the 26th/27th December, rising over 4m in just over 24 hours. Another quantity of interest in hydrology is the discharge (Q), or flow rate, the volume of water passing a location per second. This is usually not measured directly but can be determined via a rating curve, a site-specific empirical function Q = Q(h) that relates the water level to discharge. Each gauge station has its own rating curve which is documented and updated by the Environment Agency. The rating curve for Armley is plotted here in the top-left panel (solid curve) with the dashed line denoting its linear approximation; the shaded area represents the estimated error in the relationship, which is expected to grow considerably when in flood (i.e., for high values of h). Applying the rating curve to the river level data yields the discharge time series (top-right panel, called a hydrograph) for Armley. Note that the rating curve error means that the discharge time series has some uncertainty (grey shaded zone around the solid curve). We see that the peak discharge is 330-360m3/s, around 300m3/s higher than 24 hours previously. Since discharge is the volume of water per second, the area under the discharge curve is the total volume of water. To define the flood-excess volume, we introduce a threshold height hT above which flooding occurs. For this flood event, local knowledge and photographic evidence suggested that flooding commenced when river levels exceeded 3.9m, so here we choose the threshold hT = 3.9m. This is marked as a vertical dotted line on the left panels: following it up to the rating curve, one obtains a threshold discharge QT = Q(hT) = 219.1m3/s (horizontal dotted line). The flood-excess volume (FEV) is the blue shaded area between the discharge curve and the threshold discharge QT. Put simply, this is the volume of water that caused flooding, and therefore the volume of flood water one seeks to mitigate (i.e., reduce to zero) by the cumulative effect of various flood mitigation measures. The FEV, here around 9.34 million cubic metres, has a corresponding flood duration Tf = 32 hours, which is the time between the river level first exceeding hT and subsequently dropping below hT.  The rectangle represents the mean approximation to the FEV, which, in the absence of frequent flow data, can be used to estimate the FEV (blue shaded area) based on a mean water level (hm) and discharge (Qm).
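
The calculation described above is simple enough to sketch in a few lines. The snippet below is a minimal illustration rather than the analysis of [2]: it computes FEV and flood duration from a 15-minute river-level series using a linear rating curve. The coefficients are chosen only so that Q(3.9 m) ≈ 219 m3/s, roughly matching the threshold discharge quoted for Armley, and the level series itself is synthetic.

```python
# Sketch of the FEV calculation for a 15-minute river-level series. The linear
# rating curve below is a stand-in for the Environment Agency's site-specific
# curve Q = Q(h); the threshold h_T = 3.9 m follows the text.
import numpy as np

def rating_curve(h, a=90.0, b=-132.0):
    """Illustrative linear approximation Q = a*h + b (m3/s); gives Q(3.9) = 219 m3/s."""
    return a * np.asarray(h) + b

def flood_excess_volume(levels_m, h_threshold=3.9, dt_s=900.0):
    """FEV (m3) and flood duration (h) from levels sampled every dt_s seconds."""
    q = rating_curve(levels_m)
    q_t = rating_curve(h_threshold)
    excess = np.clip(q - q_t, 0.0, None)        # discharge above the threshold
    fev = np.sum(excess) * dt_s                 # area under the excess-flow curve
    duration_h = np.count_nonzero(excess) * dt_s / 3600.0
    return fev, duration_h

# synthetic example: a flood wave peaking above the 3.9 m threshold
t = np.arange(0, 3 * 24 * 4)                    # three days of 15-minute samples
levels = 1.0 + 4.3 * np.exp(-((t - 150) / 60.0) ** 2)
fev, dur = flood_excess_volume(levels)
print(f"FEV = {fev / 1e6:.2f} million m3 over {dur:.0f} h above the 3.9 m threshold")
```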

  3. Using FEV in flood-mitigation assessment

Having defined FEV in this way, we are motivated by the following questions: (i) how can we articulate FEV (which is often many million cubic metres) in a more comprehensible manner? And (ii) what fraction of the FEV is reduced, and at what cost, by a particular flood-mitigation measure? Our simple yet powerful idea is to express the FEV as a 2-metre-deep square ‘flood-excess lake’ with side-length on the order of a kilometre. For example, we can break down the FEV for Armley as follows: 9.34 Mm3 ≈ (2150 x 2150 x 2) m3, which is a 2-metre-deep lake with side-length 2.15 km. This is immediately easier to visualise and goes some way to conveying the magnitude of the flood. Since the depth is shallow relative to the side-length, we can view this ‘flood-excess lake’ from above as a square and ask what fraction of this lake is accounted for by the potential storage capacity of flood-mitigation measures. The result is a graphical tool that (i) contextualises the magnitude of the flood relative to the river and its valley/catchment and (ii) facilitates quick and direct assessment of the contribution and value of various mitigation measures.

Figure 2 shows the Armley FEV as a 2m-deep ‘flood-excess lake’ (not to scale). Given the size of the lake as well as the geography of the river valley concerned, one can begin to make a ballpark estimate of the contribution and effectiveness of flood-plain enhancement for flood storage and other flood-mitigation measures. Superimposed on the bird’s-eye view of the lake in figure 3 are two scenarios from our hypothetical Leeds Flood Alleviation Scheme II (FASII+) that comprise: (S1) building flood walls and using a flood-water storage site at Calverley; and (S2) building (lower) flood walls and using a flood-water storage site at Rodley.

The available flood-storage volume is estimated to be 0.75Mm3 and 1.1Mm3 at Calverley and Rodley respectively, corresponding to 8% and 12% of the FEV. The absolute cost of each measure is incorporated, as well as the value (i.e., cost per 1% of FEV mitigated), while the overall contribution in terms of volume is simply the fraction of the lake covered by each measure. It is immediately evident that both schemes provide 100% mitigation and that (S1) provides better value (£0.75M/1% against £0.762M/1%). We can also see that although storage sites offer less value than building flood walls, a larger storage site allows lower flood walls to be built which may be an important factor for planning departments. In this case, although (S2) is more expensive overall, the Rodley storage site (£1.17M/1%) is better value than Calverley storage site (£1.25M/1%) and means that flood walls are lower. It is then up to policy-makers to make the best decision based on all the available evidence and inevitable constraints. Our hypothetical FASII+ comprises 5 scenarios in total and is reported in [2].
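
The ‘value’ figures quoted above are simply the cost of a measure divided by the percentage of the FEV it mitigates. The short sketch below reproduces that arithmetic for the two storage sites using the storage volumes from the text; the absolute costs are assumptions for illustration only, not figures from the hypothetical FASII+ study in [2].

```python
# Sketch of the cost-effectiveness ("value") metric: cost per 1% of the
# flood-excess volume mitigated by a measure.
def scheme_value(cost_million, storage_mm3, fev_mm3=9.34):
    percent = 100.0 * storage_mm3 / fev_mm3      # share of the FEV mitigated
    return percent, cost_million / percent        # (%, £M per 1% of FEV)

for name, cost, vol in [("Calverley storage", 10.0, 0.75),   # cost assumed
                        ("Rodley storage", 14.0, 1.10)]:      # cost assumed
    pct, value = scheme_value(cost, vol)
    print(f"{name}: {pct:.0f}% of FEV, £{value:.2f}M per 1%")
```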

The details are in some sense of secondary importance here; the take-home message is that the FEV analysis offers a protocol to optimise the assessment of mitigation schemes, including cost-effectiveness, in a comprehensible way. In particular, the graphical presentation of the FEV as partitioned flood-excess lakes facilitates quick and direct interpretation of competing schemes and scenarios, and in doing so communicates clearly the evidence needed to make rational and important decisions. Finally, we stress that FEV should be used either prior to or in tandem with more detailed hydrodynamic numerical modelling; nonetheless it offers a complementary way of classifying flood events and enables evidence-based decision-making for flood-mitigation assessment. For more information, including case studies in the UK and France, see [2,3,4]; summarised in [5].

References:

[1] West Yorkshire Combined Authority (2016). Leeds City Region flood review report. December 2016. https://www.the-lep.com/media/2276/leeds-city-region-flood-review-report-final.pdf

[2] O. Bokhove, M. Kelmanson, T. Kent (2018a): On using flood-excess volume in flood mitigation, exemplified for the River Aire Boxing Day Flood of 2015. Subm. evidence-synthesis article: Proc. Roy. Soc. A. See also: https://eartharxiv.org/stc7r/

[3] O. Bokhove, M. Kelmanson, T. Kent, G. Piton, J.-M. Tacnet (2018b): Communicating nature-based solutions using flood-excess volume for three UK and French river floods. In prep. See also the preliminary version on: https://eartharxiv.org/87z6w/

[4] O. Bokhove, M. Kelmanson, T. Kent (2018c): Using flood-excess volume in flood mitigation to show that upscaling beaver dams for protection against extreme floods proves unrealistic. Subm. evidence-synthesis article: Proc. Roy. Soc. A. See also: https://eartharxiv.org/w9evx/

[5] ‘Using flood-excess volume to assess and communicate flood-mitigation schemes’, poster presentation for ‘Evidence-based decisions for UK Landscapes’, 17-18 September 2018, INI, Cambridge. Available here: http://www1.maths.leeds.ac.uk/~amttk/files/INI_sept2018.pdf

Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography

by Fabio L. R. Diniz (fabio.diniz@inpe.br)

I attended the Workshop on Sensitivity Analysis and Data Assimilation in Meteorology and Oceanography, also known as the Adjoint Workshop, which took place in Aveiro, Portugal, between 1 and 6 July 2018. This opportunity was given to me thanks to funding for early career researchers from the Engineering and Physical Sciences Research Council (EPSRC) Data Assimilation for the Resilient City (DARE) project in the UK. All recipients of this funding who were attending the workshop for the first time were invited to a pre-workshop day of tutorials presenting the fundamentals of sensitivity analysis and data assimilation, geared towards early career researchers. I would like to thank the EPSRC DARE award committee and the organizers of the Adjoint Workshop for finding me worthy of this award.

Currently I am a postgraduate student at the Brazilian National Institute for Space Research (INPE), and as part of my PhD I have been visiting the Global Modeling and Assimilation Office (GMAO) of the US National Aeronautics and Space Administration (NASA) for almost one year, comparing two approaches to obtaining what is known as the observation impact measure. This measure is a direct application of sensitivity in data assimilation and is, in essence, a measure of how much each observation helps to improve the short-range forecasts (a toy illustration is sketched below). In meteorology, specifically in numerical weather prediction, these observations come from the global observing system, which includes in situ observations (e.g., radiosondes and surface stations) and remotely sensed observations (e.g., satellite sensors). During my visit I have been working under the supervision of Ricardo Todling from NASA/GMAO, comparing results from two strategies for assessing the impact of observations on forecasts using the data assimilation system available at NASA/GMAO: one based on the traditional adjoint technique, the other based on ensembles. Preliminary results from this comparison were presented during the Adjoint Workshop.
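
As a toy illustration (not the adjoint- or ensemble-based machinery used at NASA/GMAO), the snippet below shows the observation-impact idea for a single scalar state and a linear forecast model: the impact of an observation is the change in a forecast-error measure between a forecast started from the background and one started from the analysis that assimilated the observation. All values and the simple weighting are made up.

```python
# Toy "observation impact": how much did assimilating an observation reduce
# the short-range forecast error? Scalar state, linear forecast model m*x.
def observation_impact(x_background, y_obs, obs_weight, x_truth_forecast, m=0.9):
    # analysis after assimilating the observation
    x_analysis = x_background + obs_weight * (y_obs - x_background)
    # short-range forecasts (one application of the linear model) from both states
    f_background = m * x_background
    f_analysis = m * x_analysis
    # squared forecast errors against the verifying "truth"
    e_background = (f_background - x_truth_forecast) ** 2
    e_analysis = (f_analysis - x_truth_forecast) ** 2
    return e_analysis - e_background   # negative = the observation helped

# the observation pulls the state toward the truth, so its impact is negative
print(observation_impact(x_background=1.5, y_obs=1.1, obs_weight=0.5,
                         x_truth_forecast=0.9))
```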

The Adjoint Workshop provided a perfect environment for early career researchers to interact with experts in the field from all around the world. Attending the workshop helped me engage in healthy discussions about my work and about data assimilation in general. The full programme, with abstracts and presentations, is available at the workshop web site: https://www.morgan.edu/adjoint_workshop

Thanks to everyone who contributed to this workshop.