Presentation Details 

Benning, Martin 
Gradient descent in a Bregman distance framework 

We discuss a special form of gradient descent that has become known in the literature as the linearised Bregman iteration. The idea is to replace the classical (squared) two-norm metric in the gradient descent setting with a generalised Bregman distance, based on a more general proper, convex and lower semicontinuous functional. Gradient descent as well as the entropic mirror descent by Beck and Teboulle are special cases, as is a specific form of nonlinear Landweber iteration introduced by Bachmayr and Burger. We analyse the linearised Bregman iteration in a setting where the functional we want to minimise is not necessarily Lipschitz continuous nor convex, and establish a global convergence result under the additional assumption that the functional satisfies a generalisation of the so-called Kurdyka-Łojasiewicz property. (This is joint work with Marta M. Betcke, Matthias J. Ehrhardt and Carola-Bibiane Schönlieb.) 
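For readers unfamiliar with the iteration, a minimal sketch of the entropic special case mentioned above: for the negative-entropy functional on the probability simplex, the linearised Bregman iteration reduces to entropic mirror descent. The objective and all parameter values here are illustrative, not taken from the talk.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, tau, n_iter):
    # Linearised Bregman iteration with J(x) = sum_i x_i log x_i
    # (negative entropy) on the probability simplex; this special case
    # is the entropic mirror descent of Beck and Teboulle.
    x = x0.copy()
    for _ in range(n_iter):
        x = x * np.exp(-tau * grad(x))  # multiplicative (Bregman) step
        x = x / x.sum()                 # renormalise onto the simplex
    return x

# Minimise the linear functional E(x) = <c, x> over the simplex:
# the iterates concentrate on the coordinate with the smallest cost.
c = np.array([0.3, 0.1, 0.5])
x = entropic_mirror_descent(lambda x: c, np.full(3, 1.0 / 3.0),
                            tau=1.0, n_iter=200)
```

With the squared two-norm in place of the entropy, the same scheme collapses to classical gradient descent.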
Bodnar, Olha 
Objective Bayesian inference with applications to generalized marginal random effects model 

In many applications Bayes' theorem is employed using priors that are intended to represent the absence of prior knowledge. The selection of such priors has long been researched, and several different principles have been suggested. The reference prior of Berger and Bernardo may be viewed as the currently favored one. For single-parameter problems it maximizes the expected Kullback-Leibler divergence between posterior and prior, and thus selects the prior such that it is least informative in a specified sense. We derive a non-informative prior by sequential maximization of Shannon's mutual information in the multi-group parameter case, assuming reasonable regularity conditions. It is shown that the derived prior coincides with the reference prior proposed by Berger and Bernardo, and that it can be considered a useful alternative expression for the calculation of the reference prior. We further give an explicit expression for the reference prior for the generalized marginal random effects model and prove propriety of the resulting posterior. The frequentist properties of the proposed inference are investigated through simulations, and its robustness is studied when the underlying distributional assumptions are violated. Finally, we apply the model to the adjustment of current measurements of the Planck constant and to several data sets taken from medical science. 
Briol, Francois-Xavier 
A probabilistic numerics approach to Monte Carlo integration 

The recent surge in data available to scientists has led to an increase in the complexity of mathematical models, rendering them much more computationally expensive to evaluate. This is a particular challenge for inference and prediction, since these tasks will often require the numerical computation of integrals which will tend to be slow. Indeed, standard numerical integration methods rely on evaluating integrands (in this case the expensive mathematical models) many times, in order to approximate solutions up to a very small numerical error.
This talk will introduce an emerging area of research called Probabilistic Numerics, which aims at tackling these problems by taking a statistical approach to numerical analysis, and as such provides numerical methods for which numerical error can be controlled in a probabilistic way. We will study a probabilistic integration method called Bayesian Quadrature, which provides solutions in the form of probability distributions (rather than point estimates) whose structure can provide additional insight into the uncertainty emanating from the finite number of integrand evaluations. We will then demonstrate convergence and contraction rates in the number of integrand evaluations. Finally, we will compare Bayesian Quadrature with Monte Carlo and Quasi-Monte Carlo methods and illustrate its performance on an application in computer graphics. 
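A minimal Bayesian Quadrature sketch, under an assumed Brownian-motion kernel chosen purely because its kernel integrals are available in closed form; the kernels and applications in the talk may well differ. The posterior variance is exactly the probabilistic error quantification described above.

```python
import numpy as np

def bayesian_quadrature(f, nodes):
    """Bayesian Quadrature for the integral of f over [0, 1] under a
    zero-mean Gaussian process prior with Brownian-motion kernel
    k(x, y) = min(x, y). Returns the posterior mean and variance of
    the integral."""
    X = np.asarray(nodes, dtype=float)
    K = np.minimum.outer(X, X)          # Gram matrix k(x_i, x_j)
    z = X - X**2 / 2                    # kernel means: integral of min(x, x_i)
    w = np.linalg.solve(K, z)           # quadrature weights
    mean = w @ f(X)                     # posterior mean of the integral
    var = 1.0 / 3.0 - z @ w             # posterior variance (prior var is 1/3)
    return mean, var

# Integrate f(x) = x^2 (true value 1/3) from 10 evaluations; the
# posterior variance quantifies the error due to finite evaluations.
mean, var = bayesian_quadrature(lambda x: x**2, np.linspace(0.1, 1.0, 10))
```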
Chada, Neil 
Reduced basis method: computational speedup of inverse problems? 

Uncertainty quantification (UQ) has developed into a new, diverse and multidisciplinary field of mathematics. The general goal is to ensure a better understanding of models by tackling the uncertainty that is associated with them. This uncertainty can arise in various ways, such as boundary and initial conditions, noise within the system and the geometry of the domain. One class of UQ methods that has emerged in the 21st century is reduced order models (ROMs). The main motivation behind ROMs is to account for the increasing computational cost of high-dimensional problems.
This talk will focus on a particular ROM, the reduced basis method. There has been a great deal of recent research on this method, although mainly related to forward problems. The talk will look at applying this method to inverse problems, where we are interested in the recovery of the input, in a Bayesian setting. We propose a new algorithm that tackles the computational burden of inverse problems. 
Chretien, Stephane 
Optimal design for quadratic observations 

When quadratic measurements are taken, the standard theory of optimal design cannot be applied. In this presentation, we explore the semidefinite relaxation of the quadratic measurement problem and deduce a new E-optimality-like criterion based on the smallest conic eigenvalue of the covariance matrix of the estimated quantity. We apply this new approach to optimal sensor placement in power grids. 
Christie, Mike 
Bayesian hierarchical models for measurement error 

The detailed geological description of oil reservoirs is always uncertain because of the large size and relatively small number of wells from which hard data can be obtained. To handle this uncertainty, reservoir models are calibrated or 'history matched' to production data (oil rates, pressures etc). The quality of any reservoir forecast depends not only on the quality of the match, but also on how well understood the measurement errors are (or indeed the split between measurement and modelling errors).
This talk will look at hierarchical models for estimating measurement and modelling errors in reservoir model calibration, and compare maximum likelihood estimates of measurement errors with marginalisation over unknown errors. 
Davies, Russell 
Determining the relaxation spectra of polymers from oscillatory shear measurements 

The relaxation spectrum of a viscoelastic material holds the key to describing its relaxation mechanisms at a molecular level. The relaxation spectrum cannot be measured directly, but it may be locally determined from experimental measurements of viscoelastic response at a macroscopic level. Although mathematical expressions for the continuous spectrum have been known for over a century, these were inaccessible to numerical implementation for decades, since they involve inverse operators which are not continuous, resulting in severe instability. In this talk I present a method of wavelet regularization for recovering continuous relaxation spectra in a mathematically rigorous framework. The method relies on representing inverse convolution operators as a sequence of differential operators, which also has relevance to the deconvolution of experimental measurements in a more general context than polymer characterization. 
Demeyer, Séverine 
Surrogate model based estimation of probabilities of failure of computationally expensive systems 

The use of computational codes has become common practice when physical experiments are not feasible or when too few are feasible. The statistical modelling of numerical experiments with kriging models yields a probabilistic decision framework to assess the probability of failure of the system. Combining fast low-fidelity simulations with costly high-fidelity simulations has proved an efficient method to decrease the burden of costly simulations when predicting the output of a system. In addition, sequential design is commonly used to estimate the probability of failure of a system modelled by kriging. In this work, a methodology is derived to benefit from sequential design in a multi-fidelity framework to predict the probability of failure of a computationally expensive system and its uncertainty. The methodology is applied to a fire safety engineering case study to assess the probability of non-conformity of a smoke control system from complex numerical fire tools. 
Estep, Donald 
A new approach to stochastic inverse problems for scientific inference 

The stochastic inverse problem of determining parameter values in a physics model from observational data on the output of the model forms the core of scientific inference and engineering design. We describe a recently developed formulation and solution method for stochastic inverse problems that is based on measure theory and a generalization of a contour map. In addition to a complete analytic and numerical theory, advantages of this approach include avoiding the introduction of ad hoc statistics models, unverifiable assumptions, and alterations of the model like regularization. We present a high-dimensional application to the determination of parameter fields in storm surge models. We conclude with recent work on defining a notion of condition for stochastic inverse problems and its use in designing sets of optimal observable quantities. 
Freitag, Melina 
Balanced truncation model order reduction for stochastically controlled linear systems 

When solving linear stochastic differential equations numerically, a high-order spatial discretisation is usually used.
Balanced truncation (BT) is a well-known projection technique in the deterministic framework which reduces the order of a control system and hence reduces computational complexity. In this talk we give an introduction to model order reduction by balanced truncation and then consider a differential equation where the control is replaced by a noise term. We provide theoretical tools, such as stochastic concepts of reachability and observability, which are necessary for balancing-related model order reduction of linear stochastic differential equations with additive Lévy noise. Moreover, we derive error bounds for BT and provide numerical results for a specific example which support the theory. This is joint work with Martin Redmann (WIAS Berlin). 
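As background, a sketch of deterministic balanced truncation via Gramians and Hankel singular values; the test system is made up, and the stochastic/Lévy-noise setting of the talk generalises the Gramian concepts used here.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation: compute the reachability and
    observability Gramians, balance via the SVD of the Cholesky-factor
    product, and keep the r states with the largest Hankel singular
    values."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # A P + P A^T = -B B^T
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # A^T Q + Q A = -C^T C
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                     # s: Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    W = Lq @ U[:, :r] @ S                         # left projection
    T = Lp @ Vt.T[:, :r] @ S                      # right projection
    return W.T @ A @ T, W.T @ B, C @ T, s

# Stable 4-state single-input single-output system, reduced to order 2;
# the two fast, weakly coupled states are truncated.
A = np.diag([-1.0, -2.0, -50.0, -100.0])
B = np.array([[1.0], [1.0], [0.1], [0.1]])
C = np.array([[1.0, 1.0, 0.1, 0.1]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
```

The classical error bound says the transfer-function error is at most twice the sum of the discarded Hankel singular values, which is why truncating small values is safe.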
Harris, Peter 
Solving large structured nonlinear least-squares problems, with an application in earth observation 

Harmonisation involves the recalibration of sensors on satellites in orbit using data recorded when pairs of sensors are observing the same part of the Earth at the same time. Harmonisation is essential to creating records of climate variables, such as sea surface temperature and forest leaf area, that are consistent over long periods (perhaps decades or longer), and consequently allow reliable decisions to be made about whether such variables are changing. We describe the background to the measurement problem, and its mathematical formulation as a large structured nonlinear leastsquares problem. The mathematical problem has a number of characteristics that make it challenging to solve: (a) it involves large numbers of measured data and parameters to be estimated (perhaps tens of millions or greater), (b) the measured data can be correlated, and (c) it is required to provide sensor calibrations with associated uncertainty information. We describe our attempts to solve the harmonisation problem accounting for those characteristics. 
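A toy illustration, entirely hypothetical and far smaller and simpler than the real harmonisation problem: pairwise sensor match-ups give a Jacobian with only two nonzeros per row, so an iterative solver such as LSMR can handle the linearised least-squares subproblem (the core step of a Gauss-Newton iteration) without ever forming the normal equations.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsmr

# Hypothetical match-up structure: estimate per-sensor calibration
# offsets o from noisy pairwise differences d_k = o[i_k] - o[j_k] + noise.
rng = np.random.default_rng(0)
n_sensors, n_pairs = 50, 400
true = rng.normal(size=n_sensors)
i = rng.integers(0, n_sensors, n_pairs)
j = rng.integers(0, n_sensors, n_pairs)
d = true[i] - true[j] + 0.01 * rng.normal(size=n_pairs)

# Sparse Jacobian: +1 in column i_k and -1 in column j_k of row k.
rows = np.concatenate([np.arange(n_pairs), np.arange(n_pairs)])
cols = np.concatenate([i, j])
vals = np.concatenate([np.ones(n_pairs), -np.ones(n_pairs)])
J = coo_matrix((vals, (rows, cols)), shape=(n_pairs, n_sensors)).tocsr()

# Offsets are only determined up to an additive constant; a tiny damping
# term selects the minimum-norm solution. In a nonlinear problem this
# solve would sit inside a Gauss-Newton loop.
o = lsmr(J, d, damp=1e-6)[0]
```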
Higham, Des 
Monte Carlo efficiency 

I will analyse and compare the computational complexity of different simulation strategies for continuous time Markov chains. I consider the task of approximating the expected value of some functional of the state of the system over a compact time interval. This task is a bottleneck in many large-scale computations. In this context, the terms 'Gillespie's method', 'The Stochastic Simulation Algorithm' and 'The Next Reaction Method' are widely used to describe exact simulation methods. For example, Google Scholar records more than 6,000 citations to Gillespie's seminal 1977 paper. I will look at the use of standard Monte Carlo when samples are produced by exact simulation and by approximation with tau-leaping or an Euler-Maruyama discretization of a diffusion approximation. In particular, I will point out some possible pitfalls when computational complexity is analysed. Appropriate modifications of recently proposed multilevel Monte Carlo algorithms will then be studied for the tau-leaping and Euler-Maruyama approaches. I will pay particular attention to a parameterization of the problem that, in the mass action chemical kinetics setting, corresponds to the classical system size scaling. 
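A minimal implementation of Gillespie's direct method for a birth-death process, with reaction rates chosen arbitrarily for illustration. Each exact sample costs one loop iteration per reaction event, which is exactly the expense that tau-leaping and multilevel Monte Carlo aim to reduce.

```python
import numpy as np

def gillespie(x0, stoich, rates, t_end, rng):
    """Gillespie's direct method: exact simulation of a continuous time
    Markov chain by sampling exponential waiting times and reaction
    indices from the propensities."""
    t, x = 0.0, x0
    while True:
        a = np.array([r(x) for r in rates])      # propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)           # time to next reaction
        if t > t_end:
            return x
        k = rng.choice(len(rates), p=a / a0)     # which reaction fires
        x += stoich[k]

# Birth-death process: 0 -> X at rate k1, X -> 0 at rate k2 * x, whose
# stationary distribution is Poisson with mean k1 / k2 = 10.
rng = np.random.default_rng(1)
k1, k2 = 10.0, 1.0
samples = [gillespie(0, [+1, -1], [lambda x: k1, lambda x: k2 * x],
                     t_end=10.0, rng=rng) for _ in range(400)]
mean = np.mean(samples)
```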
Husmeier, Dirk 
Parameter inference and model selection in a partial differential equation model of cell migration, applied to high-resolution microscopy data 

Collective cell movement is a key component of many important biological processes, including wound healing, the immune response and the spread of cancers. To understand and influence these movements, we need to be able to identify and quantify the contribution of their different underlying mechanisms. Here, we define a set of six candidate models, formulated as advection-diffusion-reaction partial differential equations, that incorporate a range of cell movement drivers. We fitted these models to movement assay data from two different cell types: Dictyostelium discoideum and human melanoma. Model comparison using the widely applicable information criterion (WAIC) suggested that movement in both of our study systems was driven primarily by a self-generated gradient in the concentration of a depletable chemical in the cells' environment. For melanoma, there was also evidence that overcrowding influenced movement. These applications of model inference to determine the most likely drivers of cell movement indicate that such statistical techniques have potential to support targeted experimental work in increasing our understanding of collective cell movement in a range of systems. This is joint work with Elaine Ferguson, Jason Matthiopoulos and Robert Insall. 
Livina, Valerie 
Tipping point analysis of ocean acoustic noise 

We study time series using tipping point analysis techniques for the anticipation, detection and forecasting of tipping points in a dynamical system. The methodology combines degenerate fingerprinting and potential analysis. It has been extensively tested on artificial data and on various geophysical, ecological and industrial sensor datasets [2-10], and has proved to be applicable to trajectories of dynamical systems of arbitrary origin [11].
We apply tipping point analysis to acoustic data extracted from the portal of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The Cape Leeuwin hydrophone data form a long record (2003-2015) of 250 Hz sampling of sound pressure (3 Tb of binary waveforms, 35 Tb of extracted signal, 96G points in time series). We obtain 10-minute averages of sound pressure level in five frequency bands and analyse the components of the data: trends, system potential, seasonality and fluctuations. We propose a stochastic model approximating the system dynamics and discuss possible upcoming tipping in the 21st century [12].
References: [1] Livina and Lenton, GRL 2007; [2] Livina et al, CoP 2010; [3] Livina et al, CD 2011; [4] Livina et al, PhysA 2012; [5] Livina and Lenton, Cryosphere 2012; [6] Livina et al, PhysA 2013; [7] Livina et al, JCSHM 2014; [8] Kefi et al, PLoS ONE 2014; [9] Livina et al, Chaos 2015; [10] Perry et al, SMS 2016; [11] Vaz Martins et al, PRE 2010; [12] Livina et al, in preparation. 
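A sketch of the degenerate-fingerprinting idea on a synthetic AR(1) series whose memory increases over time; the window length and parameters are arbitrary, and the real analysis pipeline is considerably more involved.

```python
import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

def degenerate_fingerprint(series, window):
    """Sliding-window lag-1 autocorrelation: the indicator rises
    towards 1 as a system slows down on the approach to a tipping
    point (critical slowing down)."""
    return np.array([lag1_autocorr(series[k:k + window])
                     for k in range(len(series) - window)])

# Synthetic AR(1) series whose memory parameter phi grows over time,
# mimicking the loss of stability before a transition.
rng = np.random.default_rng(0)
n = 4000
phi = np.linspace(0.2, 0.95, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()
indicator = degenerate_fingerprint(x, window=500)
```

An upward trend in the indicator, as recovered here, is the early-warning signal that the methodology looks for in real sensor records.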
Lloyd, David 
Data assimilation for selfexcitation processes 

Applications often involve spatiotemporal counting data governed by a background process and a 'self-excitation' process. Examples include Twitter data, burglary crime and mobile phone usage. In this talk, we will concentrate on the burglary example and illustrate how one can model the data and carry out the data assimilation. We will draw conclusions for other applications and outline a broad mathematical framework for data assimilation of counting data. 
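A standard model combining a background rate with self-excitation is the Hawkes process; below is a minimal simulation by Ogata's thinning algorithm. The parameters are illustrative, and the modelling choices in the talk may differ.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_end, rng):
    """Ogata thinning for a Hawkes process with intensity
    lam(t) = mu + sum_i alpha * exp(-beta * (t - t_i)): each event
    temporarily raises the rate of future events (self-excitation)."""
    events = []
    t, lam_bar = 0.0, mu
    while True:
        t += rng.exponential(1.0 / lam_bar)     # candidate event time
        if t > t_end:
            return np.array(events)
        past = np.array(events)
        lam_t = mu + alpha * np.exp(-beta * (t - past)).sum()
        if rng.uniform() * lam_bar <= lam_t:    # accept w.p. lam_t / lam_bar
            events.append(t)
            lam_bar = lam_t + alpha             # intensity jumps at an event
        else:
            lam_bar = lam_t                     # decayed bound for next draw

rng = np.random.default_rng(2)
mu, alpha, beta = 0.5, 0.8, 2.0                 # branching ratio 0.4 < 1
events = simulate_hawkes(mu, alpha, beta, t_end=2000.0, rng=rng)
rate = len(events) / 2000.0                     # theory: mu / (1 - alpha/beta)
```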
Pestana, Jennifer 
Null space preconditioners for saddle point problems 

Linear systems with saddle point structure arise throughout constrained optimisation. When the system is large and sparse it is typically solved by an iterative method. However, these methods are only efficient if they find a good approximation to the solution of the linear system in a few iterations.
In many saddle point problems, fast convergence occurs only after preconditioning, i.e. after transforming the linear system to an equivalent one with better properties. Here, we present a family of null-space preconditioners for saddle point problems and analyse their effectiveness for constrained optimisation problems. 
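For concreteness, a small dense sketch of the null-space method that such preconditioners build on: eliminate the constraint block, then solve a smaller reduced system on the null space of the constraints. The example system is random, and in practice these factors would never be formed explicitly.

```python
import numpy as np
from scipy.linalg import null_space, lstsq

def null_space_solve(A, B, f, g):
    """Null-space method for the saddle point system
        [A  B^T] [x]   [f]
        [B   0 ] [y] = [g].
    Split x = xp + Z xn with B xp = g and Z a basis of ker(B), then
    solve the reduced system Z^T A Z xn = Z^T (f - A xp)."""
    Z = null_space(B)                      # orthonormal basis of ker(B)
    xp = lstsq(B, g)[0]                    # a particular solution of B x = g
    xn = np.linalg.solve(Z.T @ A @ Z, Z.T @ (f - A @ xp))
    x = xp + Z @ xn
    y = lstsq(B.T, f - A @ x)[0]           # recover the Lagrange multipliers
    return x, y

rng = np.random.default_rng(3)
n, m = 8, 3
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)                # symmetric positive definite block
B = rng.normal(size=(m, n))                # full-rank constraint block
f, g = rng.normal(size=n), rng.normal(size=m)
x, y = null_space_solve(A, B, f, g)
```

When A is positive definite on ker(B), the reduced matrix is symmetric positive definite even though the full saddle point matrix is indefinite, which is what makes this splitting attractive as a preconditioning strategy.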
Possolo, Antonio 
Plurality of type A evaluations of measurement uncertainty 

Type A evaluations of measurement uncertainty are those that involve the application of statistical methods to experimental data. This presentation will illustrate the plurality of alternative methods that may be used for this purpose, and that produce different evaluations when they are applied to the same data, depending on the assumptions that seem warranted. The examples involve measurements of the mass fraction of vanadium in a reference material, of a halocarbon in air, and of the age of a meteorite. 
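A small illustration of this plurality, with hypothetical readings: a classical GUM Type A evaluation and a Bayesian evaluation under a normal model with a Jeffreys prior give different standard uncertainties from the same data.

```python
import numpy as np

# Hypothetical repeated indications of the same measurand.
x = np.array([10.03, 10.01, 10.05, 9.98, 10.02, 10.04])
n = len(x)
s = x.std(ddof=1)                            # sample standard deviation

# GUM Type A evaluation: experimental standard deviation of the mean.
u_gum = s / np.sqrt(n)

# Bayesian alternative (normal model, Jeffreys prior): the posterior for
# the measurand is a scaled-and-shifted Student t with n - 1 degrees of
# freedom, whose standard deviation exceeds the GUM value.
u_bayes = u_gum * np.sqrt((n - 1) / (n - 3))
```

Both evaluations are defensible; they simply encode different assumptions, which is the point of the talk's title.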
Ramage, Alison 
A multilevel preconditioner for data assimilation with 4DVar 

Large-scale variational data assimilation problems are commonly found in applications like numerical weather prediction and oceanographic modelling. The 4DVar method is frequently used to calculate a forecast model trajectory that best fits the available observations to within the observational error over a period of time. One key challenge is that the state vectors used in realistic applications could contain billions or trillions of unknowns so, due to memory limitations, in practice it is often impossible to assemble, store or manipulate the matrices involved explicitly. In this talk we present a limited memory approximation to the Hessian of the linearised quadratic minimisation subproblems, computed using the Lanczos method, based on a multilevel approach. We then use this approximation as a preconditioner within 4DVar and show that it can reduce memory requirements and increase computational efficiency. This is joint work with Kirsty Brown (University of Strathclyde) and Igor Gejadze (IRSTEA, Montpellier). 
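A rough sketch of the ingredients only, not the authors' multilevel algorithm: a Lanczos-type eigensolver extracts leading Hessian eigenpairs from matrix-vector products alone, and these form a limited-memory preconditioner. The Hessian here is a made-up identity-plus-low-rank stand-in for a 4DVar-like quadratic subproblem.

```python
import numpy as np
from scipy.sparse.linalg import eigsh, LinearOperator, cg

n = 200
rng = np.random.default_rng(4)
# Hypothetical Hessian: identity (background term) plus a low-rank
# observation term, so only a few eigenvalues are large.
U = np.linalg.qr(rng.normal(size=(n, 10)))[0]
H = np.eye(n) + U @ np.diag(np.linspace(100.0, 2.0, 10)) @ U.T

# eigsh (Lanczos under the hood) needs only matrix-vector products,
# so the Hessian never has to be assembled in a real application.
Hop = LinearOperator((n, n), matvec=lambda v: H @ v, dtype=float)
lam, V = eigsh(Hop, k=10, which='LM')

# Limited-memory preconditioner: invert the captured leading modes,
# act as the identity on the rest.
Pinv = LinearOperator(
    (n, n), matvec=lambda v: v + V @ ((1.0 / lam - 1.0) * (V.T @ v)),
    dtype=float)

b = rng.normal(size=n)
x, info = cg(Hop, b, M=Pinv)   # preconditioned conjugate gradients
```

Storing only the k vectors in V (rather than the full Hessian) is the sense in which the preconditioner is limited-memory.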
Roulstone, Ian 
Data assimilation and modelling the carbon cycle 

We describe a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALEC2. Ecological and dynamical constraints are employed to constrain unresolved components of an otherwise ill-posed problem. Using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. 
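A minimal example of a resolution matrix for a Tikhonov-regularised linear inverse problem; the forward operator and parameters are invented for illustration and are unrelated to DALEC2.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
# Invented smoothing forward operator: a Gaussian kernel matrix that
# strongly damps high-frequency components of the parameter field.
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.1**2))
lam = 1e-3                                      # Tikhonov parameter

# Model resolution matrix: the regularised estimate satisfies
# x_hat = R x_true (plus noise terms), so entries of diag(R) near 1
# mark well-resolved parameters and entries near 0 mark parameters the
# data cannot constrain, diagnosing the ill-posedness.
S = A.T @ A + lam * np.eye(n)
R = np.linalg.solve(S, A.T @ A)

x_true = rng.normal(size=n)
x_hat = np.linalg.solve(S, A.T @ (A @ x_true))  # noise-free estimate
```

The trace of R gives an effective number of resolved parameters, a compact summary of how much of the model the data actually inform.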
Scott, Marian 
Measurement in archaeological dating 

Proficiency trial design and the analysis of the results have played an important role in the quality assurance of radiocarbon dating. I will provide a brief overview of some of the issues, including homogeneity testing of samples, choice of blanks, and assessment of laboratory bias and precision. 
Vernon, Ian 
Bayesian computer model analysis of Robust Bayesian analyses 

Bayesian methodology is now widely employed across many scientific areas where measured data are combined with scientific expertise in one coherent structure. However, due to the complexity of the Bayesian models now employed, and due to the computational expense of evaluating them using suitable numerical schemes (e.g. MCMC), our ability to perform even basic robustness or sensitivity analyses with respect to the many choices and assumptions made when constructing the analysis has been greatly diminished.
We harness the power of Bayesian emulation techniques, designed to aid the analysis of complex computer models, to examine the structure of complex Bayesian analyses themselves. These techniques facilitate robust Bayesian analyses and/or sensitivity analyses of complex problems, and hence allow global exploration of the impacts of choices made in both the likelihood and prior specification. We show how previously intractable problems in robustness studies can be overcome using emulation techniques, and how these methods allow other scientists to quickly extract approximations to posterior results corresponding to their own particular subjective specification. The utility and flexibility of our method is demonstrated on a reanalysis of a real application where Bayesian methods were employed to capture beliefs about river flow. We discuss the obvious extensions of such an approach. 
Wright, Louise 
Mathematics for measurement and design of advanced materials 

Advances in materials processing techniques have led to development of material systems designed to have particularly desirable properties. Examples include coated systems for thermal protection of expensive components in turbines, fibre-reinforced composites that combine stiffness in one loading direction with flexibility in another, and graphene-reinforced composites with enhanced electrical conductivity.
The design of these material systems requires an understanding of how the size, shape, properties and dispersion of the various phases of the system affect its effective properties. This understanding requires a mathematical model to predict the effective properties of the system, and measurement of the individual properties of the system components.
In many cases the properties of individual components are difficult to obtain directly. Most techniques for measurement of material properties rely on the assumption that the material being measured is uniform and solid, but it can be difficult to obtain isolated samples of a suitable size of (for instance) all of the materials in a sprayed coating system.
This talk will discuss the development and assessment of models for the effective properties of materials systems, and the use of optimisation and uncertainty evaluation to obtain the properties of the components of materials systems. (Co-authored by Neil McCartney and Davin Lunz.) 
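As the simplest illustration of predicting effective properties from the components' individual properties and volume fractions (not one of the models of the talk), the classical Voigt and Reuss bounds for a two-phase composite; the material values below are illustrative.

```python
def voigt_reuss(E1, E2, f1):
    """Voigt (equal-strain, upper) and Reuss (equal-stress, lower)
    bounds on the effective Young's modulus of a two-phase composite
    with phase moduli E1, E2 and volume fraction f1 of phase 1."""
    f2 = 1.0 - f1
    E_voigt = f1 * E1 + f2 * E2            # arithmetic (parallel) mean
    E_reuss = 1.0 / (f1 / E1 + f2 / E2)    # harmonic (series) mean
    return E_voigt, E_reuss

# Illustrative values: glass fibre (~72 GPa) in epoxy (~3.5 GPa)
# at 60% fibre volume fraction.
Ev, Er = voigt_reuss(72.0, 3.5, 0.6)
```

Any admissible effective modulus lies between the two bounds; the wide gap between them for stiff fibres in a compliant matrix is one reason more detailed models, like those discussed in the talk, are needed.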