Events Calendar

Bayesian Computation for High-Dimensional Statistical Models - IMS
A/Prof Matti Vihola, Prof Joaquin Miguez and Dr Tiangang Cui
University of Jyvaskyla, Universidad Carlos III de Madrid and Monash University respectively
 
Wednesday 05 September 2018, 02:00pm - 05:00pm
S16-06-118, Seminar Room
 
Talk 1:      2pm to 2.50pm

Speaker:   Assoc Prof Matti Vihola, University of Jyvaskyla, Finland

Title:        Unbiased Estimators and Multilevel Monte Carlo

Multilevel Monte Carlo (MLMC) and recently proposed debiasing schemes are closely related methods which can be applied in scenarios where exact simulation methods are difficult to implement, but biased estimators are easily available. An important example of such a scenario is the inference with continuous-time diffusion processes, where the process is difficult to simulate exactly but time-discretized approximations are available. I will present a new general class of unbiased estimators which admits earlier debiasing schemes as special cases, and new lower variance estimators which behave asymptotically like MLMC, both in terms of variance and cost, under general conditions. This suggests that bias can often be eliminated entirely with arbitrarily small extra cost.
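The debiasing idea above can be made concrete with a minimal sketch of a single-term randomized-level estimator (in the style of Rhee and Glynn) for a geometric Brownian motion discretized by Euler–Maruyama. The model, level distribution, and all parameters are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def euler_gbm(T, n_steps, x0, mu, sigma, dW):
    # Euler-Maruyama for dX = mu*X dt + sigma*X dW with given Brownian increments.
    dt = T / n_steps
    return x0 * np.prod(1.0 + mu * dt + sigma * dW)

def single_term_estimator(rng, T=1.0, x0=1.0, mu=0.05, sigma=0.2, p=0.4):
    # Draw a random level N with P(N = n) = p * (1 - p)^n; level n uses 2^(n+1) steps.
    N = rng.geometric(p) - 1
    prob = p * (1.0 - p) ** N
    n_fine = 2 ** (N + 1)
    dW_fine = rng.normal(0.0, np.sqrt(T / n_fine), n_fine)
    f_fine = euler_gbm(T, n_fine, x0, mu, sigma, dW_fine)
    if N == 0:
        delta = f_fine                      # baseline term (coarsest approximation)
    else:
        # The coarse path reuses the same Brownian increments: this coupling
        # makes the level differences small, as in MLMC.
        dW_coarse = dW_fine.reshape(-1, 2).sum(axis=1)
        delta = f_fine - euler_gbm(T, n_fine // 2, x0, mu, sigma, dW_coarse)
    # Importance-weighting by the level probability telescopes the bias away:
    # E[delta / prob] = lim_n E[f_n(X_T)], the exact expectation.
    return delta / prob

rng = np.random.default_rng(0)
est = np.mean([single_term_estimator(rng) for _ in range(5000)])
# est estimates E[X_T] = x0 * exp(mu * T), with no discretization bias
```

Note the tension the abstract alludes to: the level distribution must decay slowly enough for finite variance but fast enough for finite expected cost, which is exactly the regime where the newer estimators improve on earlier debiasing schemes.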

 
Talk 2:      2.50pm to 3.40pm

Speaker:   Prof Joaquin Miguez, Universidad Carlos III de Madrid, Spain

Title:        Stable Approximation Schemes for Optimal Filters

We explore truncation schemes for the approximation of (possibly unstable) optimal filters. In particular, let S = (π0, κt, gt) be a state space model defined by a prior distribution π0, Markov kernels {κt} and potential functions {gt}, t ≥ 1, and let {Ct} be a sequence of compact subsets of the state space. In the first part of the talk, we describe a simple and systematic procedure to construct a system S̄ = (π0, κ̄t, ḡt), with potentials truncated within the sets {Ct}, such that the optimal filters generated by S and S̄ can be made arbitrarily close, with approximation errors independent of time t. Then, in the second part, we investigate the stability of the approximate optimal filters. Specifically, given a system S with a prescribed prior π0, we seek sufficient conditions to guarantee that the truncated system S̄ (with the same prior π0) generates a sequence of optimal filters which are stable and, at the same time, can attain arbitrarily small approximation errors.
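To make the truncation idea concrete, here is a minimal sketch of a bootstrap particle filter whose Gaussian potentials gt are set to zero outside a fixed compact interval Ct = [-C, C]. The scalar AR(1) state space model and the truncation radius are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def truncated_bootstrap_filter(y, n_part, rho, sig_x, sig_y, C, rng):
    """Bootstrap particle filter with potentials truncated to Ct = [-C, C]."""
    x = rng.normal(0.0, sig_x, n_part)                   # sample from the prior pi_0
    means = []
    for yt in y:
        x = rho * x + rng.normal(0.0, sig_x, n_part)     # propagate via the kernel
        logg = -0.5 * (yt - x) ** 2 / sig_y**2           # Gaussian potential g_t
        logg = np.where(np.abs(x) <= C, logg, -np.inf)   # truncated potential gbar_t
        w = np.exp(logg - logg.max())                    # assumes some particle stays in C
        w /= w.sum()
        means.append(np.sum(w * x))                      # filter mean estimate
        x = x[rng.choice(n_part, n_part, p=w)]           # multinomial resampling
    return np.array(means)

# Simulate a trajectory from the (untruncated) model, then run the truncated filter.
rng = np.random.default_rng(1)
rho, sig_x, sig_y, T = 0.9, 0.5, 0.5, 50
xs, ys, x = np.zeros(T), np.zeros(T), 0.0
for t in range(T):
    x = rho * x + rng.normal(0.0, sig_x)
    xs[t] = x
    ys[t] = x + rng.normal(0.0, sig_y)

m = truncated_bootstrap_filter(ys, 2000, rho, sig_x, sig_y, C=5.0, rng=rng)
```

Because the stationary state distribution concentrates well inside [-C, C] here, the truncated filter tracks the true state closely; the talk's question is when such closeness and stability can be certified uniformly in t.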

 
3.40pm to 4pm: Coffee break. Light refreshments will be served.

 
Talk 3:      4pm to 4.50pm

Speaker:   Dr Tiangang Cui, Monash University, Australia

Title:        Subspace Acceleration for Large-Scale Bayesian Inverse Problems

Algorithmic scalability to high-dimensional models is one of the central challenges in solving large-scale Bayesian inverse problems. By exploiting the interaction among various information sources and model structures, we will present a set of certified dimension reduction methods for identifying the intrinsic dimensionality of inverse problems. The resulting reduced dimensional subspaces offer new insights into the acceleration of classical Bayesian inference algorithms for solving inverse problems. We will discuss some old and new algorithms that can be significantly accelerated by the reduced dimensional subspaces.
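As a toy illustration of identifying the intrinsic dimensionality, the sketch below builds a linear-Gaussian inverse problem whose forward operator is low-rank and reads off the data-informed subspace from the eigenvalues of the data-misfit Hessian relative to the prior precision. With the isotropic prior assumed here, the generalized eigenproblem reduces to a standard one; all model choices are illustrative assumptions, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_obs, r_true = 100, 20, 5     # parameter dim, data dim, true informed dim

# Illustrative linear forward model y = G x + noise with a rank-5 operator G.
U = rng.normal(size=(n_obs, r_true))
V = rng.normal(size=(d, r_true))
G = (U @ V.T) / np.sqrt(d)

sig_obs, sig_pr = 0.1, 1.0
H = G.T @ G / sig_obs**2          # data-misfit Hessian (Gauss-Newton / Fisher)

# Informed directions solve H v = lam * Gamma_pr^{-1} v; for an isotropic
# prior this is just the spectrum of sig_pr^2 * H, sorted in descending order.
lam = np.linalg.eigvalsh(sig_pr**2 * H)[::-1]

# Keep directions where the data dominate the prior (eigenvalue above 1);
# r recovers the intrinsic dimension of the problem.
r = int(np.sum(lam > 1.0))
```

Inference can then be restricted to the r leading eigenvector directions, with the prior left untouched on the complement, which is the basic mechanism behind subspace-accelerated MCMC samplers for inverse problems.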