
Quasi-Experiments

Caroline Junkins and I are working to identify the impacts of an online summer bridge program (the Mathematics Fluency Initiative, MFI).

For some ideas and context, I’m reading

Suter, W. N.  Introduction to Educational Research: A Critical Thinking Approach.  Sage, 2012.

In Chapter 1, Educators as Critical Thinkers, Table 1.1 presents a number of characteristics of critical thinking (and contrasts it with noncritical thinking).  I am really enjoying this list.  Critical thinkers “consider alternate and multiple perspectives”.  They “consider counterexamples and counterevidence”.  They “use analytic judgement, recognizing components of complexity”.  They “use logic, drawing conclusions after weighing evidence”.  They “assess validity of claims”, “sort and recognize missing data”, “consider context and reach tentative, integrative, defensible conclusions”.  They “remain skeptical”, “self-correct”, and “make data-driven, reasoned decisions based on converging evidence”.

Notes on “Particle filters for high-dimensional geoscience applications: A review”, van Leeuwen et al. 2019

Notes on

Peter Jan van Leeuwen, Hans R. Künsch, Lars Nerger, Roland Potthast, Sebastian Reich.  Particle filters for high-dimensional geoscience applications: A review.  Q J R Meteorol Soc. 2019;1–31.

This paper focuses on the problem of “weight degeneracy” in the weighting of particles in a particle filter.

Introduction:

“The linear data assimilation problem” is hard in high dimensions.  Numerical weather prediction has on the order of 10^9 state variables and 10^7 observations every 6–12 hours.  Two current methods are 4DVar and the Ensemble Kalman Filter (EnKF).  Hybrids of these evidently do okay but need “ad hoc fixes like localization and inflation”.  These methods are also harder to use when there is underlying advective flow.  So even the linear problem is hard, but actual problems are nonlinear, and there these methods don’t work well.

“Variational methods can easily fail when the cost function is multimodal, and are hampered by the assumption that the prior probability density function (pdf) of the state is assumed to be Gaussian.”  EnKFs also assume a Gaussian prior.

Evidently particle filters don’t have assumptions on the prior or the likelihood.  (“Particle filters hold the promise of fully nonlinear data assimilation without any assumption on prior or likelihood, and recent textbooks like Reich and Cotter (2015), Nakamura and Potthast (2015), and van Leeuwen et al. (2015) provide useful introductions to data assimilation in general, and particle filters in particular.”)

There are also MCMC methods that are fully nonlinear.

Description of a particle filter (“standard or bootstrap”):

  • choose N model states (“particles”, x_{n-1}).  These are sampled from the prior pdf.
  • propagate the particles forward in time to the next observation time using the (nonlinear) model (x_n).  Include a random forcing with the propagation (there’s an assumption that physics is missing from the model and the random forcing compensates for that).
  • there’s an observation (y_n) with random measurement errors at this time (with known characteristics)
  • “assimilate” the observations.  Multiply the prior pdf by the likelihood, p(y_n | x_n): how likely the current observation is given a current model state.  This hinges on our measurement error (is this measurement of y_n plausible given that the actual state was x_n?).  The product is proportional to p(x_n | y_n), the posterior pdf (the probability of each x_n given the observation that we have).  This step is using Bayes’ theorem.
  • create a weight (related to the posterior probability) for each particle, proportional to p(y_n | x_{n,i})
  • resampling is common because weight might concentrate in just a few particles.  “This duplicates high weight particles and abandons low weight particles”.  “For particle filters to work, we need to ensure that their weights remain similar”.  (A toy sketch of these steps is below.)
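Here is a minimal sketch of those steps in Python, just to fix the ideas for myself.  The one-dimensional model, the Gaussian prior, and the noise levels are my own illustrative assumptions, not anything taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

N = 1000                # number of particles
model_noise = 0.1       # std of the random forcing (stands in for missing physics)
obs_noise = 0.5         # std of the known measurement error

def model_step(x):
    # hypothetical nonlinear model advancing each particle one step
    return x + 0.5 * np.sin(x)

# 1. sample N particles from the prior pdf (a standard normal here, by assumption)
particles = rng.normal(loc=0.0, scale=1.0, size=N)

# 2. propagate particles forward to the observation time, adding random forcing
particles = model_step(particles) + rng.normal(scale=model_noise, size=N)

# 3. an observation arrives, with known Gaussian measurement error
y = 0.7  # illustrative observed value

# 4. Bayes' theorem: weight each particle by the likelihood p(y | x_i),
#    so the weighted ensemble represents the posterior p(x | y)
log_lik = -0.5 * ((y - particles) / obs_noise) ** 2
weights = np.exp(log_lik - log_lik.max())   # subtract the max for numerical stability
weights /= weights.sum()

# 5. resample: duplicate high-weight particles, abandon low-weight ones
particles = particles[rng.choice(N, size=N, p=weights)]

print("posterior mean estimate:", particles.mean())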

The review deals with approaches to mitigating “weight degeneracy”:

  1. “proposal-density freedom” and “equal-weight particle filters”
  2. one-step transformations from particles in the prior to particles in the posterior
  3. use localization
  4. combine particle filters with EnKFs
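
A standard diagnostic for weight degeneracy (my own illustration, continuing the toy sketch above, not anything specific to the paper) is the effective sample size, ESS = 1 / Σ w_i² for normalized weights: it equals N when all weights are equal and collapses toward 1 when almost all the weight sits on a single particle.

import numpy as np

def effective_sample_size(weights):
    # ESS = 1 / sum(w_i^2) for normalized weights
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

# equal weights: ESS equals the ensemble size
print(effective_sample_size(np.full(1000, 1e-3)))   # -> 1000.0

# one dominant weight: ESS collapses toward 1 (degeneracy)
w = np.full(1000, 1e-9)
w[0] = 1.0
print(effective_sample_size(w))                     # -> close to 1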

 

Terms to learn about:

4DVar, ensemble Kalman filter, cost function, likelihood, localization