Peter’s research currently focuses on Safety and Luckiness. The basic idea is to make sure that inference from data is done in – indeed – a safer way. The replicability crisis in the applied sciences provides ample evidence that we often jump to conclusions that simply aren’t justified. The goal is to improve this situation! The resulting procedures point towards a unified view of statistical inference, in which specific Bayesian, frequentist and even ‘fiducial’ methods arise as special cases; “prior distributions” become just one aspect of luckiness. Subtopics include:
- Safe Testing: Hypothesis Testing and Model Choice under Optional Stopping and Optional Continuation – see G., De Heide & Koolen, Safe Testing, 2019; a minimal simulation of the idea follows this list.
- Safe Bayesian Inference: Repairing Bayesian inference under misspecification (when the model is wrong, but useful) – see G. & Van Ommen 2017; G. 2011; G. 2012. The generalized posterior behind this work is sketched after the list.
- Safe Probability: working with probability distributions that capture only part, not all, of your domain of interest – see G. 2017; Van Ommen, Koolen and G. 2016.
- Luckiness in Learning: quantifying how many data are needed to reach conclusions of a desired quality in machine learning and sequential prediction, with generalized Bayesian methods, PAC-Bayesian methods and MDL methods that automatically adapt to the inherent ‘easiness’ of the learning task – see De Rooij et al. 2014; Van Erven et al. 2015; G. and Mehta 2017b; Koolen, G. and Van Erven 2016; G. and Mehta 2017a.
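To make the Safe Testing entry concrete: the tests of G., De Heide & Koolen (2019) are built from e-values – nonnegative statistics whose expectation under the null is at most 1 – of which the running likelihood ratio is the simplest example. By Ville’s inequality, rejecting as soon as the running e-value exceeds 1/α keeps the type-I error below α however and whenever you decide to stop. Below is a minimal sketch of that guarantee, assuming a Bernoulli null with bias 0.5 and an illustrative point alternative with bias 0.75; the parameter choices are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05

def e_process(xs, theta1=0.75, theta0=0.5):
    """Running likelihood ratio p1(x^n)/p0(x^n) for Bernoulli data.

    Under H0 (bias theta0) this is a nonnegative martingale with
    expectation 1 at every n, i.e. an e-process, so Ville's inequality
    gives P_H0(sup_n E_n >= 1/alpha) <= alpha.
    """
    ratios = np.where(xs, theta1 / theta0, (1 - theta1) / (1 - theta0))
    return np.cumprod(ratios)

# Simulate under H0 and stop opportunistically: reject the first time
# the e-process crosses 1/alpha. A fixed-n p-value monitored this way
# would overshoot alpha; the e-process test does not.
n_reps = 10_000
rejections = 0
for _ in range(n_reps):
    xs = rng.random(200) < 0.5               # 200 fair-coin flips (H0 true)
    if e_process(xs).max() >= 1 / alpha:     # optional stopping
        rejections += 1
print(f"type-I error under optional stopping: {rejections / n_reps:.3f} (guarantee: <= {alpha})")
```

Because a product of independent e-values is again an e-value, evidence from successive studies can simply be multiplied, which is what makes optional continuation safe as well.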
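For the Safe Bayesian entry, the central object in G. & Van Ommen (2017) is the generalized posterior with learning rate η, which tempers the likelihood:

\[
\pi_\eta(\theta \mid x^n) \;\propto\; \pi(\theta)\,\prod_{i=1}^{n} p(x_i \mid \theta)^{\eta}.
\]

Setting η = 1 recovers standard Bayes; under misspecification, a smaller η slows learning down just enough to restore reliable behavior, and the SafeBayes algorithm chooses η from the data.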