Error statistics aims to provide a philosophical foundation for the application of frequentist statistics in science. As in any frequency-based approach, error statistics adheres to what I consider to be the fundamental tenet of frequentist learning: any particular data-based inference is deemed well-warranted only to the extent that it is the result of a procedure with good sampling characteristics.
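To make the tenet concrete, here is a small illustrative sketch (my own, not from any post in the agenda): the "good sampling characteristics" of a procedure can be checked directly by simulation. The example below estimates the coverage of the textbook 95% confidence interval for a normal mean with known standard deviation, a stand-in for the kind of procedure-level property frequentist warrants rest on.

```python
import random
import statistics

# Hypothetical illustration: estimate, by simulation, how often the
# standard 95% confidence interval for a normal mean (known sigma)
# actually covers the true mean. Good coverage is one example of a
# "good sampling characteristic" of an inference procedure.

random.seed(0)
true_mu, sigma, n, trials = 3.0, 1.0, 25, 10_000
z = 1.96  # two-sided 95% critical value for the normal

covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mu, sigma) for _ in range(n)]
    xbar = statistics.fmean(sample)
    half_width = z * sigma / n ** 0.5
    if xbar - half_width <= true_mu <= xbar + half_width:
        covered += 1

coverage = covered / trials
print(f"empirical coverage: {coverage:.3f}")  # should land near 0.95
```

The procedure, not any single interval, is what carries the 95% guarantee; any particular realized interval either covers the true mean or it does not.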
What kind of procedures are we talking about, and what characteristics of those procedures ought we to care about? The error statistical approach distinguishes itself from other frequentist frameworks (e.g., frequentist statistical decision theory) by the answer it gives to that question.
- Cox-Jaynes foundations. In which I establish my Bayesian bona fides.
- Mayo’s error statistics and the Severity Principle. In which I give my current understanding of error statistics. Also, first howler!
- Howler, howler, howler. In which I show how the severity concept defeats many “howlers”: common criticisms of frequentist approaches propagated in Bayesian articles and textbooks. Probably more than one post.
- Two severities. In which I discuss points of contact between the severity approach and the Bayesian approach, and specify a simple model in which the two approaches, operating on the exact same information, must disagree.
- Increasing the magnification. In which I analyze the model of the previous post, subjecting it to the most extreme conditions so as to magnify the differences between the severity approach and the Bayesian approach. As of this writing, I have not yet done the analysis, and I do not know which approach, if either, will fail under high magnification. This is to be a true test of my Bayesianism.
Not every post will contribute to the completion of the above agenda; I also plan to write posts sharing my thoughts on any interesting statistical topic I happen to run into. Posts appear below in reverse chronological order; scroll to the bottom for the first one.