Error statistics aims to provide a philosophical foundation for the application of frequentist statistics in science. As in any frequency-based approach, error statistics adheres to what I consider to be the fundamental tenet of frequentist learning: any particular data-based inference is deemed well-warranted only to the extent that it is the result of a procedure with good sampling characteristics.
What kind of procedures are we talking about, and what characteristics of those procedures ought we to care about? The error statistical approach distinguishes itself from other frequentist frameworks (e.g., frequentist statistical decision theory) by the answer it gives to that question.
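To make "good sampling characteristics" concrete, here is a minimal simulation (my own illustration, not drawn from the error statistics literature) of the canonical example: a 95% confidence-interval procedure, whose claim to warrant rests on the long-run frequency with which intervals generated by the procedure cover the true parameter.

```python
import random
import statistics

def ci_covers(mu=0.0, sigma=1.0, n=30):
    """Draw one sample and check whether the usual normal-approximation
    95% interval for the mean covers the true value of mu."""
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    return mean - 1.96 * se <= mu <= mean + 1.96 * se

random.seed(0)
trials = 10_000
coverage = sum(ci_covers() for _ in range(trials)) / trials
print(f"empirical coverage: {coverage:.3f}")  # close to the nominal 0.95
```

The "good sampling characteristic" here is a property of the *procedure*, not of any single interval it outputs: run it many times and roughly 95% of the intervals cover the truth.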
Why these postulates?
Recall Cox’s five postulates:
- Cox-plausibilities are real numbers.
- Consistency with Boolean algebra: if two claims are equal in Boolean algebra, then they have equal Cox-plausibility.
- There exists a conjunction function f such that for any two claims A, B, and any prior information X, the plausibility of the conjunction is (A ∧ B | X) = f((A | X), (B | A ∧ X)).
- There exists a negation relation (actually a function too). In slightly different notation than in the previous post, it is: (¬A | X) = g((A | X)).
- The negation relation and conjunction function (and their domains) satisfy technical regularity conditions.
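As a concrete sanity check (my own illustration, not part of Cox's argument), ordinary probability on a toy discrete space satisfies the conjunction and negation postulates with f(x, y) = x·y and g(x) = 1 − x:

```python
import itertools

# Toy "world": the eight equally plausible three-bit outcomes.
# Claims are predicates on outcomes; prior information conditions the space.
outcomes = list(itertools.product([0, 1], repeat=3))

def pl(claim, given=lambda w: True):
    """Plausibility of `claim` given `given`: a real number (postulate 1)."""
    world = [w for w in outcomes if given(w)]
    return sum(1 for w in world if claim(w)) / len(world)

A = lambda w: w[0] == 1
B = lambda w: w[1] == 1
A_and_B = lambda w: A(w) and B(w)

# Conjunction postulate with f(x, y) = x * y:
#   (A ∧ B | X) = f((A | X), (B | A ∧ X))
assert pl(A_and_B) == pl(A) * pl(B, given=A)

# Negation relation with g(x) = 1 - x:
assert pl(lambda w: not A(w)) == 1 - pl(A)
```

Cox's theorem runs in the other direction: any plausibility assignment satisfying the postulates can be rescaled into something that behaves like this, i.e., like probability.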
(I’m frustrated with the length of this post and how much time it’s taking me to finish it, so I’m splitting it into two parts.)
I subscribe to a school of thought some call “Jaynesian” after Edwin T. Jaynes. Its foundation is a theorem of Richard T. Cox, a physicist who studied electric eels, not to be confused with the eminent statistician Sir David R. Cox. Since my first project will be to engage with Professor Mayo’s diametrically opposed views on the proper way to use (and think about the use of) statistics in science, it seems worthwhile to describe the theorem and the reasons I take it to be foundational to statistics — of the Bayesian variety, at least.
- Cox-Jaynes foundations. In which I establish my Bayesian bona fides.
- Mayo’s error statistics and the Severity Principle. In which I give my current understanding of error statistics. Also, first howler!
- Howler, howler, howler. In which I show how the severity concept defeats many “howlers”: common criticisms of frequentist approaches propagated in Bayesian articles and textbooks. Probably more than one post.
- Two severities. In which I discuss points of contact between the severity approach and the Bayesian approach, and specify a simple model in which the two approaches, operating on the exact same information, must disagree.
- Increasing the magnification. In which I analyze the model of the previous post, subjecting it to the most extreme conditions so as to magnify the differences between the severity approach and the Bayesian approach. As of the time of the writing of this blogging agenda, I have not done so. I do not know which approach, if either, will fail under high magnification. This is to be a true test of my Bayesianism.
Not every post will contribute to the completion of the above agenda; I also plan to write posts sharing my thoughts on any interesting statistical topic I happen to run into.