# Blogging Agenda

- *Cox-Jaynes foundations.* In which I establish my Bayesian *bona fides*.
- *Mayo's error statistics and the Severity Principle.* In which I give my current understanding of error statistics. Also, first howler!
- *Howler, howler, howler.* In which I show how the severity concept defeats many "howlers": common criticisms of frequentist approaches propagated in Bayesian articles and textbooks. Probably more than one post.
- *Two severities.* In which I discuss points of contact between the severity approach and the Bayesian approach, and specify a simple model in which the two approaches, operating on the exact same information, must disagree.
- *Increasing the magnification.* In which I analyze the model of the previous post, subjecting it to the most extreme conditions so as to magnify the differences between the severity approach and the Bayesian approach. As of this writing, I have not yet done this analysis, and I do not know which approach, if either, will fail under high magnification. This is to be a true test of my Bayesianism.

Not every post will contribute to the completion of the above agenda; I also plan to write posts sharing my thoughts on any interesting statistical topic I happen to run into. Posts appear below in reverse chronological order; scroll to the bottom for the first one.

Whatever happened to the promised posts? I think you might start with optional stopping to set the stage for the contrast that seems to have been sidestepped in the recent exchange. Any case where outcomes other than the one observed are taken into account post-data is a case where sampling theory conflicts with Bayesianism, unless the latter is prepared to violate the strong likelihood principle. And of course SEV does not obey the probability calculus: SEV(H) and SEV(~H) can both be small. Strictly, we shouldn't even write H, ~H, as if just the stark statistical hypotheses need to be considered. When the SEV associated with a test comes up for scrutiny, it's the whole test, including the various assumptions linking statistical to scientific claims, issues interpreting the statistics, and the model assumptions.
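(The optional-stopping conflict mentioned above is easy to demonstrate by simulation. The sketch below, with parameters chosen purely for illustration, samples from a null model, runs a z-test after every observation, and stops at the first "significant" result; the stopping rule inflates the false-positive rate well above the nominal 5%, even though each look, taken alone, respects the likelihood principle.)

```python
import random
from math import sqrt
from statistics import NormalDist

def optional_stopping_trial(rng, max_n=50, alpha=0.05):
    """One trial under H0: data ~ N(0, 1), known sigma = 1.
    Test after each observation; stop as soon as the two-sided
    z-test rejects. Returns True on a (false) rejection."""
    phi = NormalDist().cdf
    total = 0.0
    for n in range(1, max_n + 1):
        total += rng.gauss(0.0, 1.0)
        z = abs(total / sqrt(n))        # z-statistic for the running mean
        p = 2 * (1 - phi(z))            # two-sided p-value
        if p < alpha:
            return True                 # "significant" despite H0 being true
    return False

rng = random.Random(1)
trials = 2000
hits = sum(optional_stopping_trial(rng) for _ in range(trials))
print(hits / trials)   # far above the nominal 0.05
```

With up to 50 looks at the data, the realized type I error rate lands in the 25-35% range rather than 5%; a Bayesian posterior computed from the same final sample is unchanged by the stopping rule, which is exactly the conflict at issue.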

The promised posts are still on the agenda. I’m writing up my first substantive post as I get free time and energy to work on it. You originally suggested one howler decried every six weeks or so; that pace is about what I’ll be managing…

Incidentally, I have some questions about the appropriate way to calculate SEV in certain situations; I’ll email you when I’m ready to get on with that calculation.
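(For concreteness: in the simplest textbook case, a one-sided Normal test with known sigma, the Mayo-Spanos severity assessment of the claim "mu > mu1" after a statistically significant result reduces to a one-line calculation. The numbers below are purely illustrative, not the case I have questions about.)

```python
from math import sqrt
from statistics import NormalDist

def sev_greater(xbar, mu1, sigma, n):
    """Severity of the claim 'mu > mu1' given observed sample mean xbar,
    for a one-sided Normal test with known sigma:
    SEV(mu > mu1) = P(Xbar <= xbar_obs; mu = mu1)."""
    return NormalDist().cdf((xbar - mu1) * sqrt(n) / sigma)

# Illustration: n = 100, sigma = 1, observed xbar = 0.2 (so z = 2.0 vs mu0 = 0)
for mu1 in (0.0, 0.1, 0.2, 0.3):
    print(mu1, round(sev_greater(0.2, mu1, 1.0, 100), 3))
# 0.0 0.977   <- 'mu > 0' passes with high severity
# 0.2 0.5     <- 'mu > 0.2' is not well probed by this result
# 0.3 0.159   <- and 'mu > 0.3' even less so
```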

Corey: I’m not bugging you at all, just jokily needling. You see, a couple of people informed me about exchanges on severity from another blog, and noted that the comments included you, so I figured I’d find them here, but didn’t, obviously. On computing SEV, do send your query, but you know that as with other proper uses of error statistical assessments, the precise number (from formal statistics) rarely matters; what matters is reaching a threshold, triangulating with other measurements and (ideally) theory, other error probes, etc. Equally important is being able to discern when conjectured solutions to problems have not been well probed (e.g., because of an inability to adequately distinguish sources of error in linking statistical to substantive claims, in interpreting the statistics, and in underlying model assumptions). We always want to know what has not yet passed with severity (in the overall inquiry), what the current studies are unable to discern, what precautions to take in future inquiry. Of course, to describe what has not yet been well probed is very different from trying to describe what has not yet been probabilified (whatever that means).

I’m sure I’m not alone in finding it easier to comment on someone else’s interesting thoughts than to generate interesting thoughts of my own ab initio. Speaking of which, I’ve replied to comments of yours on LessWrong.

I might blog or reblog something interesting on topics other than severity, but on the latter topic I want to follow the narrative of my agenda. (In the blue corner, our hero/villain Bayes! (Yay/boo!) In the red corner, the nasty/plucky challenger Severity! (Boo/yay!) Wince/cheer as Severity exposes the howlers! Then… the Final Showdown!)

Regarding my query, I’m “bracketing” (to appropriate a term) most of the substantive scientific issues and narrowing my focus to a statistical toy model just barely complicated enough to distinguish a formal severity analysis from a formal Bayesian analysis. I hope to avoid the “precise number” question by driving the parameters of the toy model to extremes, thereby magnifying the Bayes/severity distinction. I’ll send details by email.

Looking forward to it!