“in social science it’s notorious that every variable is correlated with every other variable, at least a little bit. I imagine that this makes Pearl-style causal inference a big pain — all of the causal graphs would end up totally connected, or close to”

I’m surprised you would make this point. One consequence of Pearl-ian concepts such as “back-door paths” is precisely that connectivity induces correlations: you only need a sparsely connected graph to obtain correlations everywhere. In other words, correlations being dense does not mean that causality is not sparse.
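To make that concrete, here is a small sketch of my own (not from the comment above): a linear-Gaussian chain with only three causal edges, in which even the two endpoint variables, which share no direct edge, come out clearly correlated.

```python
import random, math

random.seed(0)
n = 20000

# Sparse causal structure: a -> b -> c -> d (just three edges).
a = [random.gauss(0, 1) for _ in range(n)]
b = [0.8 * x + random.gauss(0, 1) for x in a]
c = [0.8 * x + random.gauss(0, 1) for x in b]
d = [0.8 * x + random.gauss(0, 1) for x in c]

def corr(u, v):
    """Pearson correlation of two equal-length samples."""
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((x - mu) * (y - mv) for x, y in zip(u, v)) / n
    su = math.sqrt(sum((x - mu) ** 2 for x in u) / n)
    sv = math.sqrt(sum((y - mv) ** 2 for y in v) / n)
    return cov / (su * sv)

# a and d are connected only through the chain, yet their
# correlation is far from zero — every pair ends up dependent.
print(round(corr(a, d), 2))
```

The population value here is 0.8³/√Var(d) ≈ 0.34, so the correlation between the chain’s endpoints is substantial even though the graph is as sparse as a DAG on four variables can be while remaining connected.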

I don’t know of any good ideas about how this limiting operation is supposed to happen in real problems, so I’m personally leery of the continuous case.

“The reconstructed function looked very noisy and we sent him the result. Rissanen could not let this stand…”

I always find it striking when a human brain’s evaluation of some algorithmic result is unfavorable. It makes me wonder what algorithm that brain is implementing…

In our discussion above I actually linked to a page about Solomonoff’s work on algorithmic complexity when I wrote that I would exclude non-computable probability measures as possible models. (The default style of links on this blog is rather subtle — just a thin underline in white — so maybe it escaped your notice.) I’m rather fond of that work, in good part because Solomonoff was seeking (and found!) the ultimate Bayesian prior… (modulo the universal Turing machine, as you note).

Richard Gill’s abstract of his paper on the Two Envelopes puzzle made me chuckle.
