An adequate analytical approximation to the standard normal CDF

It’s not great, but, as my mother-in-law sometimes says, it’ll do in the dark:

\[
\Phi(x) \approx \tilde{\Phi}(x) = \frac{\phi(0.87x - 0.89)}{2\,\phi(0.89)}, \qquad -3 \le x < 0,
\]

where \(\phi\) denotes the standard normal density and \(\Phi\) the standard normal CDF.

[Figure: error of the approximation to the standard normal CDF over \(-3 \le x < 0\)]
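If you want to reproduce the error curve, here is a minimal Python sketch (my own illustration, not part of the original post; the function name tilde_Phi and the use of scipy.stats.norm are just for convenience):

```python
import numpy as np
from scipy.stats import norm

def tilde_Phi(x):
    """Approximation to the standard normal CDF Phi(x), intended for -3 <= x < 0."""
    phi = norm.pdf  # standard normal density
    return phi(0.87 * x - 0.89) / (2 * phi(0.89))

x = np.linspace(-3, 0, 301)          # grid over the stated range
err = tilde_Phi(x) - norm.cdf(x)     # approximation minus exact CDF
print(f"max absolute error on [-3, 0]: {np.abs(err).max():.2e}")
```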
6 comments
  1. Entsophy said:

    I just invented the word “procrastapost”.

    Definition of procrastapost: an interesting-enough blog post which gets written as a way to avoid working on the hard blog posts. A combination of “procrastination” and “blog post”.

    • That’s almost what’s going on here, but not quite…

      • Entsophy said:

        Corey, I don’t know about anyone else, but I’d be interested in hearing more about MIRI and CFAR.

        I remember seeing some effusive quote from you about Eliezer Yudkowsky and being shocked by it. I’ve found your stuff infinitely better than Yudkowsky’s. Really there’s no comparison. I remember him from Hanson’s blog (Robin Hanson has to be the most overrated economist ever, which is surely one of the harder honors to earn).

        I have a deep deep aversion to any kind of Statistical Theorist who seeks to build a movement rather than simply letting their ideas stand on their own. This suspicion is only confirmed when those theorists ask for donations. How the hell someone could have deep statistical insights yet still need outside benefactors is beyond me.

        Having said that, I’d love to hear what you see in that crowd and would be willing to set aside the above-mentioned hesitations.

      • Both MIRI and CFAR have websites that can do a far better job than I can of explaining what they’re about. The short short version would be: CFAR exists to spread and operationalize academically mainstream notions of what constitutes rationality. MIRI exists as a think-tank for AI safety, with an emphasis on so-called “artificial general intelligence” (AGI) safety.

        The perspective promulgated by MIRI is by far the more questionable of the two. The idea is basically that it is inevitable that artificial intelligence with a capacity to constrain future states far exceeding that of humanity will be created; the path to such powerful AI is I. J. Good’s intelligence explosion. Granting that premise, it’s critical to understand how to stably specify the optimization criteria for such recursively self-improving AI. You can read MIRI’s published and web-released papers for more if that sort of thing interests you.

        As for Eliezer Yudkowsky, I should start by correcting the misapprehension that he’s a statistical theorist or any kind of statistician. To understand what he is, you have to know a little bit about his disposition and a little bit about his history. (What follows is my current understanding and is subject to correction.) He’s very competitive, has a high self-regard, and is to all appearances a bona fide genius (not that that’s a particularly high bar in a population of 7 billion or any guarantee of correctness). In his younger days he desired to create AGI; he immersed himself in subjects such as machine learning and cognitive neuroscience that he expected to be relevant to the task of building a “seed AI”. At some point, he encountered the notion that AGI would not necessarily be ethical by human lights merely on account of its intelligence. Initially he desired to reject the idea but found he could not refute it, and came to believe that AGI safety was the most important thing he could be working on. He therefore changed his focus, and ended up noticing that Löb’s theorem creates an apparently insuperable obstacle to any AI creating a successor that is both trustworthy and more mathematically powerful. So now he focuses on working out ways around that, and also on similarly abstruse difficulties relating to AGI safety.

        I regard EY as an expert on the subjects in which he has taken the time to educate himself. I don’t trust him to have the foundations of other subjects (e.g., economics, physics, statistics) correct, but I do trust him (i) to work out the implications of his premises mostly correctly in such subjects, and (ii) to update his beliefs when confronted with contrary evidence.

  2. Entsophy said:

    I got the “statistical theorist” impression from stuff I’d seen him write about Bayes/rationality and Jaynes. I naively assumed that’s what attracted you to him.

    Do you know these folks personally?

    P.S. we’re all geniuses! Fat lot of good it does us.

    • No, what attracted me to him is stuff like this, which explains the whole point of the effort he’s invested in LessWrong and CFAR. I’m a free rider, more-or-less, relative to what he hopes to achieve with LW and CFAR, so it behooves me to sing what honest praises of him I can (in suitable venues).

      I have met and talked to many of the people involved in MIRI and especially CFAR, but my relationship with any of them does not extend past acquaintanceship to friendship.

      I’m pretty smart, but I’m rarely the smartest person in the room. Even given that I deliberately select rooms with disproportionately smart people, I don’t think I’d call myself a genius.
