It’s well known that the ratio of two independent standard normal random variables has a Cauchy distribution. Let’s prove it! Suppose our normal random variables are *X* and *Y*. The kernel of the density is

$$
f_{X,Y}(x, y) \propto e^{-(x^2 + y^2)/2}.
$$

We do this change of variables:

$$
c = \frac{x}{y}, \qquad t = y^2; \qquad \text{equivalently} \quad x = c\sqrt{t}, \quad y = \sqrt{t}.
$$

By a happy chance the Jacobian of this transformation is constant and so, with a little harmless disregard for the region of integration, we substitute directly into the joint density function and get

$$
\begin{aligned}
e^{-(x^2 + y^2)/2} \left|\frac{\partial(x, y)}{\partial(c, t)}\right| &= \frac{1}{2}\, e^{-(1 + c^2)\,t/2} \\
&= \frac{1}{1 + c^2} \cdot \frac{1 + c^2}{2}\, e^{-(1 + c^2)\,t/2}.
\end{aligned}
$$

By inspection we can see that we’ve got the joint density of a Cauchy random variable *C* and, conditional on *C*, an exponential random variable *T* with rate (1 + *C*^{2})/2. (The kernel of the Cauchy density and the rate parameter of the exponential density are shown explicitly in the second line; in the first line they have effectively cancelled out.) Thus we have shown that the ratio of two independent standard normal random variables is indeed Cauchy.
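
As a quick Monte Carlo sanity check (my addition, not part of the original derivation), we can compare the empirical quartiles of *X*/*Y* against the standard Cauchy quantile function *Q*(*p*) = tan(π(*p* − ½)):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)
ratio = x / y

# Standard Cauchy quantile function: Q(p) = tan(pi * (p - 1/2)).
probs = np.array([0.25, 0.5, 0.75])
empirical = np.quantile(ratio, probs)
theoretical = np.tan(np.pi * (probs - 0.5))  # exactly -1, 0, 1

print(empirical)
print(theoretical)
```

With a million draws the empirical quartiles land within a couple of hundredths of the Cauchy values.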

We can do a little more with this result though. It shows how to generate two independent normal random variates from a Cauchy and an Exponential (and one more random bit) via the following recipe. Generate, independently:

- a Cauchy random variable *C*,
- a Rademacher random variable *S* (which takes the value +1 with 50% probability and the value −1 with 50% probability), and
- an Exponential random variable (with rate 1) *U*.

Set

$$
Y = S\,\sqrt{\frac{2U}{1 + C^2}}, \qquad X = C\,Y.
$$

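Here's a sketch of the recipe in numpy (variable names are mine, not the post's). It sets *Y* = *S*·√(2*U*/(1 + *C*²)) and *X* = *C*·*Y*, using the fact that, given *C*, *T* = 2*U*/(1 + *C*²) is Exponential with rate (1 + *C*²)/2:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

c = rng.standard_cauchy(n)            # Cauchy C
s = rng.choice([-1.0, 1.0], size=n)   # Rademacher S
u = rng.exponential(size=n)           # Exponential(1) U

# Given C, T = 2U/(1 + C^2) is Exponential with rate (1 + C^2)/2,
# so Y = S * sqrt(T) and X = C * Y recover the normal pair.
y = s * np.sqrt(2 * u / (1 + c**2))
x = c * y

print(x.mean(), x.std(), y.mean(), y.std())
print(np.corrcoef(x, y)[0, 1])  # near zero
```

The printed moments should be close to those of independent standard normals: means near 0, standard deviations near 1, and negligible correlation.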
Independent standard normal random variables aren’t too interesting; this recipe is worth considering because it gives us a way to generate something a lot more fun: three distinct (and very pretty) bivariate distributions, each non-normal but with normal marginal distributions. (Of course, there’s a boring brute force way to make such a distribution: pick any non-Gaussian bivariate copula and push the marginal uniform random variables through the normal quantile function. Yawn.) We’ll do this by making two independent copies of each of the three random variables in our recipe; that is, we’ll have *C*_{i}, *S*_{i}, and *U*_{i} for *i* ∈ {1,2}. Now we use the *i* = 1 variables and follow the recipe as above,

$$
Y = S_1\,\sqrt{\frac{2U_1}{1 + C_1^2}}, \qquad X = C_1\,Y.
$$

And then we mix the indices to create two more marginally normal variables that have a complicated dependence relation to the previous set:

$$
V = S_2\,\sqrt{\frac{2U_1}{1 + C_2^2}}, \qquad W = S_2\,\sqrt{\frac{2U_2}{1 + C_1^2}}.
$$

So what kinds of distributions did we make? Well, we’ve defined four distinct marginally normal random variables, which gives rise to six bivariate distributions. The three interesting distributions are those of (W, X), (W, Y), and (V, Y). The other three pairs fail to spark joy: X and Y are independent standard normal by definition; V and W would be too, except I didn’t bother to give them different signs, so their joint distribution is supported only on the first and third quadrants; and it turns out that the pair (V, X) just has the same bivariate distribution as (V, Y).
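
These claims can be spot-checked numerically. In the numpy sketch below (the exact index mixing — *V* reusing *U*_{1} and *W* reusing *C*_{1} — is my reading of the construction, not code from the post), the pair (V, W) stays in quadrants I and III, and the cross-moment E[V²(X² − Y²)], which must vanish when (V, X) and (V, Y) share a distribution, comes out near zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
c1, c2 = rng.standard_cauchy((2, n))
s1, s2 = rng.choice([-1.0, 1.0], size=(2, n))
u1, u2 = rng.exponential(size=(2, n))

y = s1 * np.sqrt(2 * u1 / (1 + c1**2))
x = c1 * y
v = s2 * np.sqrt(2 * u1 / (1 + c2**2))  # assumed mixing: V shares U_1 with (X, Y)
w = s2 * np.sqrt(2 * u2 / (1 + c1**2))  # assumed mixing: W shares C_1 with (X, Y)

# V and W share the sign S_2, so (V, W) lives in quadrants I and III.
print(np.all(v * w > 0))
# Marginals should still be standard normal.
print(v.std(), w.std())
# E[V^2 (X^2 - Y^2)] = 0 is a necessary condition for (V, X) ~ (V, Y).
print(np.mean(v**2 * (x**2 - y**2)))
```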

Let’s sample some, uh, samples and see what we get.

And here’s the code, ready to go.
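
The original code didn’t survive in this copy, so here is a minimal numpy reconstruction of a ready-to-go sampler (the function name, and the exact crossing of indices, are my own choices based on the construction described above):

```python
import numpy as np

def sample_pairs(n, seed=0):
    """Draw n samples of the four marginally normal variables X, Y, V, W.

    For i = 1, 2, draw independent Cauchy C_i, Rademacher S_i, and
    Exponential(1) U_i, follow the recipe for (X, Y), then cross the
    indices to build V and W.
    """
    rng = np.random.default_rng(seed)
    c1, c2 = rng.standard_cauchy((2, n))
    s1, s2 = rng.choice([-1.0, 1.0], size=(2, n))
    u1, u2 = rng.exponential(size=(2, n))

    y = s1 * np.sqrt(2 * u1 / (1 + c1**2))
    x = c1 * y
    v = s2 * np.sqrt(2 * u1 / (1 + c2**2))  # shares U_1 with (X, Y)
    w = s2 * np.sqrt(2 * u2 / (1 + c1**2))  # shares C_1 with (X, Y)
    return x, y, v, w

x, y, v, w = sample_pairs(100_000)
```

Scatter-plot the pairs (w, x), (w, y), and (v, y) — for instance with `plt.scatter(w, x, s=1, alpha=0.1)` in matplotlib — to see the three non-normal bivariate distributions with normal marginals.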