Two (or more) random variables can be defined over the same
sample space. Just as with jointly defined events, the
joint distribution function is easily defined.
$$P_{X,Y}(x, y)\equiv \Pr\left(\{X\le x\}\cap \{Y\le y\}\right)$$
The
joint probability density function
$p_{X,Y}(x, y)$ is related to the distribution function via double
integration.
$$P_{X,Y}(x, y)=\int_{-\infty}^{x}\int_{-\infty}^{y} p_{X,Y}(\alpha, \beta)\,d\beta\,d\alpha$$
or
$$p_{X,Y}(x, y)=\frac{\partial^{2}P_{X,Y}(x, y)}{\partial x\partial y}$$ Since
$\lim_{y\to \infty} P_{X,Y}(x, y)=P_{X}(x)$, the so-called
marginal density functions can be related to the joint density function.
$$p_{X}(x)=\int_{-\infty}^{\infty} p_{X,Y}(x, y)\,dy$$
and
$$p_{Y}(y)=\int_{-\infty}^{\infty} p_{X,Y}(x, y)\,dx$$
Extending the ideas of conditional probabilities, the
conditional probability density function
$p_{X|Y}(x|Y=y)$ is defined (when
$p_{Y}(y)\neq 0$) as
$$p_{X|Y}(x|Y=y)=\frac{p_{X,Y}(x, y)}{p_{Y}(y)}$$
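To make the definition concrete, here is a sketch using an assumed joint density $p_{X,Y}(x, y)=x+y$ on the unit square (an illustration, not from the text). Its marginal is $p_{Y}(y)=y+\tfrac{1}{2}$, so the conditional density is $(x+y)/(y+\tfrac{1}{2})$, and, like any density, it must integrate to one:

```python
# Sketch: a conditional density built from an assumed joint density
# p_{X,Y}(x, y) = x + y on the unit square (illustrative, not from the text).
# Marginal: p_Y(y) = y + 1/2, so p_{X|Y}(x|Y=y) = (x + y) / (y + 1/2).

def conditional_pdf(x, y):
    return (x + y) / (y + 0.5)

# For any fixed y, the conditional density integrates to 1 over x.
n = 10_000
dx = 1.0 / n
total = sum(conditional_pdf((k + 0.5) * dx, 0.7) for k in range(n)) * dx
print(total)  # ≈ 1.0
```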
Two random variables are
statistically independent when
$p_{X|Y}(x|Y=y)=p_{X}(x)$, which is equivalent to the condition that the joint
density function is separable:
$p_{X,Y}(x, y)=p_{X}(x)p_{Y}(y)$.
For jointly defined random variables, expected values are
defined just as with single random variables. Probably the most important joint moment is the
covariance:
$$\mathrm{cov}(X, Y)\equiv E(XY)-E(X)E(Y)$$
where
$$E(XY)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, p_{X,Y}(x, y)\,dx\,dy$$ Related to the covariance is the (confusingly named)
correlation coefficient: the covariance normalized
by the standard deviations of the component random variables.
$$\rho_{X,Y}=\frac{\mathrm{cov}(X, Y)}{\sigma_{X}\sigma_{Y}}$$ When two random variables are
uncorrelated , their
covariance and correlation coefficient equal zero, so that
$E(XY)=E(X)E(Y)$. Statistically independent random variables are
always uncorrelated, but uncorrelated random variables can be dependent.
Let
$X$ be uniformly distributed over
$[-1, 1]$ and let
$Y=X^{2}$ . The two random variables are uncorrelated, but are
clearly not independent.
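A Monte Carlo sketch of this example makes the distinction tangible: the sample covariance of $X$ and $Y=X^{2}$ hovers near zero even though $Y$ is completely determined by $X$.

```python
# Monte Carlo sketch of the text's example: X uniform on [-1, 1], Y = X^2.
import random

random.seed(1)
n = 200_000
xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
ys = [x * x for x in xs]

mean_x = sum(xs) / n          # ≈ 0
mean_y = sum(ys) / n          # ≈ 1/3
cov = sum(x * y for x, y in zip(xs, ys)) / n - mean_x * mean_y
print(cov)  # near zero: X and Y are uncorrelated
# Yet Y is a deterministic function of X, so the two are clearly dependent.
```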
A
conditional expected value is the mean of the
conditional density.
$$E(X|Y)=\int_{-\infty}^{\infty} x\, p_{X|Y}(x|Y=y)\,dx$$ Note that the conditional expected value is now a function of
$Y$ and is therefore a random
variable. Consequently, it too has an expected value, which is
easily evaluated to be the expected value of
$X$ .
$$E(E(X|Y))=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\, p_{X|Y}(x|Y=y)\,p_{Y}(y)\,dx\,dy=E(X)$$ More generally, the expected value of a function of two random
variables can be shown to be the expected value of a conditionalexpected value:
$E(f(X, Y))=E(E(f(X, Y)|Y))$. This kind of calculation is frequently simpler to
evaluate than trying to find the expected value of
$f(X, Y)$ "all at once." A particularly interesting example of
this simplicity is the
random sum of random
variables. Let
$L$ be an integer-valued random variable and
$\{{X}_{l}\}$ a sequence of random variables. We will find occasion
to consider the quantity
${S}_{L}=\sum_{l=1}^{L} {X}_{l}$. Assuming that each component of the sequence has the
same expected value
$E(X)$, the expected value of the sum is found to be
$$E({S}_{L})=E\left(E\left(\sum_{l=1}^{L} {X}_{l}\,\Big|\, L\right)\right)=E(L\, E(X))=E(L)E(X)$$
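This identity can be checked by simulation. In the sketch below the distributions are chosen purely for illustration (they are not from the text): $L$ is uniform on $\{1,\dots,10\}$, so $E(L)=5.5$, and each ${X}_{l}$ is uniform on $[0,2]$, so $E(X)=1$, giving $E({S}_{L})=5.5$.

```python
# Monte Carlo sketch of E(S_L) = E(L) E(X) for a random sum of random
# variables. Illustrative choices (assumptions, not from the text):
# L uniform on {1, ..., 10}  =>  E(L) = 5.5
# X_l uniform on [0, 2]      =>  E(X) = 1
import random

random.seed(2)
n = 100_000
total = 0.0
for _ in range(n):
    L = random.randint(1, 10)
    total += sum(random.uniform(0.0, 2.0) for _ in range(L))

print(total / n)  # should be close to E(L) * E(X) = 5.5
```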