
Linear transformation of a random variable

A linear transformation of a random variable X has the following form

Y = a X + b

where a and b are real numbers, and a ≠ 0. A very important property of linear transformations is that they are distribution-preserving, meaning that Y will be a random variable with a distribution of the same form as X. For example, in [link], if X is Gaussian then Y will also be Gaussian, but not necessarily with the same mean and variance.

Using the linearity property of expectation, find the mean μ_Y and variance σ_Y² of Y in terms of a, b, μ_X, and σ_X². Show your derivation in detail.

First find the mean, then substitute the result when finding the variance.
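
As a sketch of that first step (the variance part is left for your derivation), linearity of expectation gives

\mu_Y = E[Y] = E[aX + b] = a\,E[X] + b = a\,\mu_X + b .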

Consider a linear transformation of a Gaussian random variable X with mean 0 and variance 1. Calculate the constants a and b which make the mean and variance of Y equal to 3 and 9, respectively. Using [link], find the probability density function for Y.

Generate 1000 samples of X, and then calculate 1000 samples of Y by applying the linear transformation in [link], using the a and b that you just determined. Plot the resulting samples of Y, and use your functions to calculate the sample mean and sample variance of the samples of Y.
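
A minimal Matlab sketch of this step is shown below; it is an illustration rather than the required lab solution. The values of a and b are arbitrary placeholders (substitute the constants you determined), and the built-in mean and var are used here in place of the sample mean and sample variance functions written earlier in the lab.

N = 1000;
X = randn(1, N);          % 1000 samples of X ~ N(0,1)
a = 2;                    % placeholder value -- replace with the a you determined
b = 1;                    % placeholder value -- replace with the b you determined
Y = a*X + b;              % apply the linear transformation Y = aX + b
figure;
plot(Y, '.');             % plot the samples of Y
xlabel('Sample index');
ylabel('Y');
sample_mean = mean(Y)     % sample mean (replace with your own function)
sample_var  = var(Y)      % sample variance (replace with your own function)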

Inlab report

  1. Submit your derivation of the mean and variance of Y .
  2. Submit the transformation you used, and the probability density function for Y .
  3. Submit the plot of samples of Y and the Matlab code used to generate Y . Include the calculated sample mean and sample variance for Y .

Estimating the cumulative distribution function

Suppose we want to model some phenomenon as a random variable X with distribution F_X(x). How can we assess whether or not this is an accurate model? One method would be to make many observations and estimate the distribution function based on the observed values. If the distribution estimate is “close” to our proposed model F_X(x), we have evidence that our model is a good characterization of the phenomenon. This section will introduce a common estimate of the cumulative distribution function.

Given a set of i.i.d. random variables {X_1, X_2, ..., X_N} with CDF F_X(x), the empirical cumulative distribution function F̂_X(x) is defined as follows.

\hat{F}_X(x) = \frac{1}{N} \sum_{i=1}^{N} I_{\{X_i \le x\}},
\qquad
I_{\{X_i \le x\}} =
\begin{cases}
1, & \text{if } X_i \le x \\
0, & \text{otherwise}
\end{cases}

In words, F̂_X(x) is the fraction of the X_i's which are less than or equal to x.

To get insight into the estimate F̂_X(x), let's compute its mean and variance. To do so, it is easiest to first define N_x as the number of X_i's which are less than or equal to x.

N_x = \sum_{i=1}^{N} I_{\{X_i \le x\}} = N \, \hat{F}_X(x)

Notice that P(X_i ≤ x) = F_X(x), so

P\left(I_{\{X_i \le x\}} = 1\right) = F_X(x),
\qquad
P\left(I_{\{X_i \le x\}} = 0\right) = 1 - F_X(x)

Now we can compute the mean of F̂_X(x) as follows:

\begin{aligned}
E\left[\hat{F}_X(x)\right]
&= \frac{1}{N}\,E[N_x]
 = \frac{1}{N}\sum_{i=1}^{N} E\left[I_{\{X_i \le x\}}\right]
 = \frac{1}{N}\,N\,E\left[I_{\{X_i \le x\}}\right] \\
&= 0 \cdot P\left(I_{\{X_i \le x\}} = 0\right)
 + 1 \cdot P\left(I_{\{X_i \le x\}} = 1\right)
 = F_X(x) \, .
\end{aligned}

This shows that F̂_X(x) is an unbiased estimate of F_X(x). By a similar approach, we can show that

\mathrm{Var}\left[\hat{F}_X(x)\right] = \frac{1}{N}\,F_X(x)\left(1 - F_X(x)\right).
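
One such route (sketched here): the X_i are independent, so N_x is a sum of N independent indicator variables, each equal to 1 with probability F_X(x), hence

\mathrm{Var}[N_x] = N\,F_X(x)\left(1 - F_X(x)\right)
\qquad\Longrightarrow\qquad
\mathrm{Var}\left[\hat{F}_X(x)\right]
= \mathrm{Var}\!\left[\frac{N_x}{N}\right]
= \frac{1}{N^2}\,\mathrm{Var}[N_x]
= \frac{1}{N}\,F_X(x)\left(1 - F_X(x)\right).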

Therefore the empirical CDF F̂_X(x) is both an unbiased and consistent estimate of the true CDF: its variance goes to zero as N grows, so the estimate converges to F_X(x) as more samples are used.

Exercise

Write a function F=empcdf(X,t) to compute the empirical CDF F̂_X(t) from the sample vector X at the points specified in the vector t.

The expression sum(X<=s) will return the number of elements in the vector X which are less than or equal to s.
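
One possible implementation is sketched below (an illustrative sketch rather than the official solution), using only the sum(X<=s) idea from the hint.

function F = empcdf(X, t)
% EMPCDF  Empirical CDF of the sample vector X evaluated at the points in t.
%         F(k) is the fraction of elements of X that are <= t(k).
N = length(X);
F = zeros(size(t));
for k = 1:length(t)
    F(k) = sum(X <= t(k)) / N;   % fraction of samples at or below t(k)
end

For example, empcdf(randn(1,1000), -3:0.1:3) should return values close to the standard normal CDF evaluated at the points -3:0.1:3.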

Source:  OpenStax, Purdue digital signal processing labs (ece 438). OpenStax CNX. Sep 14, 2009 Download for free at http://cnx.org/content/col10593/1.4