Armed with the tools of matrix derivatives, let us now proceed to find in closed form the value of $\theta $ that minimizes $J\left(\theta \right)$ . We begin by rewriting $J$ in matrix-vectorial notation.
Given a training set, define the design matrix $X$ to be the $m$ -by- $n$ matrix (actually $m$ -by- $n+1$ , if we include the intercept term) that contains the training examples' input values in its rows:
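That is:

```latex
X = \begin{bmatrix}
  (x^{(1)})^T \\
  (x^{(2)})^T \\
  \vdots \\
  (x^{(m)})^T
\end{bmatrix}
```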
Also, let $\overrightarrow{y}$ be the $m$ -dimensional vector containing all the target values from the training set:
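Namely:

```latex
\vec{y} = \begin{bmatrix}
  y^{(1)} \\
  y^{(2)} \\
  \vdots \\
  y^{(m)}
\end{bmatrix}
```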
Now, since ${h}_{\theta}\left({x}^{\left(i\right)}\right)={\left({x}^{\left(i\right)}\right)}^{T}\theta $ , we can easily verify that
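Since the $i$-th row of $X\theta$ is $(x^{(i)})^T\theta$:

```latex
X\theta - \vec{y}
= \begin{bmatrix} (x^{(1)})^T\theta \\ \vdots \\ (x^{(m)})^T\theta \end{bmatrix}
- \begin{bmatrix} y^{(1)} \\ \vdots \\ y^{(m)} \end{bmatrix}
= \begin{bmatrix} h_\theta(x^{(1)}) - y^{(1)} \\ \vdots \\ h_\theta(x^{(m)}) - y^{(m)} \end{bmatrix}
```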
Thus, using the fact that for a vector $z$ , we have that ${z}^{T}z={\sum}_{i}{z}_{i}^{2}$ :
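Taking $z = X\theta - \vec{y}$:

```latex
\frac{1}{2}\,(X\theta - \vec{y})^T (X\theta - \vec{y})
= \frac{1}{2}\sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2
= J(\theta)
```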
Finally, to minimize $J$ , let's find its derivatives with respect to $\theta $ . Combining the second and third equation in [link] , we find that
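The two facts referred to (their [link] targets lie outside this excerpt; the forms below are the standard ones from the matrix-derivatives section) are $\nabla_{A^T} f(A) = \left(\nabla_A f(A)\right)^T$ and $\nabla_A \operatorname{tr}\, ABA^TC = CAB + C^TAB^T$, which together give:

```latex
\nabla_{A^T} \operatorname{tr}\, ABA^TC = B^T A^T C^T + B A^T C
```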
Hence,
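writing out the differentiation step by step:

```latex
\begin{aligned}
\nabla_\theta J(\theta)
  &= \nabla_\theta \, \frac{1}{2}\,(X\theta - \vec{y})^T (X\theta - \vec{y}) \\
  &= \frac{1}{2}\,\nabla_\theta \left( \theta^T X^T X \theta - \theta^T X^T \vec{y} - \vec{y}^T X \theta + \vec{y}^T \vec{y} \right) \\
  &= \frac{1}{2}\,\nabla_\theta \operatorname{tr} \left( \theta^T X^T X \theta - \theta^T X^T \vec{y} - \vec{y}^T X \theta + \vec{y}^T \vec{y} \right) \\
  &= \frac{1}{2}\,\nabla_\theta \left( \operatorname{tr}\, \theta^T X^T X \theta - 2 \operatorname{tr}\, \vec{y}^T X \theta \right) \\
  &= \frac{1}{2}\left( X^T X \theta + X^T X \theta - 2 X^T \vec{y} \right) \\
  &= X^T X \theta - X^T \vec{y}
\end{aligned}
```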
In the third step, we used the fact that the trace of a real number is just the real number; the fourth step used the fact that $\mathrm{tr}A=\mathrm{tr}{A}^{T}$ , and the fifth step used Equation [link] with ${A}^{T}=\theta $ , $B={B}^{T}={X}^{T}X$ , and $C=I$ , and Equation [link] . To minimize $J$ , we set its derivatives to zero, and obtain the normal equations :
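Namely:

```latex
X^T X \theta = X^T \vec{y}
```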
Thus, the value of $\theta $ that minimizes $J\left(\theta \right)$ is given in closed form by the equation $\theta = (X^T X)^{-1} X^T \vec{y}$ .
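As a quick numerical sanity check, here is a sketch in NumPy on made-up toy data (the data values are illustrative, not from the text): solving the normal equations should agree with NumPy's own least-squares solver.

```python
import numpy as np

# Toy design matrix: first column of ones is the intercept term,
# second column is a single input feature (m = 5, n = 1).
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0],
              [1.0, 5.0]])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

# Normal equations: X^T X theta = X^T y.
# Solving the linear system is preferred over forming (X^T X)^{-1} explicitly.
theta = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's built-in least-squares routine.
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(theta, theta_lstsq)
print(theta)
```

At the minimizer, the gradient $X^T X\theta - X^T \vec{y}$ is zero, which is exactly what the solve enforces.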
When faced with a regression problem, why might linear regression, and specifically why might the least-squares cost function $J$ , be a reasonable choice? In this section, we will give a set of probabilistic assumptions under which least-squares regression is derived as a very natural algorithm.
Let us assume that the target variables and the inputs are related via the equation
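Namely:

```latex
y^{(i)} = \theta^T x^{(i)} + \epsilon^{(i)}
```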
where $\epsilon^{(i)}$ is an error term that captures either unmodeled effects (such as if there are some features very pertinent to predicting housing price, but that we'd left out of the regression), or random noise. Let us further assume that the $\epsilon^{(i)}$ are distributed IID (independently and identically distributed) according to a Gaussian distribution (also called a Normal distribution) with mean zero and some variance ${\sigma}^{2}$ . We can write this assumption as “ $\epsilon^{(i)} \sim \mathcal{N}(0,{\sigma}^{2})$ .” I.e., the density of $\epsilon^{(i)}$ is given by
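the Gaussian density:

```latex
p(\epsilon^{(i)}) = \frac{1}{\sqrt{2\pi}\,\sigma}
  \exp\!\left( -\frac{(\epsilon^{(i)})^2}{2\sigma^2} \right)
```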
This implies that
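Since $\epsilon^{(i)} = y^{(i)} - \theta^T x^{(i)}$:

```latex
p(y^{(i)} \mid x^{(i)}; \theta) = \frac{1}{\sqrt{2\pi}\,\sigma}
  \exp\!\left( -\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2} \right)
```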
The notation “ $p(y^{(i)} \mid x^{(i)}; \theta)$ ” indicates that this is the distribution of ${y}^{\left(i\right)}$ given ${x}^{\left(i\right)}$ and parameterized by $\theta $ . Note that we should not condition on $\theta $ (“ $p(y^{(i)} \mid x^{(i)}, \theta)$ ”), since $\theta $ is not a random variable. We can also write the distribution of ${y}^{\left(i\right)}$ as ${y}^{\left(i\right)}\mid {x}^{\left(i\right)};\theta \sim \mathcal{N}({\theta}^{T}{x}^{\left(i\right)},{\sigma}^{2})$ .
Given $X$ (the design matrix, which contains all the ${x}^{\left(i\right)}$ 's) and $\theta $ , what is the distribution of the ${y}^{\left(i\right)}$ 's? The probability of the data is given by $p(\vec{y} \mid X; \theta)$ . This quantity is typically viewed as a function of $\vec{y}$ (and perhaps $X$ ), for a fixed value of $\theta $ . When we wish to explicitly view this as a function of $\theta $ , we will instead call it the likelihood function:
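By the independence assumption on the $\epsilon^{(i)}$'s, this likelihood factors over the training examples:

```latex
L(\theta) = L(\theta; X, \vec{y}) = p(\vec{y} \mid X; \theta)
= \prod_{i=1}^{m} p(y^{(i)} \mid x^{(i)}; \theta)
= \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi}\,\sigma}
  \exp\!\left( -\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2} \right)
```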