This page explains how to set up our face recognition system for detection. It centers on the creation of the "eigenface" basis for "face space," and discusses simplifying that basis to a level that is both manageable and accurate.

Introduction to eigenface system

The eigenface face recognition system can be divided into two main segments: creation of the eigenface basis, and recognition, or detection, of a new face. The system proceeds in the following general flow:

Summary of overall face recognition process

A robust detection system should yield correct matches even when a subject's expression varies, for example between happy and sad.

Deriving the eigenface basis

The eigenface technique is a powerful yet simple solution to the face recognition dilemma. In fact, it is arguably the most intuitive way to classify a face. As we have shown, earlier techniques focused on particular features of the face. The eigenface technique uses much more information, classifying faces based on general facial patterns. These patterns include, but are not limited to, the specific features of the face. By using more information, eigenface analysis is naturally more effective than feature-based face recognition.

Eigenfaces are fundamentally nothing more than basis vectors for real faces. This can be related directly to one of the most fundamental concepts in electrical engineering: Fourier analysis. Fourier analysis reveals that a sum of weighted sinusoids at differing frequencies can recompose a signal perfectly! In the same way, a sum of weighted eigenfaces can seamlessly reconstruct a specific person’s face.

Determining what these eigenfaces are is the crux of this technique.

Before finding the eigenfaces, we first need to collect a set of face images. These face images become our database of known faces. We will later determine whether or not an unknown face matches any of these known faces. All face images must be the same size (in pixels), and for our purposes they must be grayscale, with values ranging from 0 to 255. Each face image is converted into a vector $\Gamma_n$ of length N (N = image width × image height). The most useful face sets have multiple images per person; this sharply increases accuracy, due to the increased information available on each known individual. We will call our collection of faces "face space." This space is of dimension N.

Example images from the rice database
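As a concrete sketch of this first step, the snippet below stacks M same-size grayscale images into a data matrix whose rows are the vectors $\Gamma_n$. Random arrays stand in for real face images, and the tiny 8×8 image size is a hypothetical choice for illustration.

```python
import numpy as np

# Hypothetical stand-in for a real face database: M grayscale images,
# all the same size, with pixel values in [0, 255].
image_height, image_width = 8, 8            # tiny size, for illustration only
N = image_height * image_width              # length of each face vector
M = 4                                       # number of faces in the set

rng = np.random.default_rng(0)
faces = [rng.integers(0, 256, size=(image_height, image_width)).astype(float)
         for _ in range(M)]

# Gamma[n] is the vector form of face n: the image flattened row by row.
Gamma = np.stack([face.ravel() for face in faces])
print(Gamma.shape)    # (M, N)
```

Each row of `Gamma` is one point in the N-dimensional face space described above.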

Next we need to calculate the average face in face space. Here M is the number of faces in our set:

$$\Psi = \frac{1}{M} \sum_{n=1}^{M} \Gamma_n$$

Average face from rice database
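Computing the average face is a single mean over the M face vectors. The sketch below assumes the data matrix `Gamma` from the previous step, with random values standing in for real faces.

```python
import numpy as np

# Stand-in data matrix: M face vectors of length N (random, for illustration).
M, N = 4, 64
rng = np.random.default_rng(0)
Gamma = rng.integers(0, 256, size=(M, N)).astype(float)

# Psi = (1/M) * sum of Gamma_n: the average face, one value per pixel.
Psi = Gamma.mean(axis=0)
print(Psi.shape)    # (N,)
```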

We then compute each face’s difference from the average:

$$\Phi_i = \Gamma_i - \Psi$$

We use these differences to compute a covariance matrix (C) for our dataset. The covariance between two sets of data reveals how much the sets correlate.

$$C = \frac{1}{M} \sum_{n=1}^{M} \Phi_n \Phi_n^T = \frac{1}{M} \sum_{n=1}^{M} \begin{pmatrix} \operatorname{var}(p_1) & \cdots & \operatorname{cov}(p_1, p_N) \\ \vdots & \ddots & \vdots \\ \operatorname{cov}(p_N, p_1) & \cdots & \operatorname{var}(p_N) \end{pmatrix}_{n} = A A^T$$

where $A = [\Phi_1 \; \Phi_2 \; \ldots \; \Phi_M]$ and $p_i$ is pixel i of face n.
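The construction of A and C can be sketched as follows, with random data standing in for real faces. The 1/M scale factor is dropped here, as in the matrix form C = A Aᵀ above; scaling does not change the eigenvectors.

```python
import numpy as np

# Stand-in face vectors (random, for illustration).
M, N = 4, 64
rng = np.random.default_rng(0)
Gamma = rng.integers(0, 256, size=(M, N)).astype(float)
Psi = Gamma.mean(axis=0)

Phi = Gamma - Psi       # each row is one difference vector Phi_i
A = Phi.T               # A = [Phi_1 Phi_2 ... Phi_M], shape (N, M)
C = A @ A.T             # covariance matrix (up to scale), shape (N, N)
print(C.shape)          # (N, N)
```

For real images N is the pixel count (often tens of thousands), so the N × N matrix C is exactly the object the next section shows how to avoid forming.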

The eigenfaces that we are looking for are simply the eigenvectors of C. However, since C is of dimension N (the number of pixels in our images), solving for the eigenfaces gets ugly very quickly. Eigenface face recognition would not be possible if we had to do this. This is where the magic behind the eigenface system happens.

Simplifying the initial eigenface basis

Based on a statistical technique known as Principal Component Analysis (PCA), we can reduce the number of eigenvectors for our covariance matrix from N (the number of pixels in our image) to M (the number of images in our dataset). This is huge! In general, PCA is used to describe a large dimensional space with a relatively small set of vectors. It is a popular technique for finding patterns in data of high dimension, and is used commonly in both face recognition and image compression.* PCA is applicable to face recognition because face images usually are very similar to each other (relative to images of non-faces) and clearly share the same general pattern and structure.

PCA tells us that since we have only M images, we have only M non-trivial eigenvectors. We can solve for these eigenvectors by taking the eigenvectors of a new M x M matrix:

$$L = A^T A$$

Because of the following math trick:

$$A^T A v_i = \mu_i v_i \;\Longrightarrow\; A A^T (A v_i) = \mu_i (A v_i)$$

where $v_i$ is an eigenvector of L. Multiplying both sides of the eigenvalue equation by A on the left shows that $A v_i$ is an eigenvector of $C = A A^T$, with the same eigenvalue $\mu_i$.
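This trick can be verified numerically. The sketch below, using a random matrix in place of a real difference matrix A, computes an eigenvector of the small M × M matrix L and checks that mapping it through A yields an eigenvector of the large N × N matrix C.

```python
import numpy as np

# Random stand-in for A = [Phi_1 ... Phi_M], shape (N, M).
M, N = 4, 64
rng = np.random.default_rng(0)
A = rng.standard_normal((N, M))

L = A.T @ A                    # small M x M matrix
mu, v = np.linalg.eigh(L)      # eigenvalues (ascending) and eigenvectors of L

# Map the top eigenvector of L into face space: u = A v_i.
u = A @ v[:, -1]

# Check the identity A A^T (A v_i) = mu_i (A v_i) without ever
# eigendecomposing the large N x N matrix C = A A^T.
print(np.allclose((A @ A.T) @ u, mu[-1] * u))    # True
```

Only an M × M eigenproblem is solved, which is what makes the method tractable for real image sizes.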

The M eigenvectors of L are finally used to form the M eigenvectors $u_l$ of C that make up our eigenface basis:

$$u_l = \sum_{k=1}^{M} v_{lk} \Phi_k$$

It turns out that only M-k eigenfaces are actually needed to produce a complete basis for the face space, where k is the number of unique individuals in the set of known faces.

In the end, one can get a decent reconstruction of the image using only a few eigenfaces (M′), where M′ usually ranges anywhere from 0.1M to 0.2M. These correspond to the eigenvectors with the highest eigenvalues and represent the most variance within face space.
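A reconstruction from a truncated basis can be sketched as below: keep only the M′ eigenfaces with the largest eigenvalues, project a face's difference vector onto them, and add the average face back. Random data stands in for real faces, and the choice M′ = 3 is purely illustrative.

```python
import numpy as np

# Random stand-in for a set of M face vectors of length N.
M, N = 10, 64
rng = np.random.default_rng(1)
Gamma = rng.standard_normal((M, N))
Psi = Gamma.mean(axis=0)
A = (Gamma - Psi).T                   # difference matrix, shape (N, M)

mu, v = np.linalg.eigh(A.T @ A)       # eigenpairs of L = A^T A (ascending)

M_prime = 3                           # keep only the top M' eigenfaces
idx = np.argsort(mu)[-M_prime:]       # indices of the largest eigenvalues
U = A @ v[:, idx]                     # eigenfaces u_l = A v_l, shape (N, M')
U /= np.linalg.norm(U, axis=0)        # normalize each eigenface

# Reconstruct one face from its weights in the reduced basis.
face = Gamma[0]
weights = U.T @ (face - Psi)          # projection onto the eigenfaces
reconstruction = Psi + U @ weights
print(np.linalg.norm(face - reconstruction))   # reconstruction error
```

Increasing M′ toward M drives the reconstruction error toward zero, since the full set of eigenfaces spans the differences in the face set.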

Top ten eigenfaces from rice database

These eigenfaces provide a small yet powerful basis for face space. Using only a weighted sum of these eigenfaces, it is possible to reconstruct each face in the dataset. Yet the main application of eigenfaces, face recognition, takes this one step further.

*For more information on Principal Component Analysis, check out this easy-to-follow tutorial.

Source:  OpenStax, Face recognition using eigenfaces. OpenStax CNX. Dec 21, 2004 Download for free at http://cnx.org/content/col10254/1.2
