Once the eigenfaces have been computed, these vectors can be used to recognize new face images presented to the system.

Overview

Now that one has a collection of eigenface vectors, a question that may arise is, what next? Well, a sighted person can fairly easily recognize a face based on a rough reconstruction of an image using only a limited number of eigenfaces. However, reconstruction of non-face images is not so successful.

Figure: Poor non-face reconstruction ("I smell a rat, but certainly not when I reconstruct it with eigenfaces").

Given that the initial objective is a face recognition system, eigenfaces are a fairly easy, computationally economical, and successful method for determining whether a given face belongs to a known person, is a new face, or is not a face at all. A set of eigenface vectors can be thought of as a linearly independent basis set for the face space. Each vector defines its own dimension, so a set of M eigenfaces yields an M-dimensional space.

It should also be noted that the eigenfaces represent the principal components of the face set. These principal components are very useful for simplifying the recognition process. As a simpler example, suppose we had a pair of vectors that represented a person's weight and height. Projecting a given person onto these vectors would yield that person's corresponding weight and height components. Given a database of weight and height components, it would then be quite easy to find the closest matches between the tested person and the people in the database.

$$ w_p = \mathrm{Dot}(\mathrm{Person}, \overline{\mathrm{weight}}), \qquad h_p = \mathrm{Dot}(\mathrm{Person}, \overline{\mathrm{height}}) $$
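As a rough illustration of this projection idea, the following minimal Python/NumPy sketch computes the two components as dot products with the basis vectors; the axes and numbers are hypothetical and chosen only for illustration.

import numpy as np

# Hypothetical basis vectors for the two traits.
weight_axis = np.array([1.0, 0.0])
height_axis = np.array([0.0, 1.0])

# A "person" represented as a point in this two-dimensional trait space.
person = np.array([82.0, 180.0])

# Projection is simply a dot product with each basis vector.
w_p = np.dot(person, weight_axis)   # 82.0
h_p = np.dot(person, height_axis)   # 180.0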

A similar process is used for face recognition with eigenfaces. First, take all the mean-subtracted images in the database and project them onto the face space; this is essentially the dot product of each face image with each of the eigenfaces. Combining the resulting vectors into a matrix yields the weight matrix (M' × N, where N is the total number of images in the database).

$$ \omega_k = \mu_k^T \left( \Gamma_{\mathrm{new}} - \Psi \right) $$

$$ \Omega^T = \left[ \omega_1 \; \omega_2 \; \ldots \; \omega_{M'} \right] $$

$$ \mathrm{WeightMatrix} = \begin{pmatrix} \omega_{11} & \cdots & \omega_{1n} \\ \vdots & \ddots & \vdots \\ \omega_{m'1} & \cdots & \omega_{m'n} \end{pmatrix} $$
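A minimal NumPy sketch of this projection step is given below. The array names and shapes are assumptions made for illustration (images stored as columns of pixel vectors), not part of the original module.

import numpy as np

# Assumed layout:
#   images     : (P, N)  array, each column one face flattened to P pixels
#   eigenfaces : (P, M') array, each column one eigenface
#   mean_face  : (P,)    array, the average face Psi
def build_weight_matrix(images, eigenfaces, mean_face):
    # Mean-subtract every image, then dot each one with every eigenface.
    # Column n of the result is the weight vector Omega for image n.
    A = images - mean_face[:, np.newaxis]   # (P, N) mean-subtracted faces
    return eigenfaces.T @ A                 # (M', N) weight matrix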

An incoming image can similarly be projected onto the face space. This yields a vector in an M'-dimensional space, where M' is again the number of eigenfaces used. Logically, faces of the same person will map fairly close to one another in this face space. Recognition then reduces to finding the closest database image, or mathematically, finding the minimum Euclidean distance between the test point and each database point.

$$ \epsilon_k = \left\| \Omega_{\mathrm{new}} - \Omega_k \right\|^2 $$
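One possible implementation of this nearest-neighbor step, under the same assumed array layout as the sketch above, is:

import numpy as np

def recognize(new_image, eigenfaces, mean_face, weight_matrix):
    # Project the incoming image onto the face space.
    omega_new = eigenfaces.T @ (new_image - mean_face)    # (M',)

    # Squared Euclidean distance to every stored weight vector.
    diffs = weight_matrix - omega_new[:, np.newaxis]      # (M', N)
    eps = np.sum(diffs ** 2, axis=0)                      # (N,)

    best = int(np.argmin(eps))    # index of the closest database image
    return best, eps[best]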

Due to overall similarities in face structure, face pixels follow an overall "face" distribution. Combining this distribution with principal component analysis allows for a dimensionality reduction, in which only the first several eigenfaces represent the majority of the information in the system. The computational complexity is thereby greatly reduced, making most computer programs happy. In our system, two techniques were used for image recognition.

Averaging technique

Within a given database, all weight vectors of a given person are averaged together. This creates a "face class", so that an even smaller weight matrix represents the general faces of the entire system. When a new image comes in, its weight vector is created by projecting it onto the face space. The face is then matched to the face class that minimizes the Euclidean distance. A 'hit' is counted if the image correctly matches its own face class. A 'miss' occurs if the minimum distance matches the face class of another person. For example, the AT&T database has four hundred images in total, composed of forty people with ten images each. The averaging technique thus yields a weight matrix with forty vectors (forty distinct face classes).
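A rough sketch of the averaging technique is shown below; it assumes a labels array giving the person id for each of the N database images, which is an illustrative assumption rather than part of the module.

import numpy as np

def build_face_classes(weight_matrix, labels):
    # Average the weight vectors of all images of the same person,
    # giving one face-class vector per person (forty for AT&T).
    people = np.unique(labels)
    classes = np.column_stack(
        [weight_matrix[:, labels == p].mean(axis=1) for p in people])
    return classes, people                  # (M', num_people), (num_people,)

def classify_by_class(omega_new, classes, people):
    # Match the new weight vector to the face class at minimum distance.
    eps = np.sum((classes - omega_new[:, np.newaxis]) ** 2, axis=0)
    return people[int(np.argmin(eps))]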

Removal technique

This procedure differs from the averaging technique in one key way. The weight matrix holds the projection vectors for every image in the entire database. For empirical results, an image is removed from the system and then projected onto the face space. The resulting weight vector is compared to the weight vectors of all images, and the image is matched to the face image that minimizes the Euclidean distance. A 'hit' is counted if the tested image matches closest to another image of the same person; a 'miss' occurs when it matches an image of a different person. The main difference from the averaging technique is the number of database images that the test face can match while still counting as a hit. For the AT&T database, a weight matrix with four hundred vectors is used, but a new image could potentially 'hit' any of ten distinct images of the same person.
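A possible sketch of this evaluation loop follows, again assuming a labels array of person ids; excluding the removed image from its own comparison is an assumption made here for illustration.

import numpy as np

def removal_hit_rate(weight_matrix, labels):
    # Take each image out in turn, match its weight vector against all
    # remaining images, and count a hit when the nearest image belongs
    # to the same person.
    N = weight_matrix.shape[1]
    hits = 0
    for i in range(N):
        omega = weight_matrix[:, i]
        eps = np.sum((weight_matrix - omega[:, np.newaxis]) ** 2, axis=0)
        eps[i] = np.inf                     # skip the removed image itself
        nearest = int(np.argmin(eps))
        hits += int(labels[nearest] == labels[i])
    return hits / N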





Source:  OpenStax, Face recognition using eigenfaces. OpenStax CNX. Dec 21, 2004 Download for free at http://cnx.org/content/col10254/1.2
