
Illumination Cones for Recognition Under Variable Lighting: Faces

1 Introduction
2 The Illumination Cone
3 Constructing the Illumination Cone
3.1 Estimating B*
3.2 Enforcing Integrability
3.3 Generating a GBR Surface
4 Recognition
5 Experimental Results
6 Discussion
7 References

An object's appearance depends in large part on the way in which it is viewed. Often slight changes in pose and illumination produce large changes in an object's appearance. While there has been a great deal of literature in computer vision detailing methods for handling image variation produced by changes in pose, few efforts have been devoted to image variation produced by changes in illumination. For the most part, object recognition algorithms have either ignored illumination variation or dealt with it by measuring some property or feature of the image - e.g., edges or corners - which is, if not invariant, at least insensitive to this variability. Yet edges and corners do not contain all of the information useful for recognition. Furthermore, objects which are not simple polyhedra or are not composed of piecewise constant albedo patterns often produce inconsistent edge and corner maps. Methods have recently been introduced which use low-dimensional representations of images of objects to perform recognition; see for example [5, 11, 16]. These methods, often termed appearance-based methods, differ from the feature-based methods mentioned above in that their low-dimensional representation is, in a least-squares sense, faithful to the original image. Systems such as SLAM [11] and Eigenfaces [16] have demonstrated the power of appearance-based methods both in ease of implementation and in accuracy. Yet these methods suffer from an important drawback: recognition of an object (or face) under a particular pose and lighting can be performed reliably provided the object has been previously seen under similar circumstances. In other words, these methods in their original form have no way of extrapolating to novel viewing conditions.
The "illumination cone" method of [3] is, in spirit, an appearance-based method for recognizing objects under extreme variability in illumination. However, the method differs substantially from previous methods in that a small number of images of each object under small changes in lighting is used to generate a representation, the illumination cone, of all images of the object (in fixed pose) under all variation in illumination. This paper focuses on issues in building the illumination cone representation from training images and using it for recognition. While the structure of the set of images under variable illumination was characterized in [3], and the relevant results are summarized in Sec. 2, no methods for performing recognition were presented there. In this paper, such recognition algorithms are introduced. Furthermore, the cone representation is extended to explicitly model cast shadows produced by objects which have non-convex shapes. This extension is non-trivial, requiring that the surface normals of the objects be recovered up to a shadow-preserving generalized bas-relief (GBR) transformation. The effectiveness of these algorithms and the cone representation is validated within the context of face recognition; it has been observed by Moses, Adini and Ullman that the variability in an image due to illumination is often greater than that due to a change in the person's identity [10]. Figure 1 shows this variability for a single individual. It has been observed that methods for face recognition based on finding local image features and using their geometric relation are generally ineffective [4]. Hence, faces provide an interesting and useful class of objects for testing the power of the illumination cone representation.
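The structural result from [3] summarized in Sec. 2 is that, for a convex Lambertian surface, the image under a distant point source s is x = max(Bs, 0), where the rows of B are the surface normals scaled by albedo; the illumination cone is the set of all such images and their non-negative combinations. A minimal sketch of this image formation model (the matrix B and source s below are hypothetical, and cast shadows are deliberately omitted):

```python
import numpy as np

def render(B, s):
    """Image of a Lambertian surface under a distant light source s.
    Each row of B is a surface normal scaled by albedo; max(., 0)
    zeroes pixels whose normals face away from the source (attached
    shadows). Cast shadows require the ray-tracing step of Sec. 3.3."""
    return np.maximum(B @ s, 0.0)

# Two-pixel toy surface: one normal pointing up, one pointing sideways.
B = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
frontal = render(B, np.array([0.0, 0.0, 1.0]))  # only pixel 0 is lit
```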

3.3 Generating a GBR Surface
The preceding sections give a method for estimating the matrix B* and then enforcing integrability; we now reconstruct the corresponding surface ẑ(x, y). Note that ẑ(x, y) is not a Euclidean reconstruction of the face, but a representative element of the orbit under a GBR transformation. Recall that both shading and shadowing will be correct for images synthesized from a transformed surface. To find ẑ(x, y), we use the variational approach presented in [7]. Then, it is a simple matter to construct an illumination cone representation that incorporates cast shadows. Using ray-tracing techniques for a given light source direction, we can determine the cast shadow regions and correct the extreme rays of C*. Figure 2 demonstrates the process of constructing the cone C*. Figure 2.a shows the training images for one individual in the database. Figure 2.b shows the columns of the matrix B*. Figure 2.c shows the reconstruction of the surface up to a GBR transformation. The left column of Fig. 2.d shows sample images in the database; the middle column shows the closest image in the illumination cone without cast shadows; and the right column shows the closest image in the illumination cone with cast shadows.
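The surface-reconstruction step amounts to integrating the gradient field implied by B* into a height map. The sketch below uses a Fourier-domain least-squares integrator (in the style of Frankot-Chellappa) as a stand-in for the specific variational method of [7]; it is an assumption-laden illustration, not the paper's implementation. Any GBR transform of the gradients yields another valid member of the orbit.

```python
import numpy as np

def surface_from_gradients(p, q):
    """Least-squares integration of a gradient field (p = dz/dx,
    q = dz/dy, in per-sample units) into a height map z(x, y),
    solved in the Fourier domain. The constant of integration is
    unconstrained, so z is recovered only up to its mean (and, in
    the paper's setting, up to a GBR transformation)."""
    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0          # avoid division by zero at DC
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0              # mean height is unconstrained
    return np.real(np.fft.ifft2(Z))
```

Because the solver works frequency by frequency, non-integrable components of the estimated gradient field are projected away automatically, which is one reason least-squares integrators of this kind are a common companion to integrability enforcement.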

The cone C* can be used in a natural way for face recognition, and in the experiments described below, we compare three recognition algorithms to the proposed method. From a set of face images labeled with the person's identity (the learning set) and an unlabeled set of face images from the same group of people (the test set), each algorithm is used to identify the person in the test images. For more details of the comparison algorithms, see [2]. We assume that the face has been located and aligned within the image. The simplest recognition scheme is a nearest neighbor classifier in the image space [4]. An image in the test set is recognized (classified) by assigning to it the label of the closest point in the learning set, where distances are measured in the image space. If all of the images are normalized to have zero mean and unit variance, this procedure is equivalent to choosing the image in the learning set that best correlates with the test image. Because of the normalization process, the result is independent of light source intensity. As correlation methods are computationally expensive and require great amounts of storage, it is natural to pursue dimensionality reduction schemes. A technique now commonly used in computer vision - particularly in face recognition - is principal components analysis (PCA), which is popularly known as Eigenfaces [5, 11, 9, 16]. Given a collection of training images x_i ∈ IR^n, a linear projection y_i = W x_i of each image to an f-dimensional feature space is performed.
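The normalization, PCA projection, and nearest-neighbor steps above can be sketched as follows; this is a minimal illustration with hypothetical function names, not the evaluation code used in the paper's experiments.

```python
import numpy as np

def normalize(images):
    """Give each image (one per row) zero mean and unit variance, so
    nearest-neighbor matching reduces to correlation and is independent
    of light source intensity."""
    x = images - images.mean(axis=1, keepdims=True)
    return x / x.std(axis=1, keepdims=True)

def pca_projection(train, f):
    """Projection matrix W (f x n) onto the top-f principal components
    of the training images, i.e. the directions of maximum scatter."""
    centered = train - train.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:f]

def nearest_neighbor(train_feats, labels, test_feat):
    """Assign the label of the closest training sample in IR^f."""
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    return labels[int(np.argmin(d))]
```

Discarding the leading rows of W (the most significant components) before projecting is the illumination-robust PCA variant mentioned below; it is a one-line change (`vt[3:3+f]`).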

Figure 2: The process of constructing the cone C*. a) the training images; b) the matrix B*; c) reconstruction up to a GBR transformation; d) sample images from the database (left column); the closest image in the illumination cone without cast shadows (middle column); and the closest image in the illumination cone with cast shadows (right column).

A face in a test image x is recognized by projecting x into the feature space, and nearest neighbor classification is performed in IR^f. The projection matrix W is chosen to maximize the scatter of all projected samples. It has been shown that when f equals the number of training images, the Eigenface and Correlation methods are equivalent (see [2, 11]). One proposed method for handling illumination variation in PCA is to discard from W the three most significant principal components; in practice, this yields better recognition performance [2]. A third approach is to model the illumination variation of each face as a three-dimensional linear subspace as described in Section 2. To perform recognition, we simply compute the distance of the test image to each linear subspace and choose the face corresponding to the shortest distance. We call this recognition scheme the Linear Subspace method [1]; it is a variant of the photometric alignment method proposed in [13] and is related to [6, 12]. While this models the variation in intensity when the surface is completely illuminated, it does not model shadowing. Finally, given a test image x, recognition using illumination cones is performed by first computing the distance of the test image to each cone, and then choosing the face that corresponds to the shortest distance. Since each cone is convex, the distance can be found by solving a convex optimization problem. In particular, the non-negative linear least-squares technique contained in Matlab was used in our implementation; this algorithm has computational complexity O(n e^2), where n is the number of pixels and e is the number of extreme rays.
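The cone-distance computation can be sketched with SciPy's non-negative least-squares routine standing in for the Matlab NNLS routine the authors used; the extreme rays and test images below are hypothetical toy data, not face images.

```python
import numpy as np
from scipy.optimize import nnls

def distance_to_cone(extreme_rays, x):
    """Euclidean distance from test image x to the convex cone spanned
    by the columns of extreme_rays (n pixels x e rays): solve
    min ||E a - x|| subject to a >= 0, a non-negative least-squares
    problem. nnls returns the coefficients and the residual norm."""
    _, residual = nnls(extreme_rays, x)
    return residual

def recognize(cones, x):
    """Return the index of the cone (person) closest to test image x."""
    return int(np.argmin([distance_to_cone(E, x) for E in cones]))
```

Each distance evaluation is an independent convex problem, so recognition over a gallery of people parallelizes trivially across cones.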
