
Transcript of face detection

http://www.mathworks.com/matlabcentral/fileexchange/44866-face-detection-system-for-matlab-2013a

http://www.mathworks.com/help/vision/examples/face-detection-and-tracking-using-the-klt-algorithm.html

http://www.mathworks.com/discovery/face-recognition.html

Face detection is the first and foremost step in any automated face recognition system. Its reliability greatly affects the performance and usability of the whole system.

Given a single image or a video frame, an ideal face detector should be able to locate all faces present in that image, regardless of their position, facial expression, scale, and orientation. Furthermore, it should be robust to variations in illumination, skin color, and background.

Several clues may facilitate the detection process. Skin color (for detecting faces in color images and videos) is one that can often be used. Motion (for detecting faces in video) is another well-known clue that can be estimated by analyzing several video frames in a row. But the hardest kind of detection is detecting faces in gray-level still images, where no cue of any type, such as color or motion, is available.

The processing is usually done as follows:

Step One: Feature Extraction Function

You need a function that can transform a small patch of an image into a vector. Even if you only reshape the 2D patch into a 1D vector, it is still a valid function. In practice, however, it consists of several stages. This is the feature extraction function, and it should extract features in a sensible manner, followed by a normalization step. In Face Detection System for MATLAB, Gabor features are extracted from the patch, and you will learn all about it in the guide that you are about to download.
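A minimal version of such a feature extraction function can be sketched as follows (this is the simple reshape-and-normalize variant, not the Gabor-based extraction used in the actual system; the function name is illustrative):

```matlab
% Minimal feature extraction: reshape a 2D patch into a column vector
% and normalize it. A simplified stand-in for Gabor-based extraction.
function v = extractPatchFeatures(patch)
    v = double(patch(:));          % flatten the 2D patch to a 1D vector
    v = v - mean(v);               % remove the mean (lighting offset)
    n = norm(v);
    if n > 0
        v = v / n;                 % scale to unit length
    end
end
```

Normalization matters here because raw pixel intensities vary strongly with lighting; removing the mean and scaling to unit length makes patches of the same face under different illumination look similar to the classifier.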

Step Two: Generating Data for Training

The next thing you have to do is crop some images with the same height and width, matching the size of the patch we talked about in the previous step. Crop them using any editing software you have: Paint, GIMP, or Photoshop. Then use the feature extraction function you already have to generate a vector for each image. Store each vector in a matrix so that it can be used later to train the classifier. You need data not only for faces but also for patches that do not contain any face. Gathering this non-face data is sometimes a challenge, because you don't really know what is "not a face." This problem is discussed in detail in the guide, along with how to read all the files and store them in the matrix.
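As a sketch of this step (the folder names and file format are illustrative, not from the guide), the training matrix can be assembled like this:

```matlab
% Build a training matrix with one feature vector per column.
% 'faces' is a hypothetical folder of equally sized face crops;
% repeat the same loop over a 'nonfaces' folder to collect negatives.
faceFiles = dir(fullfile('faces', '*.png'));
X = [];
for k = 1:numel(faceFiles)
    patch = imread(fullfile('faces', faceFiles(k).name));
    if size(patch, 3) == 3
        patch = rgb2gray(patch);          % work on gray-level patches
    end
    X(:, end+1) = double(patch(:)) / 255; %#ok<AGROW> simple flatten-and-scale
end
```

Storing one example per column keeps the data in the shape that MATLAB's classifier training functions expect.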

Step Three: Train the Classifier

Every detection system needs a classifier that looks at your vector and decides whether it is the target or not. In the case of face detection, the classifier looks for faces. The main issues are choosing your classifier and setting its parameters so that you get reasonable results. Face Detection System for MATLAB uses a neural network as its classifier. Everything regarding how to generate the network and train it is discussed in detail inside the guide.
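A hedged sketch of this step, assuming X holds one feature vector per column and y holds labels (1 for face, 0 for non-face); the guide's actual network architecture and training settings may differ:

```matlab
% Train a small feed-forward network as the face / non-face classifier.
% X: feature matrix (one column per patch), y: row vector of 0/1 labels.
net = patternnet(10);          % one hidden layer with 10 neurons
net = train(net, X, y);        % backpropagation training
scores = net(X);               % network outputs in [0, 1]
predictions = scores > 0.5;    % threshold to get face / non-face decisions
```

The threshold of 0.5 is a starting point; raising it reduces false detections at the cost of missing some faces.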

Step Four: Scan a Picture

Then an image is scanned at all possible locations (and scales) by a sub-window (patch). Each patch is fed to the feature extraction function, and the output vector goes to the classifier. There are also ways to pre-select candidate locations and to pinpoint the exact location of the faces.
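The scanning step can be sketched as a sliding window over the image (single scale shown; extractPatchFeatures and net are the hypothetical function and classifier from the earlier sketches, and the window and step sizes are illustrative):

```matlab
% Slide a fixed-size window over a gray-level image I and classify
% every patch; stepSize trades scanning speed against localization.
[h, w] = size(I);
winSize = 19; stepSize = 4;
detections = [];
for r = 1:stepSize:(h - winSize + 1)
    for c = 1:stepSize:(w - winSize + 1)
        patch = I(r:r+winSize-1, c:c+winSize-1);
        if net(extractPatchFeatures(patch)) > 0.5
            detections(end+1, :) = [c, r, winSize, winSize]; %#ok<AGROW>
        end
    end
end
% To handle faces of different sizes, repeat on downscaled copies of I.
```

Rescanning a pyramid of downscaled images is the standard way a fixed-size window detects faces at multiple scales.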

Example: Introduction

Object detection and tracking are important in many computer vision applications, including activity recognition, automotive safety, and surveillance. In this example, you will develop a simple face tracking system by dividing the tracking problem into three parts:

1. Detect a face
2. Identify facial features to track
3. Track the face

Detect a Face

First, you must detect the face. Use the vision.CascadeObjectDetector System object™ to detect the location of a face in a video frame. The cascade object detector uses the Viola-Jones detection algorithm and a trained classification model for detection. By default, the detector is configured to detect faces, but it can be used to detect other types of objects.

% Create a cascade detector object.
faceDetector = vision.CascadeObjectDetector();

% Read a video frame and run the face detector.

videoFileReader = vision.VideoFileReader('tilted_face.avi');
videoFrame = step(videoFileReader);
bbox = step(faceDetector, videoFrame);

% Draw the returned bounding box around the detected face.
videoFrame = insertShape(videoFrame, 'Rectangle', bbox);
figure; imshow(videoFrame); title('Detected face');

% Convert the first box into a list of 4 points.
% This is needed to be able to visualize the rotation of the object.
bboxPoints = bbox2points(bbox(1, :));

To track the face over time, this example uses the Kanade-Lucas-Tomasi (KLT) algorithm. While it is possible to use the cascade object detector on every frame, doing so is computationally expensive. It may also fail to detect the face when the subject turns or tilts their head. This limitation comes from the type of trained classification model used for detection. The example detects the face only once, and then the KLT algorithm tracks the face across the video frames.

Identify Facial Features To Track

The KLT algorithm tracks a set of feature points across the video frames. Once the detection locates the face, the next step in the example identifies feature points that can be reliably tracked. This example uses the standard, "good features to track" proposed by Shi and Tomasi.

% Detect feature points in the face region.
points = detectMinEigenFeatures(rgb2gray(videoFrame), 'ROI', bbox);

% Display the detected points.
figure, imshow(videoFrame), hold on, title('Detected features');
plot(points);

Initialize a Tracker to Track the Points

With the feature points identified, you can now use the vision.PointTracker System object to track them. For each point in the previous frame, the point tracker attempts to find the corresponding point in the current frame. Then the estimateGeometricTransform function is used to estimate the translation, rotation, and scale between the old points and the new points. This transformation is applied to the bounding box around the face.

% Create a point tracker and enable the bidirectional error constraint to
% make it more robust in the presence of noise and clutter.
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);

% Initialize the tracker with the initial point locations and the initial
% video frame.
points = points.Location;
initialize(pointTracker, points, videoFrame);

Initialize a Video Player to Display the Results

Create a video player object for displaying video frames.

videoPlayer = vision.VideoPlayer('Position', ...
    [100 100 [size(videoFrame, 2), size(videoFrame, 1)]+30]);

Track the Face

Track the points from frame to frame, and use the estimateGeometricTransform function to estimate the motion of the face.

% Make a copy of the points to be used for computing the geometric
% transformation between the points in the previous and the current frames.
oldPoints = points;

while ~isDone(videoFileReader)
    % Get the next frame.
    videoFrame = step(videoFileReader);

    % Track the points. Note that some points may be lost.
    [points, isFound] = step(pointTracker, videoFrame);
    visiblePoints = points(isFound, :);
    oldInliers = oldPoints(isFound, :);

    if size(visiblePoints, 1) >= 2 % need at least 2 points

        % Estimate the geometric transformation between the old points
        % and the new points and eliminate outliers.
        [xform, oldInliers, visiblePoints] = estimateGeometricTransform(...
            oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);

        % Apply the transformation to the bounding box points.
        bboxPoints = transformPointsForward(xform, bboxPoints);

        % Insert a bounding box around the object being tracked.
        bboxPolygon = reshape(bboxPoints', 1, []);
        videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon, ...
            'LineWidth', 2);

        % Display tracked points.
        videoFrame = insertMarker(videoFrame, visiblePoints, '+', ...
            'Color', 'white');

        % Reset the points.
        oldPoints = visiblePoints;
        setPoints(pointTracker, oldPoints);
    end

    % Display the annotated video frame using the video player object.
    step(videoPlayer, videoFrame);
end

% Clean up.
release(videoFileReader);
release(videoPlayer);
release(pointTracker);

Summary

In this example, you created a simple face tracking system that automatically detects and tracks a single face. Try changing the input video, and see if you are still able to detect and track a face. Make sure the person is facing the camera in the initial frame for the detection step.

FACE DETECTION - MATLAB CODE: Let's see how to detect the face, nose, mouth, and eyes using the MATLAB built-in class and function. The Computer Vision System Toolbox contains the vision.CascadeObjectDetector System object, which detects objects using the Viola-Jones face detection algorithm.

Prerequisite: Computer Vision System Toolbox

FACE DETECTION:

clear all

clc

%Detect objects using Viola-Jones Algorithm

%To detect Face

FDetect = vision.CascadeObjectDetector;

%Read the input image

I = imread('HarryPotter.jpg');

%Returns Bounding Box values based on number of objects

BB = step(FDetect,I);

figure,

imshow(I); hold on

for i = 1:size(BB,1)

    rectangle('Position',BB(i,:),'LineWidth',5,'LineStyle','-','EdgeColor','r');

end

title('Face Detection');

hold off;

The step(FDetect,I) call returns bounding box values in the form [x, y, width, height], one row per object of interest.

BB =

    52    38    73    73

   379    84    71    71

   198    57    72    72

NOSE DETECTION:

%To detect Nose

NoseDetect = vision.CascadeObjectDetector('Nose','MergeThreshold',16);

BB=step(NoseDetect,I);

figure,

imshow(I); hold on

for i = 1:size(BB,1)

    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','b');

end

title('Nose Detection');

hold off;

EXPLANATION:

To denote the object of interest as 'nose', the argument 'Nose' is passed.

vision.CascadeObjectDetector('Nose','MergeThreshold',16);

The default syntax for nose detection:

vision.CascadeObjectDetector('Nose');

Based on the input image, we can modify the default values of the parameters passed to vision.CascadeObjectDetector. Here, the default value of 'MergeThreshold' is 4.

When the default value of 'MergeThreshold' is used, the result is not correct: there is more than one detection on Hermione. To avoid multiple detections around a single object, the 'MergeThreshold' value can be increased.

MOUTH DETECTION:

%To detect Mouth

MouthDetect = vision.CascadeObjectDetector('Mouth','MergeThreshold',16);

BB=step(MouthDetect,I);

figure,

imshow(I); hold on

for i = 1:size(BB,1)

 rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r');

end

title('Mouth Detection');

hold off;

EYE DETECTION:

%To detect Eyes

EyeDetect = vision.CascadeObjectDetector('EyePairBig');

%Read the input Image

I = imread('harry_potter.jpg');

BB=step(EyeDetect,I);

figure,imshow(I);

rectangle('Position',BB,'LineWidth',4,'LineStyle','-','EdgeColor','b');

title('Eyes Detection');

Eyes=imcrop(I,BB);

figure,imshow(Eyes);

I will discuss more about object detection, and how to train detectors to identify objects of interest, in my upcoming posts. Keep reading for updates.