Robot Face Recognition

Robot face recognition is fast becoming part and parcel of modern robotics...

Some years ago, Manchester Airport, like other airports around the world, had special machines installed in its terminals, something many quite confidently considered to be the future of immigration.

Apparently, airport executives had decided that passport checks by human employees were old-fashioned and inefficient, and that what airports really needed was intricate equipment and sophisticated software to do the same job. Enter facial recognition, a technology that keeps getting more advanced by the year.

Humans have always had the innate ability to recognize and distinguish between faces, and thanks to modern science, computers have now proven capable of the same feat.

Scientists began working on computer-based face recognition in the mid-1960s. In those early systems, operators had to locate features on photographs by hand; the computer then calculated distances and ratios to a common reference point and compared them against reference data. Over the last ten years or so, face recognition has become a popular area of research in computer vision, as well as one of the most successful applications of image analysis and understanding.

A robot face recognition system is a computer application that automatically identifies or verifies a person from a digital image or a video frame. This is usually achieved by comparing selected facial features from the image with those stored in a facial database. Typically used in security systems alongside other biometrics such as fingerprint or iris recognition, facial recognition software works by measuring the various features of a face.
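To make the distinction between identification and verification concrete, here is a minimal sketch in plain Python. The faceprint vectors, names, and threshold are all invented for illustration; a real system would produce the vectors with a feature extractor.

```python
import math

# Hypothetical "faceprints": each face reduced to a short vector of numeric
# measurements. The names, values, and threshold are invented for illustration.
database = {
    "alice": [62.0, 38.5, 27.1, 91.3],
    "bob":   [58.4, 41.0, 25.7, 88.9],
}

def distance(a, b):
    """Euclidean distance between two faceprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe, claimed_name, threshold=5.0):
    """Verification (1:1): does the probe match the claimed identity?"""
    return distance(probe, database[claimed_name]) < threshold

def identify(probe, threshold=5.0):
    """Identification (1:N): return the closest identity, or None."""
    name, best = min(((n, distance(probe, fp)) for n, fp in database.items()),
                     key=lambda item: item[1])
    return name if best < threshold else None

probe = [61.5, 38.9, 27.0, 90.8]
print(verify(probe, "alice"))  # True: close enough to Alice's stored faceprint
print(identify(probe))         # "alice"
```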

We don’t often notice it, but faces have numerous distinguishable landmarks. Each of our faces has different peaks and valleys that make up our specific facial features. These landmarks are known as nodal points, and every face has roughly 80 of them. Some nodal points measurable by face recognition software are:

  • Distance between the eyes
  • Width of the nose
  • Depth of the eye sockets
  • The shape of the cheekbones
  • The length of the jaw line

Measuring these nodal points produces a numerical code called a faceprint, which represents the face in a database. Older facial recognition software relied on comparing one 2D image against another 2D image from the database, which meant that for accurate results the captured face needed to be looking almost directly at the camera, with little variation in lighting or facial expression from the stored image.
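As a rough illustration of how such measurements could be turned into a faceprint, the sketch below computes a few of the nodal-point distances listed above from hypothetical 2D landmark coordinates. The coordinates are made up for the example; a real system would detect the landmarks automatically and use many more measurements.

```python
import math

# Hypothetical landmark coordinates in pixels, roughly where a detector
# might place them on a frontal face image. Values are invented.
landmarks = {
    "left_eye":   (120, 95),
    "right_eye":  (180, 96),
    "nose_left":  (138, 140),
    "nose_right": (162, 141),
    "chin":       (150, 210),
    "jaw_left":   (100, 170),
    "jaw_right":  (200, 171),
}

def dist(p, q):
    """Straight-line distance between two landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# A tiny "faceprint": a handful of the nodal-point measurements listed above.
faceprint = [
    dist(landmarks["left_eye"], landmarks["right_eye"]),    # eye distance
    dist(landmarks["nose_left"], landmarks["nose_right"]),  # nose width
    dist(landmarks["jaw_left"], landmarks["jaw_right"]),    # jaw width
    dist(landmarks["left_eye"], landmarks["chin"]),         # eye-to-chin length
]
print(faceprint)
```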

This, obviously, created quite a problem, as most images are not captured in a controlled environment. The Tampa Police Department, for example, installed police cameras equipped with facial recognition technology in an attempt to cut down on crime. But because the cameras could not get clear enough shots to identify anyone, the system was quickly scrapped as ineffective.

Now, however, most facial recognition software uses a 3D model, which is claimed to provide more accuracy because it captures a real-time 3D image of a person's facial surface. More attention is given to rigid tissue and bone structure, such as the curves of the eye socket, nose and chin, to identify the subject. Because these regions are distinctive and change little over time, identification is far less affected by differences in lighting and face angle.

For hobbyists, there are also several robot face recognition software packages available. A popular technique is the Haar cascade, reportedly the easiest and most widespread hobbyist method for spotting faces. It works somewhat like a spam filter: you train it on example pictures of faces (and of non-faces), and it gradually learns what a face looks like.
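As a rough example, a minimal Haar-cascade face detector can be put together in a few lines with OpenCV's Python bindings. The cascade file ships with OpenCV; the image file names here are placeholders.

```python
import cv2

# Load the frontal-face Haar cascade that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# Read an image and convert it to grayscale; Haar cascades work on intensity.
image = cv2.imread("people.jpg")  # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides the cascade over the image at several scales.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a rectangle around each detected face and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("people_detected.jpg", image)
print("Found %d face(s)" % len(faces))
```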

Then there is OpenCV (Open Source Computer Vision), a library of programming functions for real-time computer vision. It is free for both academic and commercial use, and was originally written in C but has a full C++ interface. The library contains more than 2,000 optimized algorithms and is used around the world, with over 2 million downloads and 40,000 people in its user group.
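OpenCV's contrib modules also bundle simple face recognizers. The sketch below, assuming the opencv-contrib-python package and a handful of pre-cropped, equally sized grayscale face images (the file names are placeholders), trains an LBPH recognizer on labeled examples and then predicts the identity of a new crop.

```python
import cv2
import numpy as np

# Labeled training crops; paths and labels are placeholders for illustration.
training = [("alice_1.png", 0), ("alice_2.png", 0),
            ("bob_1.png", 1), ("bob_2.png", 1)]
names = {0: "alice", 1: "bob"}

images = [cv2.imread(path, cv2.IMREAD_GRAYSCALE) for path, _ in training]
labels = np.array([label for _, label in training])

# LBPH (Local Binary Patterns Histograms) recognizer from opencv-contrib.
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(images, labels)

# Predict on a new face crop; lower confidence values mean a closer match.
probe = cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE)
label, confidence = recognizer.predict(probe)
print("Best match: %s (distance %.1f)" % (names[label], confidence))
```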

Some well-known developers of robot face recognition technology are Robin Hewitt and the makers of the LEAF A.I. robot. Hewitt is in the midst of developing a face-learning method for social robots, in which the robot teaches itself to recognize users' faces. In his work, a user initiates the learning process by showing the robot one example image of his or her face and then marking where the eyes and nose are. The robot learns an initial representation of the face from these inputs, which is good enough for the robot to recognize that user fairly often and mostly avoid false detections.

To work better, the robot must keep the user interested so that he or she will keep interacting with it. As this interaction continues, the robot searches for the user's face in its video stream. Once it detects that face, it follows it with a face tracker. During this process, the robot captures and stores short video segments of the user. Later, perhaps while the robot is sleeping at its charging station, it analyzes these video segments and teaches itself to recognize that user better. From the video clips, it can gather examples of the facial expressions and head positions that are typical of this user, thus teaching itself to recognize the user more reliably.
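The following is a loose sketch of that general idea, not Hewitt's actual implementation: during interaction, the robot detects the user's face in each camera frame (here with the stock OpenCV Haar cascade) and saves the crops to disk so a recognizer can be retrained on them later, offline. The camera index and output file names are assumptions.

```python
import cv2

# Generic sketch: collect face examples during interaction for later training.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)   # default camera; the index is an assumption
saved = 0

while saved < 50:               # collect up to 50 example crops
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    for (x, y, w, h) in faces[:1]:          # keep only the first face found
        crop = gray[y:y + h, x:x + w]
        cv2.imwrite("user_example_%03d.png" % saved, crop)
        saved += 1

capture.release()
print("Stored %d face examples for later offline training" % saved)
```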

The LEAF A.I. robot's vision system, on the other hand, is already capable of image recognition, including motion detection, intensity change detection, and color blob detection (largely thanks to RoboRealm and OpenCV). It seems very likely that robots will soon be able to differentiate and identify people with very few errors. I wonder whether robots, too, will one day learn to identify and differentiate themselves. Then robot face recognition will have come full circle.

Robot Face Recognition with OpenCV