Enhancing 2D Face Recognition Systems
Dr. Naoufel Werghi, Associate Professor of Computer Engineering, on How His Research Leverages Deep Learning to Develop Robust 2D Face Recognition Systems
Face recognition systems are ubiquitous. We use them for security in places like airports, at borders and in venues that manage large volumes of people, like stadiums and theaters. They are also integrated into smartphones as biometric locks, are used to track lost children across areas, and are part of the next generation of targeted marketing, where they scan your face to determine your age and gender to select the appropriate digital ads to show you. The more reliable, accurate and speedy facial recognition systems become, the more ways they can be integrated into sectors to provide enhanced security and convenience.
One of the most common types of face recognition is two-dimensional (2D) face recognition, the type we often see in airports. 2D face recognition systems can use computer vision and photometric methods to scan through available photographs of a person's face to 'learn' how to identify them when they appear before the system's cameras. But the cutting edge of this technology has been struggling to meet our growing needs and expectations, particularly when a face is only partially visible, seen at a different angle, under different lighting, with different facial expressions, or even disguised with makeup.
While engineers have been able to develop algorithms that can identify faces in these scenarios under constrained conditions, in real-world use they have often failed to manage the range of changing parameters. They particularly struggle to recognize faces that are not front-facing and centered, and the more extreme the angle and pose, the more challenging it is for the system.
That is why I have been working with students and faculty at Khalifa University and abroad to develop an unconstrained face identification template that can handle all of the challenges of 2D facial recognition in real-life scenarios. We developed a first prototype that recognizes faces using 3D facial images. This modality relies on facial shape as its main source of information and is therefore less sensitive to variations in pose and lighting conditions. Our system has been validated on two public datasets containing more than six thousand images, and reached an accuracy above 95% even in the presence of facial expressions.
Building on advances in deep learning, we have developed another system that automatically learns facial image registration, which transforms the face pose in an image from a lateral view to a frontal view. It also learns a face signature as part of an end-to-end trainable Convolutional Neural Network.
The first part of the network is the registration module, which learns from 2.6 million images of 2,622 faces of YouTube celebrities to 'understand' how faces can look different from different angles, in different lighting, with different types of makeup, and when wearing different expressions. That provides the system with a baseline understanding that is then enhanced by the second part, the representation module, which learns a meaningful feature encoding of input face images. Images of a targeted face can be uploaded, which the system then 'learns' and can seek out using the lessons applied from the registration module.
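The registration-then-representation pipeline can be illustrated in code. The sketch below is a minimal illustration, assuming PyTorch; the registration module is shown as a spatial-transformer-style component that learns a warp toward a frontal view, and the layer sizes and embedding dimension are illustrative choices, not the actual architecture described here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegistrationModule(nn.Module):
    """Learns an affine warp mapping a lateral-view face toward a frontal view
    (spatial-transformer-style component; layer sizes are illustrative)."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(3, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.fc = nn.Linear(10 * 12 * 12, 6)  # 6 affine parameters (64x64 input)
        # Initialize to the identity transform so training starts from "no warp"
        self.fc.weight.data.zero_()
        self.fc.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.fc(self.loc(x).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class RepresentationModule(nn.Module):
    """Encodes the registered face into a compact, unit-length signature."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(32, dim)

    def forward(self, x):
        return F.normalize(self.embed(self.features(x).flatten(1)), dim=1)

class FaceNet(nn.Module):
    """End-to-end trainable: registration feeds the representation module."""
    def __init__(self):
        super().__init__()
        self.register = RegistrationModule()
        self.represent = RepresentationModule()

    def forward(self, x):
        return self.represent(self.register(x))

net = FaceNet()
signature = net(torch.randn(1, 3, 64, 64))  # one 64x64 RGB face
print(signature.shape)  # torch.Size([1, 128])
```

Because both modules sit in one computation graph, gradients from the recognition loss flow back through the representation module into the registration module, so the warp is learned jointly with the signature rather than hand-engineered.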
The system we developed performed better than the existing state-of-the-art methods. We ran it through three different types of face image datasets: the IJB-A dataset, which contains 5,712 images and 2,085 videos of 500 subjects captured in real-life scenarios around the world; the COX dataset, which contains 4,000 uncontrolled low-resolution video sequences of 1,000 subjects walking in a gymnasium without any constraints on their facial expressions, lighting conditions, or head poses; and the YouTube Celebrities dataset of 1,910 low-resolution face videos of 47 celebrities downloaded from YouTube. We reached recognition accuracies of 96%, 90%, and 97%, respectively.
But the part of which I am proudest is having involved undergraduate students in face recognition research. From 2009 to today, seven face identification projects have been proposed and undertaken by student groups in the Senior Design Project and the Artificial Intelligence course in which I participate.
My most recent group of students – Mohamed Khalid Almansoori, Ali Alshkeili, Abdullah Alenezi, and Eissa Alromaithi – are currently working on a face identification system using a simple 2D camera that can authenticate an individual or detect a suspect. In the first mode, the user identifies themselves by entering a PIN code or swiping an ID card. The system captures the face image of the user, compares the input image with the reference image stored in the system, and decides whether or not the user corresponds to the identity they claim. This is the kind of authentication system currently used at Abu Dhabi Airports' passport check gates.
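The core decision in this first, one-to-one authentication mode is a comparison of the captured face against the stored reference. A minimal sketch of that step, assuming face signatures have already been extracted as feature vectors and using cosine similarity against a threshold (the signature dimension and threshold value here are illustrative, not the students' actual settings):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two signature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_signature, reference_signature, threshold=0.6):
    """1:1 authentication: does the captured face match the claimed identity?"""
    return cosine_similarity(probe_signature, reference_signature) >= threshold

# Toy example with random 128-D signatures standing in for real face encodings
rng = np.random.default_rng(0)
reference = rng.normal(size=128)                       # enrolled signature on file
genuine = reference + 0.05 * rng.normal(size=128)      # same person, slight variation
impostor = rng.normal(size=128)                        # unrelated person

print(verify(genuine, reference))   # True  (high similarity)
print(verify(impostor, reference))  # False (random vectors are nearly orthogonal)
```

The threshold trades off false accepts against false rejects, and in a deployed gate it would be tuned on validation data rather than fixed at a round number.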
In the second mode, the system detects faces in a scene and tries to find the face that corresponds to a targeted face. If the targeted individual is found, an alarm is triggered, signaling the presence of that suspect. The second mode is the more challenging, as the camera has to scan faces from various angles and in different lighting conditions. We recently featured this project at Dubai's annual Water, Energy, Technology, and Environment Exhibition 2018 (WETEX).
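The suspect-detection mode described above is a one-to-many search: every face detected in the scene is compared against the target's signature, and any sufficiently close match raises an alarm. A hypothetical sketch, with face detection and signature extraction abstracted away and the threshold chosen for illustration:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def scan_for_suspect(detected_signatures, target_signature, threshold=0.6):
    """1:N watchlist search: return indices of detected faces close enough
    to the target's signature to trigger an alarm."""
    return [i for i, sig in enumerate(detected_signatures)
            if cosine(sig, target_signature) >= threshold]

# Toy scene: four strangers plus the suspect (a slightly perturbed copy
# of the target signature, standing in for pose/lighting variation)
rng = np.random.default_rng(1)
target = rng.normal(size=128)
scene = [rng.normal(size=128) for _ in range(4)]
scene.insert(2, target + 0.05 * rng.normal(size=128))

alarms = scan_for_suspect(scene, target)
print(alarms)  # [2] -- the suspect is at index 2
```

Unlike the 1:1 mode, every frame multiplies the comparisons by the number of faces in view, which is one reason the unconstrained watchlist setting is the harder of the two.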
My research and the project led by my students both aim to enhance the UAE's expertise in the growing field of face recognition systems. The global facial recognition technology market is expected to exceed $9.6 billion by 2023, making it a valuable market in which to develop intellectual and human capital.
Dr. Naoufel Werghi is Associate Professor of Computer Engineering at the Khalifa University of Science and Technology.