Carnegie Mellon University
November 02, 2016

Researchers create tool that can predict what you look like solely based on your eyes

When a criminal’s face is caught on camera, law enforcement has a huge advantage, which is an obvious reason many criminals wear masks that cover everything except their eyes. However, ongoing work in the CyLab Biometrics Center has shown that a person’s full face can be “hallucinated” based solely on the eye region.

“Law enforcement would love to have these types of tools to use in challenging crime cases,” says Marios Savvides, Director of the CyLab Biometrics Center and a professor of Electrical and Computer Engineering. “This would fill a huge technology gap in this area.”

Savvides and Felix Juefei Xu, a Ph.D. student in the Department of Electrical and Computer Engineering, authored a study on this topic that was just awarded “Best Student Paper” at the IEEE 8th International Conference on Biometrics: Theory, Applications and Systems.

The tool uses a machine learning algorithm that sweeps through millions of different face images and learns correlations between the eye region and the full face.

“When only the eye region is visible to us, the algorithm tries to recover the missing pixels that are not in the image,” says Xu. “The algorithm learns a mapping from the partially visible region and reconstructs a full-face lookalike.”
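The paper’s precise model is not described in this article, but the idea of learning such a mapping can be illustrated with a minimal sketch. Below, a simple ridge regression (an assumption, not the authors’ method) maps flattened eye-region pixels to flattened full-face pixels using scikit-learn; the helper names are hypothetical.

```python
# Minimal sketch, not the authors' method: learn a linear (ridge-regression)
# mapping from eye-region pixels to full-face pixels.
import numpy as np
from sklearn.linear_model import Ridge

def crop_eye_region(face, top=30, bottom=60):
    """Hypothetical crop: keep only the rows covering the eye region."""
    return face[top:bottom, :]

def train_eye_to_face_mapper(faces):
    """`faces` is assumed to be an array of aligned grayscale faces, shape (N, H, W)."""
    X = np.stack([crop_eye_region(f).ravel() for f in faces])  # visible eye pixels
    Y = np.stack([f.ravel() for f in faces])                   # full-face pixels
    mapper = Ridge(alpha=1.0)
    mapper.fit(X, Y)  # learns correlations between the eye region and the full face
    return mapper

def hallucinate_face(mapper, eye_region, face_shape):
    """Reconstruct a full-face lookalike from the eye region alone."""
    return mapper.predict(eye_region.ravel()[None, :]).reshape(face_shape)
```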

In the study, the researchers tested the algorithm by manually cropping only the eye region from a series of face images, reconstructing the full face, and then comparing the result with the original image.
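The article does not say which fidelity measure was used; as a sketch of that test loop, the snippet below scores reconstructions with PSNR (an assumed metric), where `reconstruct` and `crop_eyes` stand in for the trained model and the manual eye-region crop.

```python
# Sketch of the evaluation described above: crop the eyes, reconstruct the
# full face, and compare against the original. PSNR is an illustrative metric.
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two images of the same shape."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def evaluate(reconstruct, crop_eyes, test_faces):
    """`reconstruct` and `crop_eyes` are assumed callables, for illustration only."""
    scores = []
    for face in test_faces:
        recon = reconstruct(crop_eyes(face))   # rebuild the full face from eyes only
        scores.append(psnr(face, recon))       # compare with the original image
    return float(np.mean(scores))
```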

“The algorithm is able to reconstruct the full-face image at high fidelity – higher than ever before,” says Xu. “It is higher fidelity because of a new method of clustering.”

The “clustering” Xu mentions refers to the process by which the machine learning algorithm “learns.” In this algorithm, pixels from millions of images are converted into data points and clustered into groups based on similarity. In previous versions of the algorithm, however, these clusters were imperfect, with some data points differing substantially from the rest of their cluster. Xu’s new clustering technique results in fewer outliers in each cluster, leading to a more accurate image reconstruction.

“This improved clustering creates a better quality tool with less noise,” Xu says.
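The specific clustering technique is not detailed in this article; the sketch below illustrates the general idea with k-means plus a simple distance-based cut that discards the data points farthest from their cluster center. Both the algorithm choice and the cut-off are assumptions, not the paper’s method.

```python
# Illustrative sketch: cluster data points, then trim the outliers in each
# cluster so the remaining points are more similar to one another.
import numpy as np
from sklearn.cluster import KMeans

def cluster_with_outlier_trimming(points, n_clusters=50, keep_fraction=0.9):
    """Cluster `points` (shape (N, D)), then keep only the `keep_fraction`
    of points closest to their cluster center, dropping the rest as outliers."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(points)
    dists = np.linalg.norm(points - km.cluster_centers_[km.labels_], axis=1)
    kept = {}
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        order = idx[np.argsort(dists[idx])]                      # closest points first
        kept[c] = order[: max(1, int(keep_fraction * len(order)))]
    return km, kept
```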

In the future, Xu and Savvides aim to improve the tool’s robustness. Currently, the tool only works well when the faces in the image are directly facing the camera, and when the occluded part of the face is carefully identified.

“The next place we could go forward is in deep learning,” Xu says. “Deep learning is able to deal with more variations in the face. You don’t need perfectly aligned faces like our algorithm does.”
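As a rough illustration of that direction (not the researchers’ model), the sketch below shows a small convolutional encoder-decoder in PyTorch that takes a face-sized image with everything outside the eye region zeroed out and predicts the full face; the architecture and layer sizes are arbitrary assumptions.

```python
# Illustrative only: a tiny convolutional encoder-decoder for filling in a
# masked face from its visible eye region. Not the researchers' model.
import torch
import torch.nn as nn

class EyeToFaceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                      # compress the masked face
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                      # expand back to a full face
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked_face):
        # `masked_face`: face-sized tensor with non-eye pixels zeroed out
        return self.decoder(self.encoder(masked_face))
```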