Exadel CompreFace: Face Recognition Architecture
Let’s look closer at the face recognition architecture of CompreFace. CompreFace consists of several servers. When you start working with CompreFace, you use a user-friendly UI: there you set up the environment, create face recognition services, and get the API keys. The UI is deployed on a Balancer server and uses the UI Admin Server as its backend. All the data is saved in our Database, and once setup is complete, you can start using the API keys for face recognition. To provide face recognition, CompreFace has API Servers and Deep Learning Servers. The Deep Learning Servers use neural networks to calculate embeddings, while the API Servers handle classification. Both are scalable, so you can run several instances to improve performance. On top of all that sits the Balancer, which routes all your requests. In our default configuration, we use nginx as the Balancer, but you can use any balancer server, which may be especially useful if you run CompreFace on Kubernetes.
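To make that topology concrete, here is a minimal, illustrative docker-compose sketch of the servers described above. The service and image names are placeholders, not the exact ones shipped with CompreFace; see the CompreFace repository for the real compose file.

```yaml
# Illustrative topology only -- service and image names are placeholders.
version: "3.8"
services:
  balancer:        # nginx in the default configuration; routes UI and API traffic
    image: nginx
    ports: ["8000:80"]
  admin:           # UI Admin Server: backend for the management UI
    image: compreface-admin    # placeholder image name
    depends_on: [db]
  api:             # API Server: classification; run several replicas to scale
    image: compreface-api      # placeholder image name
    depends_on: [db]
  core:            # Deep Learning Server: calculates embeddings; also scalable
    image: compreface-core     # placeholder image name
  db:              # the Database that stores all CompreFace data
    image: postgres
```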
How Does Facial Recognition Work with CompreFace?
CompreFace is meant to be part of your facial recognition project. To start development, you will need some video or photo capturing hardware. If you have a video stream, you will need to split it into frames and send them to CompreFace. All integration is done through the CompreFace REST API. Keep in mind that CompreFace returns recognition results; you still need to build your own business logic on top of them. In short, facial recognition with CompreFace works by integrating your application with it over the REST API.
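As a rough sketch of that integration, the snippet below posts already-extracted video frames to a CompreFace recognition service over its REST API. The host, port, and API key are assumptions you must replace with your own deployment values, and frame extraction is assumed to have happened beforehand (for example with ffmpeg); this is an illustration, not CompreFace's official client.

```python
from pathlib import Path

import requests  # third-party HTTP client: pip install requests

# Assumed deployment values -- replace with your CompreFace host and the
# API key of the Face Recognition service you created in the UI.
COMPREFACE_URL = "http://localhost:8000/api/v1/recognition/recognize"
API_KEY = "00000000-0000-0000-0000-000000000000"  # placeholder


def recognize_headers(api_key: str) -> dict:
    """CompreFace authenticates each request with an x-api-key header."""
    return {"x-api-key": api_key}


def recognize_frame(jpeg_path: Path) -> dict:
    """POST one extracted frame to the recognition service; return its JSON result."""
    with jpeg_path.open("rb") as f:
        resp = requests.post(
            COMPREFACE_URL,
            headers=recognize_headers(API_KEY),
            files={"file": (jpeg_path.name, f, "image/jpeg")},
        )
    resp.raise_for_status()
    return resp.json()


def process_frames(frames_dir: str) -> None:
    # Frames extracted beforehand, e.g.:
    #   ffmpeg -i input.mp4 -vf fps=1 frames/%04d.jpg
    for frame in sorted(Path(frames_dir).glob("*.jpg")):
        result = recognize_frame(frame)
        # Your business logic runs on the result: match subjects, raise alerts, etc.
        print(frame.name, result)
```

The sampling rate matters in practice: sending every frame of a 30 fps stream is rarely necessary, and one frame per second is often enough for recognition scenarios.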
What Are the Technologies Behind Facial Recognition with CompreFace?
As mentioned above, we have different servers and use different technologies to run them. We use Python and Java on our backend servers, together with a variety of machine learning libraries such as MXNet and TensorFlow. For the UI, we use Angular and NgRx. We store everything in PostgreSQL and run the solution in Docker.
To complement these technologies, we use several algorithms. When we started creating CompreFace, we began with FaceNet (“FaceNet: A Unified Embedding for Face Recognition and Clustering”), a very popular algorithm with open-source implementations. We follow the same ideas as FaceNet: we calculate embeddings, classify them, and measure distances between them. At first, we used MTCNN (Multi-task Cascaded Convolutional Networks) for joint face detection and alignment.
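As an illustration of the embedding-and-distance idea (a minimal sketch, not CompreFace's actual code), classification can be as simple as finding the stored embedding nearest to a new one:

```python
import math


def euclidean(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two face embeddings of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def closest_subject(embedding: list[float],
                    known: dict[str, list[float]]) -> tuple[str, float]:
    """Return the known subject whose stored embedding is nearest, plus the distance.

    In practice you would also apply a distance threshold: if even the best
    match is too far away, the face is treated as unknown.
    """
    name = min(known, key=lambda n: euclidean(embedding, known[n]))
    return name, euclidean(embedding, known[name])


# Toy 2-D "embeddings"; real models produce vectors with hundreds of dimensions.
gallery = {"alice": [0.9, 0.1], "bob": [0.1, 0.9]}
```

Calling `closest_subject([0.8, 0.2], gallery)` matches "alice", since her stored vector is closest to the query.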
But then we decided to adopt more state-of-the-art technology. We found a face recognition library that uses RetinaFace (“RetinaFace: Single-Stage Dense Face Localisation in the Wild”) for face detection and Sub-center ArcFace (“Sub-center ArcFace: Boosting Face Recognition by Large-Scale Noisy Web Faces”) for face recognition.