CompreFace 1.2 Release: What's New and Improved

Exadel AI Team · Tech Insights · September 14, 2023 · 13 min read

We are excited to announce the release of CompreFace 1.2, which offers a number of optimizations and new features to further improve the reliability, flexibility, and performance of face recognition systems built with CompreFace. In this post, we will highlight some of the key updates in this new version.

Support for the new generation of Nvidia GPUs

We have updated CUDA to 11.8, which supports all GPUs with Compute Capability 3.5 to 9.0. This adds support for the new generation of Nvidia GPUs based on the Ada Lovelace and Hopper microarchitectures.

Here is the list of the latest GPUs supported by CUDA 11.8:

  1. Desktop/Laptop: GeForce RTX 4090, RTX 4080, RTX 4070 Ti, RTX 4070, RTX 4060 Ti, RTX 4060
  2. Professional: RTX 6000 Ada, RTX 4000 SFF
  3. Server: L4, L40, H100

To see the list of GPUs supported by older CompreFace versions, visit our previous blog post.

Added support for the Pose Plugin in the UI

In CompreFace 1.1, we added the Pose Plugin, which allows you to detect the pose of a face in an image. In CompreFace 1.2, we have added support for the Pose Plugin to the UI, making it easier to test.

To see the result, upload a photo in any of the services and click the Pose Plugin icon. You will see yaw, pitch, and roll vectors on each face. These vectors indicate the rotation of the face in three dimensions.

Added ability to skip the detection step in the recognize faces endpoint

Face recognition consists of two steps: finding faces in the image and then recognizing each found face. Both steps use neural networks and are computationally heavy. Sometimes it makes sense to skip the face detection step to optimize the system. For example, if your security camera already performs face detection, you can skip the detection step in CompreFace. This can save time and resources, especially if you are processing a large number of images or videos.

Another practical example is an attendance system at the entrance of a building or facility. The camera points at the door, and you want to track employees who enter the facility. It takes several seconds for an employee to cross the threshold, during which time you gather hundreds of frames of the person. Recognizing each frame would require considerable computational resources. Moreover, trying to recognize bad-quality frames may hurt accuracy. To increase the accuracy and reliability of the system, it makes sense to recognize only good-quality frames in which the person is close to the camera and looking straight at it.

Here is how you can optimize the system:

  1. Use a face detection service instead of the face recognition service on the frames. Enable the Pose Plugin by setting the `face_plugins=pose` parameter.
  2. Find the best frame by checking:
    1. Whether the person is near the camera, using the area of the bounding box from the `box` response field
    2. Whether the person is looking at the camera, using the `pose` response field
  3. Use the `box` response field to crop the face from the best frame
  4. Send it to the recognition endpoint with the `detect_faces=false` parameter
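The frame-selection logic in steps 2 and 3 can be sketched in Python. This is a minimal sketch: the `box` and `pose` field names follow the CompreFace detection response, while the yaw and pitch thresholds are arbitrary values you would tune for your camera placement.

```python
def frame_score(detection, yaw_limit=15.0, pitch_limit=15.0):
    """Score one detection: bigger, camera-facing faces score higher.

    `detection` is assumed to follow the CompreFace detection response:
    {"box": {"x_min": ..., "y_min": ..., "x_max": ..., "y_max": ...},
     "pose": {"yaw": ..., "pitch": ..., "roll": ...}}
    """
    pose = detection["pose"]
    # Discard frames where the face is turned too far from the camera.
    if abs(pose["yaw"]) > yaw_limit or abs(pose["pitch"]) > pitch_limit:
        return 0
    box = detection["box"]
    # Use the bounding-box area as a proxy for how close the person is.
    return (box["x_max"] - box["x_min"]) * (box["y_max"] - box["y_min"])


def pick_best_frame(frames):
    """Return the index of the best-scoring frame, or None if no frame
    has a usable, camera-facing face."""
    best_idx, best_score = None, 0
    for idx, detection in enumerate(frames):
        score = frame_score(detection)
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx
```

Only the winning frame then needs to be cropped and sent to the recognition endpoint with `detect_faces=false`.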

Added ability to send embeddings instead of an image for recognition

In CompreFace 1.2, we have added a new feature that allows you to skip both neural network executions in the face recognition system. This is done by reusing the embedding returned by the Calculator plugin in subsequent face recognition requests.

Here are two examples of how this can be used:

Example 1. You have two face collections: one with employees and one with guests. You want to first try to recognize the person among employees, and then among guests. Here is how you can optimize this process:

  1. Use the “employees” face recognition service with the `face_plugins=calculator` parameter.
  2. Use the `embedding` field from the response body to send the request to the new REST endpoint of the “guests” face recognition service.
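A minimal Python sketch of step 2, using only the standard library. The `embedding` field is only present when the first request was made with `face_plugins=calculator`; the embeddings endpoint path, the `localhost` URL, and the API key are assumptions to verify against your CompreFace version and setup.

```python
import json
import urllib.request

COMPREFACE_URL = "http://localhost:8000"  # assumption: local instance
GUESTS_KEY = "guests-service-api-key"     # hypothetical API key


def extract_embedding(response_body):
    """Pull the first face's embedding out of a recognition response.
    Requires the request to have been made with face_plugins=calculator."""
    return response_body["result"][0]["embedding"]


def recognize_by_embedding(embedding, api_key=GUESTS_KEY):
    """Send an already-computed embedding to the embeddings endpoint,
    skipping both detection and embedding calculation on the server.
    The endpoint path is taken from the CompreFace 1.2 REST docs;
    verify it against your version."""
    req = urllib.request.Request(
        f"{COMPREFACE_URL}/api/v1/recognition/embeddings/recognize",
        data=json.dumps({"embeddings": [embedding]}).encode(),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

If the "employees" service returns no matching subjects, you pass the embedding from its response straight to the "guests" service this way, so the image is processed by the neural networks only once.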

Example 2. You have several facilities far away from each other. It is better to process video streams locally in each facility to optimize network traffic. You install a CompreFace instance in each facility that is responsible only for calculating embeddings. You also have one CompreFace instance on the main server that is responsible for recognizing people. In this case, you send only embeddings to the main server, which is much lighter than a video stream. The main server does not run neural networks at all and can handle a much larger throughput. Here is how you can optimize this process:

  1. Send a request to the facility CompreFace instance face detection service with the `face_plugins=calculator` parameter.
  2. Use the `embedding` field from the response body to send the request to the new REST endpoint of the main CompreFace instance face recognition service. The new REST endpoint can process several embeddings in one request, so you can combine several embeddings into one request to optimize it even more.
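Since the endpoint accepts several embeddings per request, step 2 can batch the embeddings computed at each facility before sending them to the main server. A small helper sketch (the batch size here is an arbitrary assumption, not a documented CompreFace limit):

```python
def build_batch_payloads(embeddings, max_batch=50):
    """Split locally computed embeddings into request bodies for the
    main server's embeddings endpoint. The 50-embedding cap is an
    arbitrary choice, not a documented CompreFace limit."""
    return [
        {"embeddings": embeddings[i:i + max_batch]}
        for i in range(0, len(embeddings), max_batch)
    ]
```

Each resulting payload is one JSON body for the main instance's recognition endpoint, so a facility can ship hundreds of embeddings in a handful of lightweight requests instead of streaming video.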

Added the `max_detect_size` option to the .env file

This feature is also intended to optimize the face recognition system in many cases. Face detection is a compute-heavy operation, especially on large images, but it can work well enough with low-quality pictures. However, face recognition works better on good-quality pictures.

To understand the problem better, let’s take an example of an automated check-in face recognition system:

We install a tablet on the wall and ask guests to come closer so the system can recognize them. Assume the tablet has a high-resolution camera that takes pictures at 1600×1200 pixels. We start recognition only when the guest's face fills almost the whole frame, to increase accuracy. Then we take a high-resolution photo and send it to CompreFace for recognition. Face detection can find the face even on a low-resolution image, e.g., 320×240, but we should preserve the full quality for face recognition.

To solve this problem, we introduced a new option, `max_detect_size`, in the .env file. In our example, you can set it to 320. CompreFace then downscales the image for detection and crops the face from the original image, ensuring that face recognition uses the best-quality input.

The default value for this option in CompreFace 1.2 is 640, and it was hard-coded to the same value before the 1.2 version.

So, here is a simple rule for choosing this value:

  1. Decrease this value if you expect the person to come close to the camera.
  2. Set it to the original image size if you need to recognize people from far away.
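For the check-in example above, the relevant .env line might look like this (320 matches the example; pick the value according to the rule above):

```
# Detection runs on a copy downscaled to this size (pixels);
# recognition still crops the face from the original image.
max_detect_size=320
```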

User password recovery

CompreFace has a comprehensive user access system that makes it more secure. You can define which users have access to which applications. However, some users have encountered the issue of forgetting their password. To address this issue, we implemented a user password recovery functionality.

CompreFace is a self-hosted solution that can work without an internet connection. All data is stored locally, including user logins. This is why password recovery functionality will only work if you connect your email server to CompreFace.

To do this, you need to set five variables in the .env file:

`enable_email_server`: Set this to true to enable email server integration.
`email_host`: The SMTP hostname of your email server.
`email_username`: The username of your email account.
`email_password`: The password of your email account. This may differ per provider; for Google, for example, you must specify an app password here instead of your account password.
`email_from`: The email address that will be used to send password recovery emails.

Once you have set these variables, you need to restart CompreFace. Then you will be able to reset your password by clicking on the “Forgot Password” link on the login page.
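Putting it together, a hypothetical .env fragment might look like this (all values are placeholders for your own mail server and account):

```
enable_email_server=true
email_host=smtp.example.com
email_username=compreface@example.com
email_password=your-app-password
email_from=compreface@example.com
```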

Added a status page on CompreFace startup

CompreFace simplifies implementing face recognition systems. However, it is still a complex system that takes time to start up, and it is important to wait until the whole system has started so you don't get errors. To make this clearer, we've added a status page that users see when they open the CompreFace UI.

Another benefit of this feature: if for some reason the server doesn't start up, the status page makes it easier to find which part of the system failed and why.

Updated result face sorting

Previously, when an image contained several faces, we returned the list of found faces sorted by the probability that each detection was a face. However, in many cases, it makes sense to recognize only the biggest face in the image.

In CompreFace 1.2, we have updated the way we sort faces in the results: faces are now sorted by size. This means you can safely set the `limit` parameter to 1 and be sure that we recognize only the biggest face, which reduces the server load.
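With the new sorting, handling the response becomes trivial: send the recognition request with the query parameter `limit=1` and read the first (and only) result. A minimal sketch of the response handling, assuming the standard CompreFace recognition response shape:

```python
def biggest_face_subjects(response_body):
    """With CompreFace 1.2 sorting (biggest face first) and limit=1,
    result[0] is the largest face in the image. Returns the matched
    subject names for that face, or [] if nothing was found."""
    result = response_body.get("result", [])
    if not result:
        return []
    return [match["subject"] for match in result[0]["subjects"]]
```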

Model loads during container startup

Previously, you might have noticed that the first few requests to CompreFace were much slower than usual. This is because the model was loaded during the first request. We have updated this mechanism so that the model is now loaded when CompreFace starts up, which means that even the first request will be fast.

Published CompreFace to Azure Marketplace

We are happy to announce that CompreFace is available on Azure Marketplace. This means that users can now install CompreFace on Azure with just a few clicks.

Azure Marketplace is a curated catalog of software solutions that have been tested and certified to run on Azure. This makes it easy for users to find and deploy the software they need without having to worry about compatibility or configuration.

Performance optimization and memory leak fixes

Previously, you might have encountered performance problems and memory leaks when sending images of different sizes. In production, we mostly expect users to send images of the same size, since frames from a video stream or a fixed camera have a constant resolution. The problem affected the default and all custom builds, and was caused by a bug in the face detection libraries we use. After the fix, CompreFace 1.2 should work up to 4 times faster than version 1.1 in such cases.

We’re excited about these changes and hope that you are, too!

Visit this page to find out more about CompreFace.
