A guide to facial expression analysis using Py-FEAT


Facial expression analysis is the automated detection, collection, and analysis of facial muscle movements that reflect a person's mental state. In this article we'll discuss this technique along with Py-FEAT, a Python-based toolbox that helps detect, preprocess, analyze, and visualize facial expression data. Below are the main points that we will cover.

Contents

  1. Analysis of facial expressions
  2. How does Py-FEAT perform the analysis?
  3. Py-FEAT implementation

Let’s start by understanding facial expression analysis.

Analysis of facial expressions

A facial expression is made up of one or more movements or postures of the muscles under the skin of the face. These movements, according to a controversial set of ideas, communicate an individual’s emotional state to observers. Facial expressions are an example of nonverbal communication. They are the most widespread means by which humans exchange social information, but they are also found in most other mammals and some other species.

Facial expressions can divulge information about an individual’s inner mental state and provide non-verbal channels for interpersonal and interspecies communication. One of the most challenging aspects of studying them has been reaching consensus on how to represent and measure them effectively. The Facial Action Coding System (FACS) is one of the most widely used techniques: it measures the intensity of movements of facial muscle groupings called action units (AUs).
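For intuition, here are a few well-known FACS action units and the muscle movements they describe, written as a plain Python mapping. This is illustrative background only, not a Py-Feat data structure:

# a few standard FACS action units (illustrative only)
ACTION_UNITS = {
    "AU1": "inner brow raiser",
    "AU2": "outer brow raiser",
    "AU4": "brow lowerer",
    "AU6": "cheek raiser",
    "AU12": "lip corner puller",
    "AU15": "lip corner depressor",
}

# a coder (human or model) rates the intensity of each AU; e.g. a Duchenne
# smile combines cheek raising (AU6) with a lip corner pull (AU12)
for code, movement in ACTION_UNITS.items():
    print(f"{code}: {movement}")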

Automated methods based on computer vision techniques have been developed as a way to extract representations of facial expressions from images, videos, and depth cameras, both inside and outside the laboratory. Participants can be freed from cumbersome wires and engage in tasks such as watching a movie or casually conversing.

Besides AUs, computer vision approaches have introduced alternative spaces for representing facial expressions, such as facial landmarks or lower-dimensional latent representations. These techniques can predict the intensity of emotions and other affective states such as pain, distinguish genuine from posed expressions, detect signs of depression, infer traits such as personality or political leaning, and anticipate the development of interpersonal relationships.

How does Py-FEAT perform the analysis?

The Python Facial Expression Analysis Toolbox (Py-Feat) is free, open-source software for analyzing facial expression data. Like OpenFace, it provides tools for extracting facial features, but it also includes modules for preprocessing, analyzing, and visualizing facial expression data (see the pipeline in the figure below). Py-Feat is intended to serve different types of users: it helps computer vision researchers share their state-of-the-art models with a wider audience and quickly benchmark them against other models.

Face analysis begins with capturing face photographs or videos using a recording device such as a webcam, camcorder, head-mounted camera, or 360° camera. Once a face is recorded, Py-Feat can detect facial attributes such as facial landmarks, action units, and emotions, and the results can be compared using image overlays and bar graphs.

Additional features can be extracted from the detection data, such as histograms of oriented gradients (HOG) or multi-wavelet decompositions. The data can then be analyzed within the toolkit using statistical methods such as t-tests, regressions, and intersubject correlations.
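Since a Fex result behaves like a pandas DataFrame, the detection data can also be handed to standard statistical tooling. Below is a minimal sketch, assuming hypothetical CSV exports of AU detections for two groups of viewers; the file names and the AU column name are assumptions for illustration.

# minimal sketch: compare mean AU12 activation between two hypothetical groups
# (file names and the AU column name are assumptions for illustration)
import pandas as pd
from scipy.stats import ttest_ind

group_a = pd.read_csv("group_a_detections.csv")  # e.g. saved Fex output
group_b = pd.read_csv("group_b_detections.csv")

t_stat, p_val = ttest_ind(group_a["AU12"], group_b["AU12"])
print(f"AU12 group difference: t = {t_stat:.2f}, p = {p_val:.3f}")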

Face images can be generated from models of action unit activations using visualization tools that display vector fields indicating landmark movements and heat maps of facial muscle activations.
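Py-Feat exposes this through a plotting helper. Below is a minimal sketch assuming the feat.plotting.plot_face interface described in the toolbox documentation, where a face is drawn from a vector of AU activations (20 values in the release used here); treat the exact vector length and AU ordering as version-dependent.

# sketch: draw a synthetic face from a vector of AU activations
# (interface per the Py-Feat docs; vector length/ordering may vary by version)
import numpy as np
from feat.plotting import plot_face

au_activations = np.zeros(20)
au_activations[11] = 3  # hypothetically drive one AU to intensity 3
plot_face(au=au_activations)
# vector-field overlays and muscle heat maps are available through the
# vectorfield and muscles arguments described in the documentation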

Py-Feat offers a Detector module for detecting facial expression features (faces, facial landmarks, AU activations, and emotional expressions) in face images and videos, as well as a Fex data class with methods for preprocessing, analyzing, and visualizing facial expression data. In the next section, we’ll see how to extract facial expression details from some movie scenes.

Py-FEAT implementation

Using the Detector class, we will try to detect emotions in various movie scenes in this section. This class takes models to:

  1. detect faces in an image or video frame,
  2. locate facial landmarks,
  3. detect activations of facial muscle action units, and
  4. detect expressions of basic emotions.

These models are modular in nature, allowing users to choose which algorithms to apply for each detection task based on their accuracy and speed requirements. Now let’s start by installing and importing dependencies.

# install Py-Feat and import the required libraries
!pip install py-feat

import os
from PIL import Image
import matplotlib.pyplot as plt

from feat import Detector

Set up the Detector class as shown below:

# define the models for each stage of the pipeline
face_model = "retinaface"      # face detection
landmark_model = "mobilenet"   # facial landmark localization
au_model = "rf"                # action unit detection
emotion_model = "resmasknet"   # emotion classification

detector = Detector(
    face_model=face_model,
    landmark_model=landmark_model,
    au_model=au_model,
    emotion_model=emotion_model,
)
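These choices are swappable per task. As a hedged illustration (the alternative model names below are assumptions drawn from older Py-Feat releases; check your installed version's documentation for the supported options):

# hypothetical alternative configuration; model names are assumptions and
# depend on the installed Py-Feat version (consult the Detector docs)
fast_detector = Detector(
    face_model="mtcnn",   # assumed alternative face detector
    landmark_model=landmark_model,
    au_model="svm",       # assumed SVM-based AU model
    emotion_model=emotion_model,
)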

Now load and visualize the image:

# load and visualize the image
test_image = os.path.join("/content/", "home_alone4.jpg")
fig, ax = plt.subplots()
im = Image.open(test_image)
ax.imshow(im)

We can now run inference on the image using the detector’s detect_image() method:

# get the prediction for a single image
image_prediction = detector.detect_image(test_image)

Through the returned object, we can access the action units the model detected as well as the emotions inferred by the detector.
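In the Py-Feat release used here these are exposed as accessor methods on the returned Fex object; newer releases expose them as properties instead, so adjust to your installed version:

# inspect the detections on the returned Fex object
# (.aus()/.emotions() are methods in older Py-Feat releases and
# properties (.aus/.emotions) in newer ones — adjust as needed)
print(image_prediction.aus())       # detected action unit activations
print(image_prediction.emotions())  # inferred emotion probabilities

# overlay the detections (face box, landmarks, AUs, emotions) on the image
image_prediction.plot_detections()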

In the same way, the method supports inference not only on a single image but also on multiple images and on video files, as sketched below. Examples are included in the notebook.
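Assuming the same Detector interface, a minimal sketch of batch and video inference looks like this (the file names are placeholders):

# batch inference over several images (placeholder file names)
frames = ["scene1.jpg", "scene2.jpg", "scene3.jpg"]
batch_prediction = detector.detect_image(frames)

# video inference runs the same pipeline frame by frame
video_prediction = detector.detect_video("movie_clip.mp4")
print(video_prediction.emotions().head())  # see the version note above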

Final words

This article has explored what facial expression analysis is and how it can be used in a variety of applications. We looked at Py-Feat, an open-source, full-stack Python framework for facial expression analysis covering detection, preprocessing, analysis, and visualization. The package also makes it easy to plug in new algorithms for identifying faces, facial landmarks, action units, and emotions.
