This project's main goal is to provide a simple library for projecting information onto faces while retaining the facial structure.


face-projection

Teaser

Are you looking for a powerful, user-friendly tool to project information onto faces while maintaining their natural structure? Look no further – face-projection is here to simplify how you visualize facial data. The tool is open source, easy to use, and written in Python, allowing you to integrate it into your existing workflow. We try to keep the dependencies to a minimum so you can use it in your projects without worrying about compatibility issues. However, we do not guarantee perfect anatomical correctness of the results; we only try to keep the distortion to a minimum.

All people shown here in the repository examples were generated with StableDiffusion (Prompt: A professional portrait photo of a person directly looking <emotion> at the camera, white background, photoshop, Instagram)

Installation

The tool is available on PyPI.

pip install face-projection

Usage

The tool reduces the overhead of projecting information onto faces to a minimum. Load the face and the data, project, and you are done. You only need to ensure that the data lies inside the canonical face model (see electromyogram for an example). The head pose can be arbitrary, but the face must be visible.

import numpy as np
from PIL import Image

import face_projection as fp

image_face = np.asarray(Image.open("face.jpg").convert("RGB"))
image_data = np.asarray(Image.open("data.jpg").convert("RGB"))

warper = fp.Warper()
warped = warper.apply(image_face, image_data, beta=0.2)

We automatically detect the face in the image, compute the landmarks based on the Blaze model, and warp the data onto it. You can decide how much of the underlying face should be visible by adjusting the beta parameter.
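Conceptually, beta behaves like an alpha-blend weight in the warped region: beta=0 shows only the face, beta=1 only the projected data. The following NumPy-only sketch illustrates this blending idea; the `blend` helper is illustrative and is not part of the library's API.

```python
import numpy as np


def blend(face: np.ndarray, data: np.ndarray, beta: float) -> np.ndarray:
    """Alpha-blend projected data over the face image.

    beta=0.0 keeps only the face, beta=1.0 shows only the data.
    """
    return ((1.0 - beta) * face + beta * data).astype(np.uint8)


# Tiny synthetic example: a uniform "face" and uniform "data" patch.
face = np.full((2, 2, 3), 200, dtype=np.uint8)
data = np.full((2, 2, 3), 100, dtype=np.uint8)

out = blend(face, data, beta=0.2)  # 0.8 * 200 + 0.2 * 100 = 180 per channel
```

In the actual library the blend only happens inside the warped facial region; outside of it the original face image is left untouched.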

In examples/ you can find a more detailed example, which generates the teaser image.

Future Work

We have many ideas for future work, and we are also happy to hear your suggestions. The following to-dos are in no particular order but should give you an idea of what we are planning.

  • The user can provide a mask to turn off certain parts of the projection
  • Different default warpings based on the six default Ekman emotions
  • More face models
  • Custom face models
  • Upgrade to the newest MediaPipe version to avoid macOS build issues

Citation

If you use our work, please cite our paper:

Büchner, T., Sickert, S., Graßme, R., Anders, C., Guntinas-Lichius, O., Denzler, J. (2023). Using 2D and 3D Face Representations to Generate Comprehensive Facial Electromyography Intensity Maps. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2023. Lecture Notes in Computer Science, vol 14362. Springer, Cham. https://doi.org/10.1007/978-3-031-47966-3_11

or as a BibTeX entry:

@InProceedings{10.1007/978-3-031-47966-3_11,
    author="B{\"u}chner, Tim and Sickert, Sven and Gra{\ss}me, Roland and Anders, Christoph and Guntinas-Lichius, Orlando and Denzler, Joachim",
    title="Using 2D and 3D Face Representations to Generate Comprehensive Facial Electromyography Intensity Maps",
    booktitle="Advances in Visual Computing",
    year="2023",
    publisher="Springer Nature Switzerland",
    address="Cham",
    pages="136--147",
    isbn="978-3-031-47966-3"
}
