Visage SDK

Developer(s): Visage Technologies AB
Stable release: 8.2b2.5242 / 6 April 2017
Type: Software development kit
Website: www.visagetechnologies.com

Visage SDK (distributed as visage|SDK) is a multi-platform software development kit (SDK) created by Visage Technologies AB. It allows software developers to build facial motion capture and eye tracking applications.

Technologies

Example of Visage SDK (Visage Technologies' main product) face tracking and analysis (gender, age and emotion recognition)

Face Track [1]

Face Track tracks 3D head pose, facial features and gaze for multiple faces in a live camera stream or a video file. It offers configurable tracking packages covering facial feature tracking, face and facial landmark detection, head tracking, and eye/gaze tracking.
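A minimal C++ sketch of how an application might consume this kind of tracker output is shown below. The identifiers used here (FaceTracker, TrackedFace, trackNextFrame, the "tracker.cfg" file name) are illustrative assumptions, not the SDK's documented API.

    // Sketch of a tracking loop. All identifiers (FaceTracker, TrackedFace,
    // trackNextFrame, "tracker.cfg") are illustrative assumptions, not the
    // documented visage SDK API.
    #include <cstdio>
    #include <vector>

    struct TrackedFace {
        float headRotation[3];        // 3D head pose: pitch, yaw, roll (radians)
        float headTranslation[3];     // head position relative to the camera
        float gazeDirection[2];       // horizontal and vertical gaze angles
        std::vector<float> landmarks; // 2D facial feature points (x, y interleaved)
    };

    class FaceTracker {
    public:
        explicit FaceTracker(const char* configFile) { (void)configFile; }
        // Grab the next frame from the camera (or video file) and return the
        // faces found in it; an empty vector means no face was detected.
        std::vector<TrackedFace> trackNextFrame() { return {}; }
    };

    int main() {
        FaceTracker tracker("tracker.cfg");   // configuration selects the tracking package
        for (int frame = 0; frame < 300; ++frame) {
            for (const TrackedFace& face : tracker.trackNextFrame()) {
                std::printf("head pose: pitch=%.2f yaw=%.2f roll=%.2f\n",
                            face.headRotation[0], face.headRotation[1], face.headRotation[2]);
            }
        }
        return 0;
    }

In a real integration the tracker would be fed camera frames and would report the pose, landmark and gaze data per face; the structure above only illustrates the shape of such output.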

Face Analysis [2]

Face Analysis uses machine learning algorithms to estimate gender, emotion and age. It is designed to work with Face Track, which finds and tracks faces in images or video, so that gender, emotion and age can then be estimated for a specified face.
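The pipeline described above (detect or track a face first, then classify it) can be sketched in C++ as follows; AnalysisResult, FaceAnalyser, analyseFace and the emotion categories are assumed names used only for illustration.

    // Sketch of combining tracking with analysis: a tracked face region is passed
    // to an analyser that estimates age, gender and emotion probabilities.
    // All names (AnalysisResult, FaceAnalyser, analyseFace) are illustrative.
    #include <array>
    #include <cstdio>

    struct AnalysisResult {
        float age = 0.0f;                     // estimated age in years
        float genderFemaleProbability = 0.0f; // 0..1, probability the face is female
        std::array<float, 6> emotions{};      // e.g. anger, disgust, fear, happiness, sadness, surprise
    };

    class FaceAnalyser {
    public:
        // In a real integration the input would be the image plus the face
        // location reported by the tracker; here it is stubbed with fixed values.
        AnalysisResult analyseFace(/* image + tracked face data */) {
            AnalysisResult r;
            r.age = 31.0f;
            r.genderFemaleProbability = 0.18f;
            r.emotions = {0.02f, 0.01f, 0.01f, 0.90f, 0.03f, 0.03f};
            return r;
        }
    };

    int main() {
        FaceAnalyser analyser;
        AnalysisResult r = analyser.analyseFace();
        std::printf("age=%.0f  P(female)=%.2f  P(happiness)=%.2f\n",
                    r.age, r.genderFemaleProbability, r.emotions[3]);
        return 0;
    }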

Face Recognition [3]

Face Recognition identifies or verifies a person from a digital image or video source by comparison against pre-stored facial data. Visage SDK's face recognition algorithms measure the similarity between faces and can recognize a person's identity from a frontal facial image by comparing it to pre-stored faces.
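The comparison step can be illustrated with a short C++ sketch: each face is reduced to a fixed-length descriptor vector, and identity is decided by comparing descriptor similarity against a threshold. How descriptors are extracted is SDK-specific and stubbed out here; the cosine-similarity measure, descriptor length and threshold value are assumptions for illustration, not the SDK's specified behaviour.

    // Sketch of face verification by descriptor similarity. Descriptor extraction
    // is SDK-specific and omitted; the cosine similarity and the 0.6 threshold
    // are illustrative assumptions.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Cosine similarity between two face descriptors (1.0 = identical direction).
    float descriptorSimilarity(const std::vector<float>& a, const std::vector<float>& b) {
        float dot = 0.0f, na = 0.0f, nb = 0.0f;
        for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        if (na == 0.0f || nb == 0.0f) return 0.0f;
        return dot / (std::sqrt(na) * std::sqrt(nb));
    }

    int main() {
        // In practice these vectors would come from descriptor extraction applied
        // to a pre-stored face and a newly captured one.
        std::vector<float> storedFace   = {0.12f, -0.40f, 0.33f, 0.05f};
        std::vector<float> capturedFace = {0.10f, -0.38f, 0.35f, 0.07f};

        float similarity = descriptorSimilarity(storedFace, capturedFace);
        bool sameIdentity = similarity > 0.6f;  // verification threshold (assumed)
        std::printf("similarity=%.3f  match=%s\n", similarity, sameIdentity ? "yes" : "no");
        return 0;
    }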

History and application

The development of Visage SDK began in 2002 when Visage Technologies AB was founded in Linköping, Sweden. The founders were among the contributors to the MPEG-4 Face and Body Animation International Standard. [4] [5]

Visage SDK is used in various application fields, such as game development, arts and entertainment, marketing and retail, marketing research, the automotive industry, industrial safety, assistive technologies, health care, biometrics, audio processing and robotics. More recently, it has been used to create solutions for virtual make-up and 3D face filters.

Features

See also

References

  1. "Face tracking software". Visage Technologies. Retrieved 2023-03-29.
  2. "Face Analysis: Age, Gender & Emotion Recognition". Visage Technologies. Retrieved 2023-03-29.
  3. "Face Recognition Technology". Visage Technologies. Retrieved 2023-03-29.
  4. Pandžić, Igor and Robert Forchheimer (2002): "The origins of the MPEG-4 Facial Animation standard", in: MPEG-4 Facial Animation - The standard, implementations and applications (eds. Igor S. Pandžić and Robert Forchheimer). Chichester: John Wiley & Sons (ISBN 0-470-84465-5).
  5. Pandžić, Igor and Robert Forchheimer (2002): "MPEG-4 Facial Animation Framework for the Web and Mobile Platforms", in: MPEG-4 Facial Animation - The standard, implementations and applications (eds. Igor S. Pandžić and Robert Forchheimer). Chichester: John Wiley & Sons (ISBN 0-470-84465-5).
  6. Visage Technologies: Main features