Formerly | Image Metrics
---|---
Industry | Entertainment, software development
Founded | 2012
Products | Facial animation and motion capture (mocap) technology
Website | facewaretech
Faceware Technologies is an American facial animation and motion capture technology company. [1] [2] The company was established under Image Metrics and became an independent company at the beginning of 2012. [3] [4]
Faceware produces software used to capture an actor's performance and transfer it onto an animated character, as well as the hardware needed to capture the performances. [5] The software line includes Faceware Analyzer, Faceware Retargeter, and Faceware Live. [6] [7]
Faceware software is used by film studios and video game developers including Rockstar Games, Bungie, Cloud Imperium Games, and 2K in games such as Grand Theft Auto V, Destiny, Star Citizen, and Halo: Reach. [6] [8] [9] [10] [11] [12]
Through its application in the video game industry, Faceware won the Develop Award for Technical Innovation in 2008, while it was still part of Image Metrics. [13] It won the Develop Award again in 2014, for Creative Contribution: Visuals. [14] Faceware received Best of Show recognition at the Game Developers Conference 2011 in San Francisco [15] as well as Computer Graphics World's Silver Edge Award at SIGGRAPH 2014 [16] and 2016. [17] Faceware also won the 2016 XDS Gary Award for its contributions to the Faceware-EA presentation at that year's XDS Summit.
Image Metrics, founded in 2000, is a provider of facial animation and motion capture technology within the video game and entertainment industries. [1] [3] In 2008, Image Metrics offered a beta version of its facial animation technology to visual effects and film studios. The technology captured an actor's performance on video, analyzed it, and mapped it onto a CG model. [1] [2] The beta release allowed studios to incorporate the facial animation technology into their internal pipelines rather than going to the Image Metrics studio, as they had in the past. [2] [18] The first studio to beta test Image Metrics' software, in 2009, was the London-based visual effects studio Double Negative. [18]
In 2010, Image Metrics launched the facial animation technology platform Faceware. Faceware focused on increasing creative control, efficiency, and production speed for animators. The software could be integrated into any pipeline or used with any game engine, and Image Metrics provided training for the platform. The first studio to sign on as a Faceware customer was Bungie, which incorporated the software into its in-house production. [5] Also in 2010, Image Metrics acquired FacePro, a company that provided automated lip synchronization whose output could be adjusted for accurate results, and integrated the acquired technology into its facial animation software. [19] [20] In the same year, Image Metrics bought Character-FX, a character animation company. [8] [21] [22] [23] Character-FX produced tools for Autodesk's Maya and 3ds Max that aided the creation of character facial rigs, using an automated weighting-transfer system that rapidly shifts facial features on a character to create lifelike movement. [4] [8] [23]
Image Metrics raised $8 million in funding and went public through a reverse merger with International Cellular Industries in 2010. Image Metrics became wholly owned by International Cellular Industries, which changed its name and took on facial animation technology as its sole line of business. [6] [24] [25] Faceware 3.0 was announced in March 2011. The upgrade included auto-pose, a shared pose database, and curve refinement. [26] Image Metrics led a workshop and presentation about Faceware 3.0 at the CTN Animation Expo 2011 titled "Faceware: Creating an Immersive Experience through Facial Animation." [27] Faceware's technology was demonstrated at Edinburgh Interactive in August 2011 to show its ability to add a player's facial animation from a webcam or Kinect sensor into a game in real time. [28] [29]
Image Metrics sold the Faceware software to its spinoff company, Faceware Technologies, in January 2012. [3] [4] Following the spinoff, Faceware Technologies focused on producing and distributing its technology to professional animators. [3] The technology was tested through universities, including the University of Portsmouth. [30]
Faceware launched its 3D facial animation tools, the software packages Faceware Analyzer and Faceware Retargeter, alongside the Head-Mounted Camera System (HMCS). Analyzer tracks and processes live footage of an actor, and Retargeter transfers that movement onto the face of a computer-generated character. The Head-Mounted Camera System is not required to use the software, and six actors can be captured simultaneously. [31]
Faceware Live was shown for the first time at SIGGRAPH 2013. [32] It was created to enable the real-time capture and retargeting of facial movements: live capture can use any video source, tracking facial expressions, translating them into a set of animation values, and transferring the captured data onto a 3D animated character in real time. [7] In 2014, Faceware released Faceware Live 2.0. The update added the option to stream multiple characters simultaneously, instant and more consistent calibration, improved facial tracking, and support for high-frame-rate cameras. [33]
In 2015, Faceware launched a Faceware Live plugin for Unreal Engine 4, co-developed with Australia-based Opaque Multimedia. [34] The plugin makes motion capture of expressions and other facial movements possible with any video camera through Faceware's markerless 3D facial motion capture software. [35]
In 2016, Faceware announced the launch of Faceware Interactive, focused on the development of software and hardware for creating digital characters with which real people can interact. [36]
Faceware Technologies partnered with Binari Sonori in 2014 to develop a video-based localization service. [37] [38] Also in 2014, Faceware Technologies entered a global partnership with Vicon, a motion capture company. The partnership focused on developing new technology to expand into full-body motion capture. The first step of the integration was to make Faceware software compatible with Vicon's head rig, Cara, so that data acquired from Cara could be processed and transferred into Faceware products. [39] [40]
Faceware Technologies' facial animation software has two main components, Analyzer and Retargeter. [6]
Faceware Analyzer is stand-alone, single-camera facial tracking software that converts video of facial motion into files that can be used by Faceware Retargeter. The Lite version automatically tracks facial movements, which can then be applied to 3D models in Faceware Retargeter. The Pro version can perform shot-specific custom calibrations, import and export actor data, and automatically indicate tracking regions, and it offers server and local licensing options. [6]
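As a rough illustration of this stage of the pipeline, the sketch below converts a performance video into a per-frame landmark file of the kind a single-camera tracker hands to a retargeting step. It is a minimal sketch only: detect_landmarks is a hypothetical stand-in for a real face-tracking model, and the JSON layout is invented here; neither reflects Faceware's actual file format or API.

```python
# Illustrative sketch only: converts a performance video into per-frame
# tracking data, the kind of intermediate file a tracker produces for a
# later retargeting step. Not Faceware's actual format or API.
import json

import cv2  # OpenCV, used here only to decode video frames


def detect_landmarks(frame):
    # Hypothetical stand-in: returns one fixed placeholder point so the
    # sketch runs end to end. A real tracker would compute 2D landmark
    # coordinates for the face found in `frame`.
    height, width = frame.shape[:2]
    return [[width * 0.5, height * 0.5]]


def track_video(video_path, out_path):
    capture = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video (or unreadable file)
        frames.append({"frame": index, "landmarks": detect_landmarks(frame)})
        index += 1
    capture.release()
    # Persist the tracking data so a separate retargeting tool can use it.
    with open(out_path, "w") as f:
        json.dump({"source": video_path, "frames": frames}, f)


if __name__ == "__main__":
    track_video("actor_take_01.mov", "actor_take_01_tracking.json")
```

Writing the tracking data out as a separate artifact is what lets the tracking and retargeting stages run as independent tools in a pipeline.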
Faceware Retargeter 4.0 was announced in 2014. Retargeter uses the facial tracking data created in Analyzer to produce facial animation in a pose-based workflow. The upgrade added a plug-in for Autodesk animation tools, advanced character expression sets, visual tracking data, shared pose thumbnails, and batch processing. The Lite version transfers actors' performances onto animated characters and reduces and smooths keyframes. The Pro version adds custom poses, intelligent pose suggestions, shared pose libraries, and the ability to back up and restore jobs. [6]
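The pose-based idea can be sketched in a few lines: tracked expression intensities blend between artist-calibrated poses to drive a character's rig controls. The pose names, control names, and linear blending rule below are invented for illustration and do not reflect Faceware's actual data or API.

```python
# Illustrative sketch of pose-based retargeting: tracked expression
# values drive a character rig by blending artist-calibrated poses.
# Pose and control names here are invented for illustration.

# Each pose maps rig controls to their target values at full intensity.
POSE_LIBRARY = {
    "smile":    {"mouth_corner_l": 1.0, "mouth_corner_r": 1.0},
    "jaw_open": {"jaw": 0.8},
    "brows_up": {"brow_l": 1.0, "brow_r": 1.0},
}


def retarget_frame(tracked_values):
    """Blend calibrated poses, weighted by tracked intensities in [0, 1],
    into one set of rig-control values for a single animation frame."""
    controls = {}
    for pose_name, weight in tracked_values.items():
        for control, target in POSE_LIBRARY[pose_name].items():
            controls[control] = controls.get(control, 0.0) + weight * target
    # Clamp so stacked poses cannot overdrive a control.
    return {c: min(v, 1.0) for c, v in controls.items()}


# One frame of tracked data: the actor is half-smiling with raised brows.
print(retarget_frame({"smile": 0.5, "brows_up": 0.9}))
# -> {'mouth_corner_l': 0.5, 'mouth_corner_r': 0.5, 'brow_l': 0.9, 'brow_r': 0.9}
```

A real retargeting tool refines this with per-shot calibration and artist-edited poses; the sketch only shows why a shared pose library makes animation reusable across shots and characters.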
Faceware Live aims to create natural-looking faces and facial expressions in real time. Any video source can be used with the software's one-button calibration, and the captured performance is transferred onto a 3D animated character. The process combines image processing and data streaming to translate facial expressions into a set of animation values. [7] [33]
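The real-time pattern such tools follow can be sketched as a loop that computes animation values each frame and streams them to a listening client, such as a game engine plugin. The message schema, port, and demo values below are arbitrary choices for this sketch, not Faceware Live's actual protocol.

```python
# Illustrative sketch of real-time streaming: per-frame animation values
# are serialized and sent to a listening client such as a game engine.
# The schema and port are arbitrary, not Faceware Live's protocol.
import json
import socket
import time

ENGINE_ADDR = ("127.0.0.1", 9000)  # where the engine-side plugin listens


def stream_animation(frames_per_second=30):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for frame in range(90):  # stream three seconds of demo data
        # In a real pipeline these values would come from live tracking;
        # here a simple ramp stands in for a captured performance.
        values = {"jaw_open": (frame % 30) / 30.0, "smile": 0.3}
        packet = json.dumps({"frame": frame, "values": values})
        sock.sendto(packet.encode("utf-8"), ENGINE_ADDR)
        time.sleep(1.0 / frames_per_second)  # pace to the capture rate
    sock.close()


if __name__ == "__main__":
    stream_animation()
```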
Faceware's hardware can be rented or purchased. Available hardware includes the entry-level GoPro Headcam Kit and the Professional Headcam System. [6] The Indie Facial Mocap package includes hardware (a camera and head mount) and the tools to use it. [41]
Faceware's software is used by companies such as Activision Blizzard, Bethesda, Ubisoft, Electronic Arts, Sony, Cloud Imperium Games, and Microsoft. Rockstar Games used the software in games such as Grand Theft Auto V and Red Dead Redemption, and Bungie used Faceware in games including Destiny and Halo: Reach. Faceware has also been used in other games such as XCOM 2, Dying Light: The Following, Hitman, EA Sports UFC 2, Fragments for Microsoft's HoloLens, DOOM, Mirror's Edge Catalyst, Kingsglaive, F1 2016, ReCore, Destiny: Rise of Iron, Mafia III, Call of Duty: Infinite Warfare, Killzone: Shadow Fall, NBA 2K10-2K17, Sleeping Dogs, Crysis 2 and 3, and Star Citizen, and in films such as The Curious Case of Benjamin Button and Robert Zemeckis's The Walk. [6] [8] [9] [10] [31] [42] [43] [44]
Computer animation is the process used for digitally generating moving images. The more general term computer-generated imagery (CGI) encompasses both still images and moving images, while computer animation only refers to moving images. Modern computer animation usually uses 3D computer graphics.
Visual effects (VFX) is the process by which imagery is created or manipulated outside the context of a live-action shot in filmmaking and video production. It covers the integration of live-action footage with other live-action footage or CGI elements to create realistic imagery.
Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications, and for the validation of computer vision and robotics. In filmmaking and video game development, it refers to recording the actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes the face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers to match moving.
A character generator, often abbreviated as CG, is a device or software that produces static or animated text for keying into a video stream. Modern character generators are computer-based, and they can generate graphics as well as text.
Computer facial animation is primarily an area of computer graphics that encapsulates methods and techniques for generating and animating images or models of a character's face. The character can be a human, a humanoid, an animal, a legendary creature or character, and so on. Due to its subject and output type, it is also related to many other scientific and artistic fields, from psychology to traditional animation. The importance of human faces in verbal and non-verbal communication, together with advances in computer graphics hardware and software, has generated considerable scientific, technological, and artistic interest in computer facial animation.
Virtual cinematography is the set of cinematographic techniques performed in a computer graphics environment. It includes a wide variety of subjects like photographing real objects, often with stereo or multi-camera setup, for the purpose of recreating them as three-dimensional objects and algorithms for the automated creation of real and simulated camera angles. Virtual cinematography can be used to shoot scenes from otherwise impossible camera angles, create the photography of animated films, and manipulate the appearance of computer-generated effects.
Facial motion capture is the process of electronically converting the movements of a person's face into a digital database using cameras or laser scanners. This database may then be used to produce computer graphics (CG) animation for movies, games, or real-time avatars. Because the motion of the CG character is derived from the movements of a real person, the result is more realistic and nuanced than animation created manually.
Digital puppetry is the manipulation and performance of digitally animated 2D or 3D figures and objects in a virtual environment that are rendered in real-time by computers. It is most commonly used in filmmaking and television production but has also been used in interactive theme park attractions and live theatre.
Human image synthesis is technology that can be applied to make believable and even photorealistic renditions of human likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto real or other simulated film material. Towards the end of the 2010s, deep learning artificial intelligence was applied to synthesize images and video that look like humans without the need for human assistance once the training phase is complete, whereas the older 7D route required massive amounts of human work.
The PlayStation Eye is a digital camera device, similar to a webcam, for the PlayStation 3. The technology uses computer vision and gesture recognition to process images taken by the camera. This allows players to interact with games using motion and color detection as well as sound through its built-in microphone array. It is the successor to the EyeToy for the PlayStation 2, which was released in 2003.
Image Metrics is a 3D facial animation and virtual try-on company headquartered in El Segundo, California, with offices in Las Vegas and research facilities in Manchester. Image Metrics is the maker of the Live Driver and Portable You SDKs for software developers and a provider of facial animation software and services to the visual effects industry.
iClone is a real-time 3D animation and rendering software program. Real-time playback is enabled by using a 3D videogame engine for instant on-screen rendering.
The history of computer animation began as early as the 1940s and 1950s, when people began to experiment with computer graphics, most notably John Whitney. It was only by the early 1960s, when digital computers had become widely established, that new avenues for innovative computer graphics blossomed. Initially, uses were mainly for scientific, engineering, and other research purposes, but artistic experimentation began to make its appearance by the mid-1960s, most notably through Dr. Thomas Calvert. By the mid-1970s, many such efforts were beginning to enter into public media. Much computer graphics at this time involved 2D imagery, though increasingly, as computer power improved, efforts to achieve 3D realism became the emphasis. By the late 1980s, photo-realistic 3D was beginning to appear in films, and by the mid-1990s it had developed to the point where 3D animation could be used for entire feature-film production.
A light stage is an active illumination system used for shape, texture, reflectance, and motion capture, often with structured light and a multi-camera setup.
Adobe Fuse CC was a 3D computer graphics program developed by Mixamo that enabled users to create 3D characters. Its main novelty was the ability to import and integrate user-generated content into the character creator. Fuse was part of Mixamo's product suite and was aimed at video game developers, video game modders, and 3D enthusiasts.
Hao Li is a computer scientist, innovator, and entrepreneur from Germany, working in the fields of computer graphics and computer vision. He is co-founder and CEO of Pinscreen, Inc, as well as associate professor of computer vision at the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI). He was previously a Distinguished Fellow at the University of California, Berkeley, an associate professor of computer science at the University of Southern California, and former director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. He was also a visiting professor at Weta Digital and a research lead at Industrial Light & Magic / Lucasfilm.
Tony de Peltrie is a Canadian animated short film from 1985. It features the first animated human character to express emotion through facial expressions and body movements in a way that resonated with audiences. The film was produced from 1982 to 1985 at the French-language Université de Montréal in Quebec, Canada.
Visage SDK is a multi-platform software development kit (SDK) created by Visage Technologies AB. Visage SDK allows software programmers to build facial motion capture and eye tracking applications.
Live2D is an animation technique used to animate static images—usually anime-style characters—that involves separating an image into parts and animating each part accordingly, without the need of frame-by-frame animation or a 3D model. This enables characters to move using 2.5D movement while maintaining the original illustration.
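As a rough illustration of the part-based idea, the sketch below poses two image layers by rotating each around its own pivot, so a character moves without any frame-by-frame redrawing. The part names and the rotation-only transform are simplifications for illustration, not Live2D's actual data model.

```python
# Illustrative sketch of part-based 2.5D animation: an illustration is
# split into layers, and each layer gets its own transform per frame
# instead of being redrawn. A simplification, not Live2D's data model.
import math


class Part:
    def __init__(self, name, pivot, points):
        self.name = name
        self.pivot = pivot    # (x, y) point the part rotates around
        self.points = points  # corner points of the part's image layer

    def posed(self, angle_deg):
        """Rotate this layer's points around its pivot by angle_deg."""
        a = math.radians(angle_deg)
        px, py = self.pivot
        cos_a, sin_a = math.cos(a), math.sin(a)
        return [(px + (x - px) * cos_a - (y - py) * sin_a,
                 py + (x - px) * sin_a + (y - py) * cos_a)
                for x, y in self.points]


# A head layer tilting while an eyebrow layer moves with its own motion.
head = Part("head", pivot=(50, 100),
            points=[(0, 0), (100, 0), (100, 100), (0, 100)])
brow = Part("brow_l", pivot=(35, 30),
            points=[(25, 28), (45, 28), (45, 32), (25, 32)])

for frame in range(3):
    tilt = 5.0 * frame  # a simple head-tilt animation curve
    print(frame, head.posed(tilt)[0], brow.posed(-tilt)[0])
```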