Virtual human

A virtual crash test dummy

A virtual human (or digital human) [1] is a software simulation of a human being, either a fictional character or a digital counterpart of a real person. Virtual humans have been created as tools and artificial companions in simulation, video games, film production, human factors, ergonomic, and usability studies in various industries (aerospace, automobile, machinery, furniture, etc.), the clothing industry, telecommunications (avatars), medicine, and more. These applications require domain-dependent simulation fidelity: a medical application might require an exact simulation of specific internal organs; the film industry requires the highest aesthetic standards, natural movements, and facial expressions; ergonomic studies require faithful body proportions for a particular population segment and realistic locomotion with constraints; and so on.

Game engines such as Unreal Engine, via MetaHuman, [2] and Unity, through its acquisition of Wētā FX, [3] have enabled real-time interactions with digital humans using physically based rendering.

Research

We see the virtual human as more than a useful artifact. We see it as a tool for understanding ourselves. If we can simulate a virtual human in a virtual world behaving in ways that are indistinguishable from a real human, then we assert that we have captured something about what it means to be human.

Perceiving Systems, Max Planck Institute for Intelligent Systems

Research on virtual humans involves interdisciplinary collaboration across fields such as machine learning, game development, and artificial neuroscience.

A motion capture studio

Types

There are two main classes of virtual human: the avatar, a virtual human controlled by a real person, and the autonomous virtual human, driven entirely by software.

A particular case of virtual human is the virtual actor: a virtual human (avatar or autonomous) that represents an existing personality and acts in a film or a series.

History

Early models

Ergonomic analysis provided some of the earliest applications of computer graphics to modeling a human figure and its motion. William Fetter, an art director at Boeing in the early 1960s, was the first person to draw a human figure using a computer; this figure is known as the "Boeing Man". The seven-jointed "First Man", used for studying the instrument panel of a Boeing 747, enabled many pilot motions to be displayed by articulating the figure's pelvis, neck, shoulders, and elbows. The addition of twelve extra joints to "First Man" produced "Second Man", a figure used to generate a set of animation film sequences based on a series of photographs produced by Eadweard Muybridge.

Several models were then developed by various companies:

Cyberman (Cybernetic man-model) was developed by Chrysler Corporation for modeling human activity in and around a car. [13] It is based on 15 joints; the position of the observer is predefined.

Combiman (Computerized biomechanical man-model) was specifically designed to test how easily a human can reach objects in a cockpit. [14] It is defined by a 35-internal-link skeletal system.

Boeman was designed in 1969 by Boeing Corporation. [15] Based on a 50th-percentile three-dimensional human model, it can reach for objects such as baskets while collisions are detected and visual interferences are identified. Boeman is built as a 23-joint figure with variable link lengths.

Sammie (System for Aiding Man Machine Interaction Evaluation) was designed in 1972 at the University of Nottingham for general ergonomic design and analysis. [16] It was at the time the best-parameterized human model, offering a choice of physical types (slim, fat, muscled, etc.). Its vision system was well developed, and complex objects could be manipulated by Sammie, which is based on 21 rigid links with 17 joints.

Buford was developed at Rockwell International to find reach and clearance areas around a model positioned by the operator. [17] The figure represents a 50th-percentile human model covered by CAD-generated polygons and is composed of 15 independent links that must be redefined at each modification.

In facial modelling, Parke produced a representation of the head and face at the University of Utah in 1972, and three years later he proposed parametric models to produce a more realistic face. [18]

Some researchers have also used elementary volumes to create virtual human models, e.g., cylinders by Poter and Willmert [19] or ellipsoids by Herbison-Evans. [20] Badler and Smoliar [21] proposed Bubbleman, a three-dimensional human figure consisting of a number of spheres or bubbles. The model was based on overlapping spheres, with the intensity and size of the spheres varying with the distance from the observer.

In the early 1980s, Tom Calvert, a professor of kinesiology and computer science at Simon Fraser University, attached potentiometers to a body and used the output to drive computer-animated figures for choreographic studies and clinical assessment of movement abnormalities. Calvert's animation system used the motion capture apparatus together with Labanotation and kinematic specifications to fully specify character motion. [22]

Around the same time, the Jack software package was developed at the Center for Human Modeling and Simulation at the University of Pennsylvania and made commercially available by Tecnomatix. Jack provided a 3D interactive environment for controlling articulated figures. It featured a detailed human model and included realistic behavioral controls, anthropometric scaling, task animation and evaluation systems, view analysis, automatic reach and grasp, collision detection and avoidance, and many other useful tools for a wide range of applications.

Production of films and demos

At the beginning of the 1980s, several companies and research groups produced short films and demos involving virtual humans. In particular, Information International Inc., commonly called Triple-I or III, showed the potential of computer graphics by producing a 3D scan of Peter Fonda's head and the landmark demo "Adam Powers, the Juggler".

In 1982, Philippe Bergeron, Nadia Magnenat-Thalmann and Daniel Thalmann produced Dream Flight, a film depicting a person (an articulated stick figure) transported over the Atlantic Ocean from Paris to New York. The film was completely programmed using the MIRA graphical language, an extension of Pascal based on graphical abstract data types. It won several awards and was shown at the SIGGRAPH '83 Film Show. Another breakthrough came in 1985 with "Tony de Peltrie", the first film to use facial animation techniques to tell a story. The same year, Digital Productions developed the Hard Woman video for Mick Jagger's song, featuring a fine animation of a stylized woman, and Robert Abel & Associates created "The Making of Brilliance" as a TV commercial, with motion and rendering remarkable for the time.

In 1987, the Engineering Institute of Canada celebrated its 100th anniversary with a major event, sponsored by Bell Canada and Northern Telecom, at the Place des Arts in Montreal. For this event, Nadia Magnenat-Thalmann and Daniel Thalmann simulated Marilyn Monroe and Humphrey Bogart meeting in a cafe in the old town of Montreal. The resulting film, Rendez-vous in Montreal, was the first to model legendary stars in 3D, the result of extensive research on the 3D cloning of real humans as well as the modelling of their behaviour. [23]

In 1988, "Tin Toy" became the first computer-animated film to win an Oscar (Best Animated Short Film). It tells the story of a tin one-man-band toy attempting to escape from Billy, a silly infant. The same year, deGraf/Wahrman developed "Mike the Talking Head" for Silicon Graphics to demonstrate the real-time capabilities of their new 4D machines. Mike was driven by a specially built controller that allowed a single puppeteer to handle many parameters of the character's face, including mouth, eyes, expression, and head position. The Silicon Graphics hardware provided real-time interpolation between facial expressions and head geometry as controlled by the performer. Mike was performed live in that year's SIGGRAPH film and video show.

In 1989, Kleiser-Walczak produced Dozo, a computer animation of a woman dancing in front of a microphone while singing a song for a music video. They captured the motion with an optical system from Motion Analysis, using multiple cameras to triangulate the images of small pieces of reflective tape placed on the body; the output is the 3D trajectory of each reflector in space.
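The principle behind such optical systems can be sketched as ray triangulation: each camera that sees a reflective marker defines a viewing ray, and the marker's 3D position is taken as the point closest to all such rays. A minimal two-camera version follows (the camera placements and ray directions are illustrative, not taken from any particular system):

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation of one marker from two viewing rays.

    o1, o2 : camera centres (3-element lists)
    d1, d2 : ray directions from each camera toward the marker
    Returns the midpoint of the shortest segment joining the two rays,
    i.e. the 3D point that best agrees with both observations.
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]

    w = sub(o2, o1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(w, d1), dot(w, d2)
    denom = a * c - b * b          # zero only if the rays are parallel
    t1 = (e * c - f * b) / denom   # parameter of closest point on ray 1
    t2 = (e * b - f * a) / denom   # parameter of closest point on ray 2
    p1 = [o + t1 * d for o, d in zip(o1, d1)]
    p2 = [o + t2 * d for o, d in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]
```

With noise-free rays that actually intersect, the midpoint is the exact marker position; with real, noisy camera data the two rays miss each other slightly and the midpoint averages the error.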

In 1989, in the film "The Abyss", a particular sequence shows a watery pseudopod acquiring a human face. This was an important step for future synthetic characters, as it showed that one shape could be transformed into a human face. The same year, Lotta Desire, the actress of "The Little Death" and "Virtually Yours", demonstrated advanced facial animation and the first computer-animated kiss. In 1991, "Terminator 2" marked a milestone in the animation of virtual humans mixed with real people and sets.

In the nineties, several short movies were produced, the best known being "Geri's Game" from Pixar, which received the Academy Award for Best Animated Short Film.

More recent research

Behavioral animation was introduced and developed by Craig Reynolds. [24] He simulated flocks of birds and schools of fish to study group movement. By integrating numerous virtual humans into virtual worlds, Musse and Thalmann then initiated the field of crowd simulation.
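Reynolds' model steers each agent ("boid") by three purely local rules: separation from nearby neighbours, alignment with their average velocity, and cohesion toward their centre of mass. A minimal 2D sketch of one update step follows; the weights and neighbourhood radius are illustrative choices, not values from the original paper:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def step(boids, dt=0.1, radius=2.0):
    """One synchronous update of the three steering rules.
    boids: list of dicts with 'pos' and 'vel' as [x, y] lists."""
    new = []
    for b in boids:
        nbrs = [o for o in boids
                if o is not b and dist(b['pos'], o['pos']) < radius]
        force = [0.0, 0.0]
        if nbrs:
            n = len(nbrs)
            for i in range(2):
                centre = sum(o['pos'][i] for o in nbrs) / n
                avg_vel = sum(o['vel'][i] for o in nbrs) / n
                sep = sum(b['pos'][i] - o['pos'][i] for o in nbrs)
                force[i] = (1.5 * sep                        # separation
                            + 1.0 * (avg_vel - b['vel'][i])  # alignment
                            + 0.8 * (centre - b['pos'][i]))  # cohesion
        vel = [b['vel'][i] + dt * force[i] for i in range(2)]
        pos = [b['pos'][i] + dt * vel[i] for i in range(2)]
        new.append({'pos': pos, 'vel': vel})
    return new
```

Because every rule only consults neighbours within a fixed radius, coherent flock-like motion emerges without any global controller, which is what made the approach attractive for crowds of virtual humans.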

Starting in the nineties, researchers shifted to real-time animation and interaction with virtual worlds. The merging of virtual reality, human animation, and video analysis techniques led to the integration of virtual humans into virtual reality, interaction with these virtual humans, and self-representation as a clone, avatar, or participant in the virtual world. Interaction with virtual environments was planned at various levels of user configuration: a high-end configuration could involve an immersive environment where users interact by voice, gesture, and physiological signals with virtual humans that help them explore their digital data environment, both locally and over the Web. To this end, virtual humans started to be able to recognize the user's gestures, speech, and expressions and to answer with speech and animation. [25] The ultimate objective of this development is to create realistic and believable virtual humans with adaptation, perception, and memory. These virtual humans paved the way for today's research on virtual humans that can act freely while simulating emotions. [26] Ideally, the goal is to have them aware of the environment and unpredictable.

Applications

Related Research Articles

Computer animation

Computer animation is the process used for digitally generating animations. The more general term computer-generated imagery (CGI) encompasses both static scenes and dynamic images, while computer animation only refers to moving images. Modern computer animation usually uses 3D computer graphics. The animation's target is sometimes the computer itself, while other times it is film.

Simulation

A simulation is an imitative representation of a process or system that could exist in the real world. In this broad sense, simulation can often be used interchangeably with model. Sometimes a clear distinction between the two terms is made, in which simulations require the use of models; the model represents the key characteristics or behaviors of the selected system or process, whereas the simulation represents the evolution of the model over time. Another way to distinguish between the terms is to define simulation as experimentation with the help of a model. This definition includes time-independent simulations. Often, computers are used to execute the simulation.

Motion capture

Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, medical applications, and for validation of computer vision and robots. In filmmaking and video game development, it refers to recording actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers more to match moving.

Crowd simulation

Crowd simulation is the process of simulating the movement of a large number of entities or characters. It is commonly used to create virtual scenes for visual media like films and video games, and is also used in crisis training, architecture and urban planning, and evacuation simulation.

Skeletal animation

Skeletal animation or rigging is a technique in computer animation in which a character is represented in two parts: a surface representation used to draw the character and a hierarchical set of interconnected parts, a virtual armature used to animate the mesh. While this technique is often used to animate humans and other organic figures, it only serves to make the animation process more intuitive, and the same technique can be used to control the deformation of any object—such as a door, a spoon, a building, or a galaxy. When the animated object is more general than, for example, a humanoid character, the set of "bones" may not be hierarchical or interconnected, but simply represent a higher-level description of the motion of the part of mesh it is influencing.
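The hierarchical composition described above is the core of forward kinematics: each bone's world transform is its parent's transform composed with its own local one. A minimal planar sketch follows, assuming a made-up two-bone "arm" where each bone stores only a rotation relative to its parent and a length:

```python
import math

def joint_positions(bones):
    """Forward kinematics for a 2D bone chain.

    bones: list of (relative_angle_radians, length) tuples, ordered
    from the root outward. Returns the world-space position of the
    end of each bone, starting from the origin.
    """
    x = y = 0.0
    angle = 0.0
    out = []
    for rel_angle, length in bones:
        angle += rel_angle              # compose with parent rotations
        x += length * math.cos(angle)   # advance along the bone
        y += length * math.sin(angle)
        out.append((x, y))
    return out
```

Rotating a bone near the root moves every descendant with it, which is exactly why posing an armature is more intuitive than moving mesh vertices directly.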

Computer facial animation is primarily an area of computer graphics that encapsulates methods and techniques for generating and animating images or models of a character face. The character can be a human, a humanoid, an animal, a legendary creature or character, etc. Due to its subject and output type, it is also related to many other scientific and artistic fields from psychology to traditional animation. The importance of human faces in verbal and non-verbal communication and advances in computer graphics hardware and software have caused considerable scientific, technological, and artistic interests in computer facial animation.

Digital puppetry is the manipulation and performance of digitally animated 2D or 3D figures and objects in a virtual environment that are rendered in real-time by computers. It is most commonly used in filmmaking and television production but has also been used in interactive theme park attractions and live theatre.

The Jack human simulation system was developed at the Center for Human Modeling and Simulation at the University of Pennsylvania in the 1980s and 1990s under the direction of Professor Norman Badler. Conceived as an ergonomic assessment and virtual human prototyping system for NASA space shuttle development, it soon gathered funding from the U.S. Navy and U.S. Army for dismounted soldier simulation, from the U.S. Air Force for maintenance simulation, and from various other government and corporate users for their own applications. In 1996 the software was spun off into a privately held company and is now sold as an ergonomic human simulation toolkit by Siemens. The research and development of the Jack system have led to such standards as H-Anim and MPEG-4 Body Animation Parameters.

3D computer graphics

3D computer graphics, sometimes called CGI, 3-D-CGI or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering digital images, usually 2D images but sometimes 3D images. The resulting images may be stored for viewing later or displayed in real time.

A virtual human, virtual persona, or digital clone is the creation or re-creation of a human being in image and voice using computer-generated imagery and sound, that is often indistinguishable from the real actor.

Gregory Peter Panos is an American writer, futurist, educator, strategic planning consultant, conference / event producer, and technology evangelist in augmented reality, virtual reality, human simulation, motion capture, performance animation, 3D character animation, human-computer interaction, and user experience design.

The history of computer animation began as early as the 1940s and 1950s, when people began to experiment with computer graphics – most notably John Whitney. It was only by the early 1960s, when digital computers had become widely established, that new avenues for innovative computer graphics blossomed. Initially, uses were mainly for scientific, engineering and other research purposes, but artistic experimentation began to make its appearance by the mid-1960s – most notably by Dr. Thomas Calvert. By the mid-1970s, many such efforts were beginning to enter public media. Much computer graphics at this time involved 2D imagery, though increasingly, as computer power improved, efforts to achieve 3D realism became the emphasis. By the late 1980s, photo-realistic 3D was beginning to appear in films, and by the mid-1990s it had developed to the point where 3D animation could be used for entire feature film production.

3D modeling

In 3D computer graphics, 3D modeling is the process of developing a mathematical coordinate-based representation of a surface of an object in three dimensions via specialized software by manipulating edges, vertices, and polygons in a simulated 3D space.

Computer-generated imagery

Computer-generated imagery (CGI) is a specific technology or application of computer graphics for creating or improving images in art, printed media, simulators, videos and video games. These images are either static or dynamic. CGI refers to both 2D and 3D computer graphics used for designing characters, virtual worlds, or scenes and special effects. The application of CGI for creating or improving animations is called computer animation, or CGI animation.

Nadia Magnenat Thalmann

Nadia Magnenat Thalmann is a computer graphics scientist and roboticist and is the founder and head of MIRALab at the University of Geneva. She chaired the Institute for Media Innovation at Nanyang Technological University (NTU), Singapore, from 2009 to 2021.

Daniel Thalmann

Prof. Daniel Thalmann is a Swiss and Canadian computer scientist and a pioneer of virtual human research. He is currently Honorary Professor at EPFL, Switzerland, and Director of Research Development at MIRALab Sarl in Geneva, Switzerland.

Rendez-vous in Montreal is a 1987 animated film that used advanced computer techniques to achieve such effects as modelling the film stars Marilyn Monroe and Humphrey Bogart. The film was directed by Nadia Magnenat Thalmann and Daniel Thalmann and produced with a team of 10 people. Specific interactive software [1] was developed that allowed designers to generate the sequences with interactive commands. The main purpose of Rendez-vous in Montreal was to show that true synthetic actors can be created. The film represented a technological breakthrough both in its software and in the resulting film itself.

Nadine Social Robot

Nadine is a gynoid humanoid social robot modelled on Professor Nadia Magnenat Thalmann. The robot has a strong human likeness, with natural-looking skin and hair and realistic hands. Nadine is a socially intelligent robot that returns greetings, makes eye contact, and can remember all the conversations it has had. It is able to answer questions autonomously in several languages and to simulate emotions both in gestures and facially, depending on the content of the interaction with the user. Nadine can recognise people it has previously seen and engage in flowing conversation, and it remembers facts and events related to each person it has encountered. It has been programmed with a "personality", in that its demeanour can change according to what is said to it, and it has a total of 27 degrees of freedom for facial expressions and upper-body movements. It can assist people with special needs by reading stories, showing images, setting up Skype sessions, sending emails, and communicating with other members of the family. It can play the role of a receptionist in an office or serve as a personal coach.

Live2D is an animation software program that can be used to generate real-time 2D animations—usually anime-style characters—using layered, continuous parts based on an illustration, without the need for frame-by-frame animation or a 3D model. This enables characters to move using 2.5D movement while maintaining the original illustration.

References

  1. Magnenat-Thalmann, Nadia; Thalmann, Daniel (November 24, 2005). "Virtual humans: thirty years of research, what next?". The Visual Computer. 21 (12): 997–1015. doi:10.1007/s00371-005-0363-6. ISSN   0178-2789. S2CID   10935963.
  2. "MetaHuman | Realistic Person Creator". Archived from the original on August 23, 2023. Retrieved August 23, 2023.
  3. "Artistry Tools | Unity". Archived from the original on August 26, 2023. Retrieved August 23, 2023.
  4. Zhou, Yi; Hu, Liwen; Xing, Jun; Chen, Weikai; Kung, Han-Wei; Tong, Xin; Li, Hao (2018). "HairNet: Single-View Hair Reconstruction using Convolutional Neural Networks". arXiv: 1806.07467 [cs.GR].
  5. "Realtime Vulkan Hair". GitHub . Archived from the original on August 23, 2023. Retrieved August 23, 2023.
  6. "Compute Shader - Vulkan Tutorial". Archived from the original on June 7, 2023. Retrieved August 23, 2023.
  7. "Vulkan® 1.3.275 - A Specification (With all ratified extensions)". Archived from the original on August 23, 2023. Retrieved August 23, 2023.
  8. "Animation Blueprints". Archived from the original on August 23, 2023. Retrieved August 23, 2023.
  9. "Phase-Functioned Neural Networks for Character Control". Archived from the original on August 5, 2023. Retrieved August 23, 2023.
  10. "Data-Driven Physics for Human Soft Tissue Animation". ps.is.mpg.de. Archived from the original on June 27, 2023. Retrieved September 10, 2023.
  11. "Behavioral Animation". www.red3d.com. Archived from the original on May 14, 2021. Retrieved July 5, 2021.
  12. "GENEA Challenge 2022: Co-Speech Gesture Generation". November 4, 2022.
  13. Dana Waterman and Clinton T. Washburn (1978) CYBERMAN — A Human Factors Design Tool Archived November 1, 2020, at the Wayback Machine , SAE Transactions, Vol. 87, Section 2: 780230–780458 (1978), pp. 1295-1306
  14. Evans SM (1976) User's Guide for the Program of Combiman Archived July 9, 2021, at the Wayback Machine , Report AMRLTR-76-117, University of Dayton, Ohio
  15. Dooley M (1982) Anthropometric Modeling Programs – A Survey Archived July 9, 2021, at the Wayback Machine , IEEE Computer Graphics and Applications, IEEE Computer Society, vol 2( 9), pp.17-25
  16. Bonney, M., Case, K., Hughes, B., Kennedy, D. et al., Using SAMMIE for Computer-Aided Workplace and Work Task Design Archived July 9, 2021, at the Wayback Machine , SAE Technical Paper 740270, 1974
  17. W. A. Fetter. A progression of human figures simulated by computergraphics Archived July 9, 2021, at the Wayback Machine .IEEE Comput. Graph. Appl., 2(9):9–13, 1982
  18. Parke FI (1972) Computer Generated Animation of Faces Archived July 9, 2021, at the Wayback Machine . Proc. ACM annual conference
  19. Poter TE, Willmert KD (1975) Three-Dimensional Human Display Model Archived July 9, 2021, at the Wayback Machine , Computer Graphics, Vol.9, No1, pp.102-110.
  20. Herbison-Evans D (1986) Animation of the Human Figure, Technical Report CS-86-50, University of Waterloo Computer Science Department, November.
  21. Badler NI, Smoliar SW (1979) Digital Representations of Human Movement Archived July 9, 2021, at the Wayback Machine , Computing Surveys, Vol.11, No.1, pp.19-38.
  22. Calvert TW, A. Patla A (1982) Aspects of the Kinematic Simulation of Human Movement Archived July 9, 2021, at the Wayback Machine , IEEE Computer Graphics and Applications, Vol.2, No.9, pp.41-50.
  23. N. Magnenat-Thalmann, D. Thalmann, The Direction of Synthetic Actors in the Film Rendez-vous in Montreal Archived June 24, 2021, at the Wayback Machine , IEEE Computer Graphics and Applications, Vol.7, No 12, 1987, pp.9-19
  24. C. Reynolds (1987). Flocks, herds and schools: A distributed behavioral model Archived July 3, 2021, at the Wayback Machine . Proceedings of ACM SIGGRAPH 87. July 1987. pp. 25–34.
  25. Thomas, Daniel J. (August 2021). "Artificially intelligent virtual humans for improving the outcome of complex surgery". International Journal of Surgery (London, England). 92: 106022. doi: 10.1016/j.ijsu.2021.106022 . ISSN   1743-9159. PMID   34265470. S2CID   235960454.
  26. Loveys, Kate; Sagar, Mark; Broadbent, Elizabeth (July 22, 2020). "The Effect of Multimodal Emotional Expression on Responses to a Digital Human during a Self-Disclosure Conversation: a Computational Analysis of User Language". Journal of Medical Systems. 44 (9). doi:10.1007/s10916-020-01624-4. ISSN   0148-5598. S2CID   220717084.
  27. Allen, A. and Jones, C., How virtual workers are feeding school children Archived February 8, 2023, at the Wayback Machine , Supply Management, July–September 2022, accessed 8 February 2023
  28. Kim, EA., D. Kim, Z. E, and H. Shoenberger, The next hype in social media advertising: Examining virtual influencers’ brand endorsement effectiveness Archived February 27, 2023, at the Wayback Machine . Frontiers in Psychology, 2023. 14:1089051.

Further reading

Books about virtual humans

Books with some content on virtual humans