Animation database

A dancer's movements, captured via optical motion capture, can be stored in an animation database, then analyzed and reused.

An animation database is a database that stores fragments of animations or human movements and that can be accessed, analyzed, and queried to develop and assemble new animations. [1][2] Because manually generating large amounts of animation is time-consuming and expensive, an animation database can help users build animations from existing components and share animation fragments. [2]
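The idea of storing fragments and querying them to assemble new animations can be sketched in a few lines; all class, field, and fragment names below are illustrative assumptions, not the schema of any real system:

```python
# Minimal sketch of an animation database: fragments carry metadata
# (a semantic action label) and can be queried and concatenated into
# new animations. Names and payloads are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AnimationFragment:
    name: str      # e.g. "walk_01" (hypothetical)
    action: str    # semantic label used for querying, e.g. "walk"
    frames: list   # per-frame joint poses, simplified to any payload

@dataclass
class AnimationDatabase:
    fragments: list = field(default_factory=list)

    def add(self, fragment):
        self.fragments.append(fragment)

    def query(self, action):
        """Return all stored fragments matching a semantic action label."""
        return [f for f in self.fragments if f.action == action]

    def assemble(self, actions):
        """Build a new animation by concatenating the first match
        for each requested action, reusing existing components."""
        sequence = []
        for action in actions:
            matches = self.query(action)
            if matches:
                sequence.extend(matches[0].frames)
        return sequence

db = AnimationDatabase()
db.add(AnimationFragment("walk_01", "walk", ["w1", "w2", "w3"]))
db.add(AnimationFragment("wave_01", "wave", ["v1", "v2"]))
new_animation = db.assemble(["walk", "wave"])
```

Real systems attach far richer metadata (skeleton, duration, style) to each fragment, but the query-then-concatenate pattern is the core idea.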

Early examples of animation databases include the system MOVE, which used an object-oriented database. [1] Modern animation databases can be populated by extracting skeletal animations from motion capture data. [3]
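A heavily simplified sketch of that extraction step, assuming optical markers grouped per joint: each joint position is estimated as the centroid of its nearby markers, frame by frame. Real pipelines solve a full skeleton fit; the marker names and grouping here are invented.

```python
# Hypothetical marker-to-skeleton extraction: a joint's position is
# approximated as the centroid of the optical markers attached near it.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def extract_skeletal_animation(marker_frames, joint_markers):
    """marker_frames: list of {marker_name: (x, y, z)} dicts, one per frame.
    joint_markers: {joint_name: [marker_name, ...]} grouping.
    Returns a list of {joint_name: (x, y, z)} skeletal poses."""
    skeleton_frames = []
    for frame in marker_frames:
        pose = {
            joint: centroid([frame[m] for m in markers])
            for joint, markers in joint_markers.items()
        }
        skeleton_frames.append(pose)
    return skeleton_frames

frames = [{"hip_l": (0.0, 1.0, 0.0), "hip_r": (1.0, 1.0, 0.0)}]
anim = extract_skeletal_animation(frames, {"pelvis": ["hip_l", "hip_r"]})
```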

Another application is crowd simulation, in which a large number of people are animated together as a crowd. Since some applications, such as a sidewalk scene, require people walking at different speeds, the animation database can be used to retrieve and merge different animated figures. [4] This method is mainly known as "motion graphs". [5]
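A toy sketch in the spirit of motion graphs: clips become nodes, an edge is added wherever the end pose of one clip is close enough to the start pose of another, and a walk through the graph yields a new seamless motion. Poses are collapsed to single numbers here purely for illustration.

```python
# Toy motion graph: transitions are allowed where end/start poses match
# within a threshold; walking the graph concatenates compatible clips.
def build_motion_graph(clips, threshold=0.1):
    """clips: {name: list_of_poses}. Returns adjacency {name: [name, ...]}."""
    graph = {name: [] for name in clips}
    for a, poses_a in clips.items():
        for b, poses_b in clips.items():
            # a transition a -> b is allowed when a ends where b starts
            if abs(poses_a[-1] - poses_b[0]) <= threshold:
                graph[a].append(b)
    return graph

def walk_graph(graph, clips, start, steps):
    """Assemble a new motion by greedily following graph edges."""
    motion, current = list(clips[start]), start
    for _ in range(steps):
        if not graph[current]:
            break
        current = graph[current][0]
        motion.extend(clips[current][1:])  # skip the duplicated boundary pose
    return motion

clips = {"walk_slow": [0.0, 0.2, 0.4], "walk_fast": [0.4, 0.8, 1.2]}
graph = build_motion_graph(clips)
motion = walk_graph(graph, clips, "walk_slow", steps=1)
```

Gleicher's actual formulation compares full poses over transition windows and blends them, but the graph-of-compatible-clips structure is the same.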

Animation databases can also be used for "interactive storytelling", in which fragments of animation are retrieved from the database and recombined into new stories. For instance, the animation database Animebase is used within the system Words Anime to help generate animations from recycled components. [2] In this approach, the user inputs words that form parts of a story, and queries against the database help select suitable animation fragments. Such a system may use two databases: an animation database and a story knowledge database. The story knowledge database may use subjects, predicates, and objects to refer to story fragments, and the system then assists the user in matching story fragments to animation fragments. [2]
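The two-database design described above can be sketched as follows. The triples, fragment identifiers, and matching rule are all invented for illustration; this is not the actual Animebase or Words Anime schema.

```python
# Hypothetical two-database storytelling sketch: a story knowledge
# database keyed by (subject, predicate, object) triples, and an
# animation database of reusable fragments keyed by story fragment.
story_knowledge = {
    ("girl", "walks_to", "school"): "story_walk_to_school",
    ("girl", "meets", "friend"): "story_meet_friend",
}

animation_db = {
    "story_walk_to_school": ["anim_walk", "anim_open_gate"],
    "story_meet_friend": ["anim_wave", "anim_talk"],
}

def generate_animation(words):
    """Match user-supplied words against story triples, then fetch the
    animation fragments recorded for each selected story fragment."""
    animation = []
    for triple, story_id in story_knowledge.items():
        if any(word in triple for word in words):
            animation.extend(animation_db[story_id])
    return animation

result = generate_animation(["school", "friend"])
```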

Animation databases can also be used to generate visual scenes using humanoid models. [6] One example application is an animated humanoid-based sign language system developed to help disabled users. [6]

Another application of an animation database is the synthesis of idle motion for human characters. [7] Humans move constantly and in individual ways, so presenting a consistent, realistic set of idle motions for each character between animation segments is a challenge: each person has a unique way of standing, and this must be represented realistically throughout an animation. One problem is that idle motion affects all joints, and simply applying statistical movement at each joint independently yields unrealistic results. One approach is to use an animation database containing a large set of pre-recorded human movements and to obtain suitable motion patterns from the database through statistical analysis. [7]
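The selection step can be sketched as picking the whole-body recorded clip whose statistics best match a character's personal idle "style", rather than perturbing joints independently. The one-number style metric and clip names below are assumptions for illustration, not the method of the cited paper.

```python
# Hypothetical idle-motion selection: choose the pre-recorded idle clip
# statistically closest to a character's own style, keeping all joints
# coherent because whole recorded motions are reused.
def clip_mean(clip):
    return sum(clip) / len(clip)

def select_idle_clip(database, character_style):
    """Pick the recorded clip whose mean motion value is closest to the
    character's idle style (a stand-in for richer joint statistics)."""
    return min(database,
               key=lambda name: abs(clip_mean(database[name]) - character_style))

idle_db = {
    "subtle_sway": [0.01, 0.02, 0.01],
    "shifting_weight": [0.10, 0.15, 0.12],
}
best = select_idle_clip(idle_db, character_style=0.12)
```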

Related Research Articles

Computer animation: Art of creating moving images using computers

Computer animation is the process used for digitally generating animations. The more general term computer-generated imagery (CGI) encompasses both static scenes and dynamic images, while computer animation only refers to moving images. Modern computer animation usually uses 3D computer graphics to generate a three-dimensional picture. The target of the animation is sometimes the computer itself, while other times it is film.

Visual effects is the process by which imagery is created or manipulated outside the context of a live-action shot in filmmaking and video production. The integration of live-action footage with other live-action footage or CGI elements to create realistic imagery is called VFX.

Crowd simulation: Model of movement

Crowd simulation is the process of simulating the movement of a large number of entities or characters. It is commonly used to create virtual scenes for visual media like films and video games, and is also used in crisis training, architecture and urban planning, and evacuation simulation.

Skeletal animation: Computer animation technique

Skeletal animation or rigging is a technique in computer animation in which a character is represented in two parts: a surface representation used to draw the character and a hierarchical set of interconnected parts, a virtual armature used to animate the mesh. While this technique is often used to animate humans and other organic figures, it only serves to make the animation process more intuitive, and the same technique can be used to control the deformation of any object—such as a door, a spoon, a building, or a galaxy. When the animated object is more general than, for example, a humanoid character, the set of "bones" may not be hierarchical or interconnected, but simply represent a higher-level description of the motion of the part of mesh it is influencing.

Real-time computer graphics: Sub-field of computer graphics

Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.

Computer facial animation is primarily an area of computer graphics that encapsulates methods and techniques for generating and animating images or models of a character face. The character can be a human, a humanoid, an animal, a legendary creature or character, etc. Due to its subject and output type, it is also related to many other scientific and artistic fields from psychology to traditional animation. The importance of human faces in verbal and non-verbal communication and advances in computer graphics hardware and software have caused considerable scientific, technological, and artistic interests in computer facial animation.

Facial motion capture is the process of electronically converting the movements of a person's face into a digital database using cameras or laser scanners. This database may then be used to produce computer graphics (CG), computer animation for movies, games, or real-time avatars. Because the motion of CG characters is derived from the movements of real people, it results in a more realistic and nuanced computer character animation than if the animation were created manually.

Alice (software)

Alice is an object-based educational programming language with an integrated development environment (IDE). Alice uses a drag-and-drop environment to create computer animations using 3D models. The software was first developed at the University of Virginia in 1994, and then at Carnegie Mellon, by a research group led by Randy Pausch.

Motion graphics: Digital footage or animation which creates the illusion of motion or rotation

Motion graphics are pieces of animation or digital footage which create the illusion of motion or rotation, and are usually combined with audio for use in multimedia projects. Motion graphics are usually displayed via electronic media technology, but may also be displayed via manual powered technology. The term distinguishes static graphics from those with a transforming appearance over time, without over-specifying the form. While any form of experimental or abstract animation can be called motion graphics, the term typically more explicitly refers to the commercial application of animation and effects to video, film, TV, and interactive applications.

Prefuse: Java-based toolkit

Prefuse is a Java-based toolkit for building interactive information visualization applications. It supports a rich set of features for data modeling, visualization and interaction. It provides optimized data structures for tables, graphs, and trees, a host of layout and visual encoding techniques, and support for animation, dynamic queries, integrated search, and database connectivity.

Interactive skeleton-driven simulation is a scientific computer simulation technique used to approximate realistic physical deformations of dynamic bodies in real-time. It involves using elastic dynamics and mathematical optimizations to decide the body-shapes during motion and interaction with forces. It has various applications within realistic simulations for medicine, 3D computer animation and virtual reality.

Finger tracking: High-resolution technique in gesture recognition and image processing

In the field of gesture recognition and image processing, finger tracking is a high-resolution technique, developed in 1969, that is used to track the consecutive positions of the user's fingers and hence represent objects in 3D. The finger tracking technique is also used as a computer input tool, acting as an external device similar to a keyboard and a mouse.

The NECA Project was a research project that focused on multimodal communication with animated agents in a virtual world. NECA was funded by the European Commission from 1998 to 2002, and the research results were published up to 2005.

The history of computer animation began as early as the 1940s and 1950s, when people began to experiment with computer graphics – most notably John Whitney. It was only by the early 1960s, when digital computers had become widely established, that new avenues for innovative computer graphics blossomed. Initially, uses were mainly for scientific, engineering and other research purposes, but artistic experimentation began to make its appearance by the mid-1960s – most notably by Dr Thomas Calvert. By the mid-1970s, many such efforts were beginning to enter into public media. Much computer graphics at this time involved two-dimensional imagery, though increasingly, as computer power improved, efforts to achieve three-dimensional realism became the emphasis. By the late 1980s, photo-realistic 3D was beginning to appear in films, and by the mid-1990s it had developed to the point where 3D animation could be used for entire feature film production.

Scenario is an Artificial Intelligence (AI) computer graphic interactive installation, directed by the artist Dennis Del Favero, and developed in collaboration with scriptwriter Stephen Sewell, AI scientist Maurice Pagnucco working with computer scientists Anuraag Sridhar, Arcot Sowmya and Paul Compton. It is a 360-degree 3D cinematic work whose narrative is interactively produced by the audience and humanoid characters. The title is a Commedia dell'arte term referring to the way dramatic action is dependent on the way actors and audience interact. Scenario was developed at the iCinema Centre for Interactive Cinema Research.

Computer-generated imagery: Application of computer graphics to create or contribute to images

Computer-generated imagery (CGI) is a specific technology or application of computer graphics for creating or improving images in art, printed media, simulators, videos and video games. These images are either static or dynamic. CGI refers to both 2D computer graphics and 3D computer graphics used for designing characters, virtual worlds, or scenes and special effects. The application of CGI for creating or improving animations is called computer animation, or CGI animation.

The following is provided as an overview of and topical guide to databases.

Physically based animation is an area of interest within computer graphics concerned with the simulation of physically plausible behaviors at interactive rates. Advances in physically based animation are often motivated by the need to include complex, physically inspired behaviors in video games, interactive simulations, and movies. Although off-line simulation methods exist to solve most of the problems studied in physically based animation, these methods are intended for applications that necessitate physical accuracy and slow, detailed computations. In contrast to methods common in offline simulation, techniques in physically based animation are concerned with physical plausibility, numerical stability, and visual appeal over physical accuracy. Physically based animation is often limited to loose approximations of physical behaviors because of the strict time constraints imposed by interactive applications. The target frame rate for interactive applications such as games and simulations is often 25–60 Hz, with only a small fraction of the time allotted to an individual frame remaining for physical simulation. Simplified models of physical behaviors are generally preferred if they are more efficient, easier to accelerate, or satisfy desirable mathematical properties. Fine details are not important when the overriding goal of a visualization is aesthetic appeal or the maintenance of player immersion, since these details are often difficult for humans to notice or are otherwise impossible to distinguish at human scales.

Nadine Social Robot: Social humanoid robot

Nadine is a gynoid humanoid social robot that is modelled on Professor Nadia Magnenat Thalmann. The robot has a strong human likeness, with natural-looking skin and hair and realistic hands. Nadine is a socially intelligent robot which returns a greeting, makes eye contact, and can remember all the conversations had with it. It is able to answer questions autonomously in several languages and simulate emotions both in gestures and facial expressions, depending on the content of the interaction with the user. Nadine can recognise persons it has previously seen and engage in flowing conversation. Nadine has been programmed with a "personality", in that its demeanour can change according to what is said to it. Nadine has a total of 27 degrees of freedom for facial expressions and upper body movements. With persons it has previously encountered, it remembers facts and events related to each person. It can assist people with special needs by reading stories, showing images, setting up Skype sessions, sending emails, and communicating with other members of the family. It can play the role of a receptionist in an office or serve as a personal coach.

Virtual humans are simulations of human beings on computers. The research domain is concerned with their representation, movement and behavior. There is a wide range of applications: simulation, games, film and TV productions, human factors and ergonomic and usability studies in various industries, the clothing industry, telecommunications (avatars), medicine, etc. These applications require different kinds of expertise. A medical application might require an exact simulation of specific internal organs; the film industry requires the highest aesthetic standards, natural movements, and facial expressions; ergonomic studies require faithful body proportions for a particular population segment and realistic locomotion with constraints, etc. Studies also show that human-like virtual humans convey higher message credibility than anime-like virtual humans in an advertising context.

References

  1. S. Kuroki, "Walkthrough using Animation database MOVE", in Database and Expert Systems Applications, Volume 4, edited by Vladimír Marík, 1994, ISBN 3-540-57234-1, pages 760–763.
  2. Kaoru Sumi, "Interactive Storytelling System Using Recycle-Based Story Knowledge", in Interactive Storytelling: Second Joint International Conference on Interactive Digital Storytelling, ICIDS 2009, edited by Ido A. Iurgel, 2009, ISBN 3-642-10642-0, pages 74–85.
  3. G. Rogez, "Exploiting Spatio-temporal Constraints for Robust 2D Pose Tracking", in Human Motion: Understanding, Modeling, Capture and Animation: HumanMotion 2007, Rio de Janeiro, Brazil, October 20, 2007, pages 58–72.
  4. Daniel Thalmann and Soraia Raupp Musse, Crowd Simulation, 2007, ISBN 1-84628-824-X, pages 59–64.
  5. Michael Gleicher, "Motion Graphs", in Proceedings of SIGGRAPH '08: ACM SIGGRAPH 2008 Classes, 2008.
  6. Takaya Yuizona et al., "Cognitive Development Environment of Sign Language Animation System Using Humanoid Model", in Knowledge-Based Intelligent Information Engineering Systems, edited by Ernesto Damiani, 2002.
  7. Arjan Egges et al., "Personalized Real-time Idle Motion Synthesis", in Proceedings of the 12th Pacific Graphics Conference, pages 121–130, October 2004.