James Gips

Nationality: American
Alma mater: MIT, Stanford University

James Gips (died June 10, 2018) [1] was an American technologist, academic, and author based in Boston. He was the John R. and Pamela Egan Professor of Computer Science and a professor of information systems at Boston College. [1] [2]

Gips's research focused on the use of technology to help people with disabilities live fuller lives. He was the co-inventor and principal developer of two assistive technologies, EagleEyes and Camera Mouse. [3] [4] He also wrote on a variety of topics, including ethical robots, shape grammars, and aesthetics. [5]

In 2007, Gips won the da Vinci Award for exceptional design and engineering achievements in accessibility and universal design. [6]

Gips died June 10, 2018, aged 72.

Education

After completing a B.S. in Humanities and Engineering at MIT in 1967, Gips went to Stanford University for an M.S. in Computer Science, which he completed in 1968. [7] He then joined the National Institutes of Health in Bethesda as an officer in the U.S. Public Health Service and worked there until 1970. In 1970, he invented shape grammars with George Stiny. He returned to Stanford for a Ph.D. in Computer Science, completing it in 1974. His Ph.D. dissertation, “Shape Grammars and Their Uses,” was published as a book. [4]
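
A shape grammar is a production system that generates geometric shapes: a derivation starts from an initial shape and repeatedly applies rules that replace one arrangement of shapes with another. The following is a minimal, illustrative Python sketch of that rewriting idea, not Stiny and Gips's formal definition; the restriction to axis-aligned squares and the single corner-growing rule are assumptions made purely for the example.

```python
# Minimal sketch of the shape-grammar rewriting idea (illustrative only).
# Shapes are reduced to axis-aligned squares given by (x, y, size); one
# hypothetical rule rewrites a square into itself plus a half-size square
# attached to its upper-right corner.

from typing import NamedTuple

class Square(NamedTuple):
    x: float      # lower-left corner, x coordinate
    y: float      # lower-left corner, y coordinate
    size: float   # side length

def apply_rule(square: Square) -> list[Square]:
    """One production: the square plus a half-size square at its corner."""
    child = Square(square.x + square.size, square.y + square.size, square.size / 2)
    return [square, child]

def derive(initial: Square, steps: int) -> list[Square]:
    """Run a derivation by rewriting the most recently added square."""
    shapes = [initial]
    for _ in range(steps):
        shapes.extend(apply_rule(shapes[-1])[1:])
    return shapes

if __name__ == "__main__":
    for s in derive(Square(0.0, 0.0, 16.0), steps=3):
        print(s)   # the initial square plus three progressively smaller ones
```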

Career

Gips joined the University of California, Los Angeles in 1974 as an assistant research computer scientist. While working there, he wrote the book “Algorithmic Aesthetics” with George Stiny. In 1976, he left UCLA and began teaching at Boston College. In 1979, while still teaching at Boston College, Gips joined the Harvard University Summer School as an associate professor and taught there until 1983. In 1993, Gips, along with Peter Olivieri and Joseph Tecce, developed EagleEyes, a technology that allows people with disabilities to control a mouse pointer on a computer screen just by moving their eyes. [2] [8] EagleEyes uses electrodes placed on the person's head to track eye movements and move the mouse pointer accordingly. [9] EagleEyes was a finalist for the Discover Magazine Technological Innovation Awards in 1994 [10] and in 2006 was named a Tech Museum Award Laureate by the Tech Museum of San Jose. [11]
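
As the title of the cited paper suggests, the electrodes are placed around the eyes and the resulting voltage signals are translated into pointer coordinates. The sketch below is only an illustration of one way such a mapping could work, not the actual EagleEyes software; the screen size, per-channel gains, offsets, and smoothing factor are hypothetical values that a real system would obtain through calibration.

```python
# Illustrative sketch only -- not the actual EagleEyes implementation.
# Two electrode voltage channels (horizontal and vertical) are mapped to
# screen coordinates with assumed gains and offsets, plus exponential
# smoothing to damp signal noise.

class EOGToCursor:
    def __init__(self, screen_w=1920, screen_h=1080,
                 gain_x=400.0, gain_y=300.0,     # pixels per volt (assumed)
                 offset_x=0.0, offset_y=0.0,     # volts measured at screen center
                 alpha=0.2):                     # smoothing factor in (0, 1]
        self.screen_w, self.screen_h = screen_w, screen_h
        self.gain_x, self.gain_y = gain_x, gain_y
        self.offset_x, self.offset_y = offset_x, offset_y
        self.alpha = alpha
        self._x = self._y = None

    def update(self, volts_h, volts_v):
        """Map one voltage sample to a smoothed, clamped cursor position."""
        raw_x = self.screen_w / 2 + self.gain_x * (volts_h - self.offset_x)
        raw_y = self.screen_h / 2 - self.gain_y * (volts_v - self.offset_y)
        if self._x is None:                      # first sample: no history yet
            self._x, self._y = raw_x, raw_y
        else:                                    # exponential moving average
            self._x += self.alpha * (raw_x - self._x)
            self._y += self.alpha * (raw_y - self._y)
        x = min(max(int(self._x), 0), self.screen_w - 1)
        y = min(max(int(self._y), 0), self.screen_h - 1)
        return x, y

# Example: +0.5 V on the horizontal channel maps to a point right of center.
print(EOGToCursor().update(0.5, 0.0))            # -> (1160, 540)
```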

While working on a successor for EagleEyes, Gips and Margrit Betke thought of a program that would allow people to use a mouse with the movement of their head. The idea resulted in Camera Mouse. [12] [13] The application uses a standard webcam to track head movements and move the mouse pointer accordingly. A free public version of Camera Mouse was launched in 2007 and has over 3,000,000 downloads. [14]
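
Camera Mouse replaces the electrode voltages with the image position of a tracked feature on the user's face or head. The sketch below shows one simple way to do such tracking, using normalized template matching on webcam frames; it is a rough illustration rather than the released program, it assumes OpenCV is installed, and the patch size, screen size, starting feature location, and printing of coordinates instead of moving the real pointer are all choices made for the example.

```python
# Rough illustration in the spirit of Camera Mouse (not the released program).
# A small patch around a chosen facial feature is tracked frame to frame with
# normalized template matching, and its position is mapped to screen
# coordinates.  Requires OpenCV: pip install opencv-python

import cv2

SCREEN_W, SCREEN_H = 1920, 1080
HALF = 16                                   # template is a 32x32 pixel patch

cap = cv2.VideoCapture(0)                   # default webcam
ok, frame = cap.read()
if not ok:
    raise RuntimeError("could not read from the webcam")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
h, w = gray.shape
fy, fx = h // 2, w // 2                     # assume the feature starts at frame center
template = gray[fy - HALF:fy + HALF, fx - HALF:fx + HALF].copy()

for _ in range(300):                        # track a few hundred frames, then stop
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (bx, by) = cv2.minMaxLoc(scores)  # best-matching patch location
    cx, cy = bx + HALF, by + HALF              # feature center in image coordinates
    px = int(SCREEN_W * (1 - cx / w))          # mirror x so moving right moves the pointer right
    py = int(SCREEN_H * cy / h)
    print(px, py)                              # a real driver would move the OS cursor here

cap.release()
```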

Gips also collaborated with S. Adam Brasel on research into technology [15] and consumer behavior. [16] [17]

References

  1. Profile at Boston College
  2. "New Computer Gives Disabled a Voice". Boston Herald. November 20, 1994.
  3. Gips, James; Olivieri, Peter; Tecce, Joseph (1993). "Direct Control of the Computer through Electrodes Placed Around the Eyes" (PDF). Human-Computer Interaction: Applications and Case Studies: 630–635.
  4. DiMattia, Philip; Curran, Francis X.; Gips, James (2000). An Eye Control Teaching Device for Students Without Language Expressive Capacity: EagleEyes. Edwin Mellen Press. ISBN 978-0-7734-7639-4.
  5. "James Gips".
  6. "Software program catches on with disabled individuals". The Boston Globe .
  7. "Meet Professor James Gips and the EagleEyes Project".
  8. "At a Glance, A Computer Comes Alive". The New York Times. August 4, 1996.
  9. DiMattia, Philip; Osborne, Allan (2016). "The Camera Mouse: Visual tracking of body features to provide computer access for people with severe disabilities". IEEE Transactions on Neural Systems and Rehabilitation Engineering. 10 (1): 1–10. doi:10.1109/TNSRE.2002.1021581. ISBN   978-1540461049. PMID   12173734. S2CID   1028698.
  10. "The Camera Mouse: Visual Tracking of Body Features to Provide" (PDF).
  11. "High-tech in service of others".
  12. Gips, James; Betke, Margrit; Fleming, Peter (2000). "The Camera Mouse: Preliminary Investigation of Automated Visual Tracking for Computer Access". Proceedings of RESNA 2000: 98–100.
  13. Betke, Margrit; Gips, James; Fleming, Peter (2002). "The Camera Mouse: Visual Tracking of Body Features to Provide Computer Access for People with Severe Disabilities". IEEE Transactions on Neural Systems and Rehabilitation Engineering. 10 (1): 1–10. CiteSeerX   10.1.1.14.6782 . doi:10.1109/TNSRE.2002.1021581. PMID   12173734. S2CID   1028698.
  14. "Software program catches on with disabled individuals". The Boston Globe. December 23, 2016.
  15. "Why Shopping On A Tablet Makes You More Likely To Buy". Forbes .
  16. Brasel, S. Adam; Gips, James (November 2008). "Breaking Through Fast-Forwarding: Brand Information and Visual Attention". Journal of Marketing. 72 (6): 31–48. doi:10.1509/jmkg.72.6.31.
  17. "How Brand Priming Influences Consumer Behavior".