Thad Starner

Thad Eugene Starner is a founder and director of the Contextual Computing Group at Georgia Tech's College of Computing, where he is a full professor. He is a pioneer of wearable computing, as well as of human-computer interaction, augmented environments, and pattern recognition.[1][2] A strong advocate of continuous-access, everyday-use systems, Starner has worn his own customized wearable computer continuously since 1993. His work spans handwriting and sign-language analysis, intelligent agents, and augmented reality. He also helped found Charmed Technology.

Biography

Education

Starner graduated with honors from Dallastown Area High School in York, Pennsylvania, in 1987. In 1986, before graduating, he won a talent show in technological science for one of the first AI puzzle-solving PC simulations, which gained him early recognition. He earned a B.S. in Brain and Cognitive Science (1991) and a B.S. in Computer Science (1991) from the Massachusetts Institute of Technology, followed by an M.S. in Media Arts and Sciences and a Ph.D. in Media Arts and Sciences (1999) from the MIT Media Laboratory. His doctoral thesis, "Wearable Computing and Contextual Awareness," dealt with pattern recognition and with how wearable computing can be used for tasks such as recognizing the hand motions of American Sign Language.[3][4][5]

Wearable computing

Starner is probably best known as a strong advocate of wearable computing. Before his time at the MIT Media Lab, he had already helped create one of the earliest high-accuracy online cursive handwriting recognition systems in 1993 as an associate scientist with BBN's Speech Systems Group;[1][5] at the Media Lab he became one of the world's leading experts on wearable computing. He is a co-founder of the IEEE International Symposium on Wearable Computers (ISWC) and a co-founder and first member of the MIT Wearable Computing Project, where he was one of the first six "cyborgs" involved.[1] Since 1993, Starner has worn his own customized wearable computer system full-time, arguably one of the longest such stretches, if not the longest. He designed the hardware for his system, dubbed "The Lizzy," based on the wearable "hip PC" designed by Doug Platt, who built Starner's original wearable. The original system consisted of custom parts from a kit made by Park Enterprises, a Private Eye display, and a Twiddler chorded keyboard.[6][7][8] As of January 29, 2008, Starner's setup had evolved to include a heads-up display with 640×480 resolution, a Twiddler, and an OQO Model 01 Ultra-Mobile PC (though the specifications listed suggest an OQO Model 01+) with a GHz-class processor, 512 MB of RAM, a 30 GB hard disk, and built-in USB 2.0, FireWire, and Wi-Fi, as well as a mobile phone with cellular Internet access.[9]
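The Twiddler's appeal rests on a simple combinatorial fact: pressing several keys at once gives a small one-handed keyboard far more distinct inputs ("chords") than it has keys. A minimal sketch of the counting argument (the key labels and key counts here are illustrative, not the Twiddler's actual layout):

```python
from itertools import combinations

def count_chords(num_keys: int) -> int:
    """Number of distinct non-empty key combinations (chords)."""
    return 2 ** num_keys - 1

def enumerate_chords(keys):
    """Yield every non-empty chord as a tuple of keys."""
    for r in range(1, len(keys) + 1):
        yield from combinations(keys, r)

# A 12-key one-handed board already offers thousands of chords,
# ample room for letters, digits, and commands.
print(count_chords(12))                     # 4095
print(len(list(enumerate_chords("ABCD"))))  # 15
```

In practice only comfortable, easily memorized chords are assigned, but the exponential headroom is what lets a pocket-sized device cover a full keyboard's repertoire.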

Wearing a computer gives Starner several benefits: he can type and access the Internet while walking around or talking to others; he can take notes on a conversation in real time, open notes on a given subject, and e-mail them at any moment; he can even hold two conversations at once, one online and one face to face; and if he comes across something he doesn't know or recognize, he can look it up instantly.[9] Beyond augmenting the outside world, having a computer on at all times steadies Starner's nerves when giving talks; he has a speech impediment but is able to speak more clearly when prompted by a computer.[10]

Starner is a Technical Lead/Manager on Project Glass, Google's wearable computing project.[11]

Other research

One of his prominent research focuses is applying wearable computing to American Sign Language (ASL). This work aims to bridge the deaf and hearing communities with a one-way ASL-to-English translator. Starner is also researching dual-purpose speech, in which a wearable computer interprets certain speech patterns and brings up appropriate programs, such as a calendar when an appointment is being scheduled.[5][9][12] In addition, Starner has been involved in the Aware Home project, which uses technology to create an interactive, personalizable home environment for individuals who would not otherwise be able to live independently.[13]
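Sign-language recognition of the kind described above is commonly modeled with hidden Markov models over tracked hand features, decoded with the Viterbi algorithm. The sketch below shows that core decoding step on a toy discrete HMM; all states, observations, and probabilities here are invented for illustration and are not taken from Starner's system:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for a discrete-observation HMM.

    Works in log-space to avoid numeric underflow on long streams.
    """
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][obs[t]]), p)
                for p in states
            )
            V[t][s] = prob
            back[t][s] = prev
    # Trace the best path backwards from the most probable final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy example: two hidden "sign phases" emitting coarse hand positions.
states = ("rest", "sign")
start = {"rest": 0.8, "sign": 0.2}
trans = {"rest": {"rest": 0.7, "sign": 0.3},
         "sign": {"rest": 0.4, "sign": 0.6}}
emit = {"rest": {"low": 0.9, "high": 0.1},
        "sign": {"low": 0.2, "high": 0.8}}
print(viterbi(["low", "low", "high", "high"], states, start, trans, emit))
# ['rest', 'rest', 'sign', 'sign']
```

A real recognizer would train one such model per sign on continuous hand-tracking features rather than hand-written tables, but the decoding principle is the same.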

Starner's preeminence in his field earned him a spot on Technology Review's TR100 list of 100 remarkable innovators (since renamed the TR35 and limited to thirty-five winners) in 1999.[2] His work has been featured on CBS's 60 Minutes, CNN, and the BBC, and in The Wall Street Journal, and has been demonstrated to a number of Fortune 500 companies, including Merrill Lynch, IBM, and Motorola.[1]


References

  1. "Thad Starner Home CV". Retrieved 2009-01-26.
  2. "TR35: Thad Starner". MIT Technology Review: TR35. Retrieved 2009-01-26.
  3. "Thad Starner's old MIT profile". Archived from the original on March 20, 2008. Retrieved 2009-01-26.
  4. "Thad Starner Bio". PBS Ask the Scientists. Retrieved 2009-01-26.
  5. "Lecturer at NJIT Asks is Time Right Yet for Wearable Computers?" (Press release). New Jersey Institute of Technology. 2004-04-07. Archived from the original on 2006-09-18. Retrieved 2009-01-27.
  6. Mann, Steve. "How to build a version of 'WearComp6'". Retrieved 2009-01-30.
  7. Rhodes, Bradley. "A brief history of wearable computing". Retrieved 2009-01-30.
  8. "The Lizzy". Retrieved 2009-01-28.
  9. "Gartner's Interview with Thad Starner". Gartner. 2008-01-29. Archived from the original on November 15, 2008. Retrieved 2009-01-30.
  10. Bass, Thomas A. (1998-04-01). "Dress Code". Wired. Retrieved 2009-01-27.
  11. "Thad Starner Home".
  12. "Face-to-Face Meetings". Research Horizons. 2003-12-11. Retrieved 2009-01-28.
  13. http://awarehome.imtc.gatech.edu/