Daniel Wigdor |
---|---
Born |
Nationality | Canadian, Irish
Occupation(s) | Computer scientist, entrepreneur, investor, expert witness, and author
Academic background |
Education | Hon. B.S., Computer Science; M.Sc., Computer Science; Ph.D., Computer Science
Alma mater | University of Toronto
Academic work |
Institutions | University of Toronto
Daniel Wigdor is a Canadian computer scientist, entrepreneur, investor, expert witness, and author. He is the associate chair of Industrial Relations and a professor in the Department of Computer Science at the University of Toronto.[1]
Wigdor is best known for his work in human-computer interaction, including his research on sensing technologies, operating system architectures, AI systems, manufacturing methods, haptic feedback devices, development tools, and software systems. He has founded several companies, including Iota Wireless, Tactual Labs, and Chatham Labs (sold to Facebook in 2020). His authored works include publications in academic journals such as IEEE Transactions on Visualization and Computer Graphics[2] and the book Brave NUI World: Designing Natural User Interfaces for Touch and Gesture.[3] He is also the recipient of a 2015 Alfred P. Sloan Research Fellowship in Computer Science.[4]
Wigdor earned his B.S. from the University of Toronto in 2002, followed by an M.Sc. in Computer Science from the same university in 2004. He obtained his Ph.D. in Computer Science there in 2008, while completing a fellowship at Harvard University.[1]
Wigdor co-founded Iota Wireless in 2003, focusing on text-entry techniques for mobile phones, and worked there until 2010. From 2008 to 2010 he worked at Microsoft, where he focused on creating high-quality user experiences for natural user interfaces.[1]
In 2012, Wigdor co-founded Tactual Labs and served as its director and science advisor until 2016. The startup focused on high-performance user input for interactive computers. The following year, he co-founded Trace, formerly known as Addem Labs, and served as its science advisor and director until 2022; this Toronto-based startup developed rapid prototyping tools for printed circuit boards. In 2018, he co-founded Chatham Labs to conduct research on operating systems and platforms for ubiquitous computing, serving as chief scientist until its acquisition by Facebook in September 2020. Following the acquisition, from 2020 to 2023, he served as director of Meta's Reality Labs Research (RLR) Toronto, formerly Chatham Labs.[5]
Wigdor holds over 60 patents for projects in human-computer interaction.[6]
Wigdor has served as an expert witness in high-profile technology cases. Notably, he served as a testifying expert witness for Quinn Emanuel in Apple Inc. v. Samsung Electronics Co., Ltd. in the US District Court for the Northern District of California, where he prepared expert reports and testified on the invalidity and non-infringement of US patent #8,074,172.[7] He also served as an expert witness for Qualcomm, providing testimony on Qualcomm's patent portfolio in its litigation with Apple Inc., represented by Quinn Emanuel Urquhart & Sullivan, LLP.[8]
Wigdor joined the University of Toronto as an undergraduate in 1998, graduating in 2002, and completed his master's degree there in 2004 and his Ph.D. in 2008. Between 2007 and 2008, he was a fellow at the Initiative in Innovative Computing at Harvard University. From 2010 to 2012, he served as an affiliate assistant professor at the University of Washington. In 2011, he joined the University of Toronto as an assistant professor, a position he held until 2016, when he became an associate professor, serving until 2021. He also held a visiting appointment as an associate professor at Cornell University from 2017 to 2018. He has been the associate chair of Industrial Relations in the Department of Computer Science since 2020 and a professor at the University of Toronto since 2021.[1]
In 2011, Wigdor collaborated with Dennis Wixon on the book Brave NUI World: Designing Natural User Interfaces for Touch and Gesture.[3] He has also contributed to books including Tabletops - Horizontal Interactive Displays[9] and Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications.[10]
Wigdor's research spans operating system architectures, sensing methods, AI systems, interaction techniques, and human-AI interaction. In 2003, he introduced TiltText, a language-independent technique for text entry on mobile phones, offering a potential improvement in typing speed despite a higher error rate and placing it among the fastest known methods for this purpose.[11] His 2004 work explored interactive 3D visualization and manipulation techniques using volumetric displays, emphasizing direct gestural interaction with virtual objects enabled by finger motion tracking, within a prototype geometric modeling application.[12] He later introduced LucidTouch, a mobile device interface that lets users interact with applications by touching the rear of the device; this approach tackled finger occlusion and enabled multi-touch input, with initial study results indicating a user preference for the method due to reduced occlusion, higher precision, and the ability to use multiple fingers.[13] His 2018 study with Z. Lu and others presented findings from a mixed-methods study of live streaming practices in China, combining an online survey of 527 users with interviews of 14 active users, and revealed insights into content categories, viewer engagement, reward systems, fan group-chat dynamics, and desires for deeper interaction mechanisms among both viewers and streamers.[14] In 2022, he presented a comprehensive examination of existing methodologies for designing gesture vocabularies, identifying 13 crucial factors influencing their design, evaluating associated evaluation methods and interaction techniques, and proposing future research directions toward a more holistic and user-centered approach to gesture design.[15]
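The core idea behind TiltText can be illustrated with a short sketch. The Python snippet below is a simplified, hypothetical rendering of tilt-based key disambiguation, not the published implementation; the dead-zone threshold and the direction-to-letter mapping are illustrative assumptions.

```python
# Illustrative sketch of TiltText-style disambiguation (assumed mapping,
# not the published implementation): the phone's tilt at key-press time
# selects among the letters sharing a numeric key.

KEY_LETTERS = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def tilt_direction(roll: float, pitch: float, dead_zone: float = 10.0) -> str:
    """Classify tilt angles (degrees) into left / neutral / right / forward."""
    if abs(roll) <= dead_zone and abs(pitch) <= dead_zone:
        return "neutral"
    if abs(roll) >= abs(pitch):
        return "left" if roll < 0 else "right"
    return "forward"

def tilt_text(key: str, roll: float, pitch: float) -> str:
    """Pick a letter on `key` from the tilt measured when it was pressed."""
    letters = KEY_LETTERS[key]
    index = {"left": 0, "neutral": 1, "right": 2, "forward": 3}
    return letters[min(index[tilt_direction(roll, pitch)], len(letters) - 1)]

# Pressing "7" while tilting the phone forward selects "s".
print(tilt_text("7", roll=0.0, pitch=25.0))  # -> s
```

Because the tilt is read at the moment of the key press, a single press suffices to select a letter, which is what makes tilt-based entry potentially faster than multi-tap despite its higher error rate.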
In computing, a pointing device gesture or mouse gesture is a way of combining pointing device or finger movements and clicks that the software recognizes as a specific computer event and responds to accordingly. They can be useful for people who have difficulties typing on a keyboard. For example, in a web browser, a user can navigate to the previously viewed page by pressing the right pointing device button, moving the pointing device briefly to the left, then releasing the button.
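A minimal sketch of how such a gesture might be recognized is shown below, assuming a recorder that samples pointer positions while the right button is held; the distance threshold and command names are illustrative assumptions, not taken from any particular browser.

```python
# Minimal mouse-gesture recognizer sketch (illustrative thresholds and
# command names): reduce the pointer path recorded while the right
# button was held down to a single stroke direction.

def classify_stroke(points: list[tuple[int, int]], threshold: int = 50) -> str | None:
    """Return the dominant direction of a recorded pointer path, if any."""
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if max(abs(dx), abs(dy)) < threshold:
        return None  # movement too small to count as a gesture
    if abs(dx) >= abs(dy):
        return "left" if dx < 0 else "right"
    return "up" if dy < 0 else "down"

# A leftward stroke maps to "back", a rightward stroke to "forward".
COMMANDS = {"left": "history.back", "right": "history.forward"}

path = [(400, 300), (360, 298), (310, 301), (250, 299)]  # sampled drag
print(COMMANDS.get(classify_stroke(path)))  # -> history.back
```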
Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures.
A voice-user interface (VUI) enables spoken human interaction with computers, using speech recognition to understand spoken commands and answer questions, and typically text-to-speech to play a reply. A voice command device is a device controlled with a voice user interface.
In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. The origins of multi-touch began at CERN, MIT, University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s. CERN started using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays were popularized by Apple's iPhone in 2007. Multi-touch may be used to implement additional functionality, such as pinch to zoom or to activate certain subroutines attached to predefined gestures using gesture recognition.
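The pinch-to-zoom gesture reduces to simple arithmetic on two contact points. The sketch below is a generic illustration, not any platform's API: the zoom factor is the ratio of the current distance between the two touches to their distance when the gesture began.

```python
# Generic pinch-to-zoom arithmetic (illustrative, not a platform API):
# the scale factor is the ratio of the current distance between two
# contact points to their distance at the start of the gesture.

from math import dist  # Euclidean distance between two points

def pinch_zoom(start_a, start_b, now_a, now_b) -> float:
    """Return the scale factor implied by two moving touch points."""
    initial = dist(start_a, start_b)
    current = dist(now_a, now_b)
    return current / initial if initial else 1.0

# Fingers start 100 px apart and spread to 150 px: zoom in by 1.5x.
print(pinch_zoom((100, 100), (200, 100), (75, 100), (225, 100)))  # 1.5
```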
An interaction technique, user interface technique or input technique is a combination of hardware and software elements that provides a way for computer users to accomplish a single task. For example, one can go back to the previously visited page in a Web browser by clicking a button, pressing a key, performing a mouse gesture or uttering a speech command. It is a widely used term in human-computer interaction. In particular, the term "new interaction technique" is frequently used to introduce a novel user interface design idea.
In human–computer interaction, an organic user interface (OUI) is defined as a user interface with a non-flat display. After Engelbart and Sutherland's graphical user interface (GUI), which was based on the cathode ray tube (CRT), and Kay and Weiser's ubiquitous computing, which is based on the flat-panel liquid-crystal display (LCD), OUI represents one possible third wave of display interaction paradigms, pertaining to multi-shaped and flexible displays. In an OUI, the display surface is always the focus of interaction, and may actively or passively change shape upon analog inputs. These inputs are provided through direct physical gestures, rather than through indirect point-and-click control. The term "organic" in OUI was derived from organic architecture, referring to the adoption of natural form to design a better fit with human ecology. The term also alludes to the use of organic electronics for this purpose.
Hands-on computing is a branch of human-computer interaction research which focuses on computer interfaces that respond to human touch or expression, allowing the machine and the user to interact physically. Hands-on computing can make complicated computer tasks more natural to users by attempting to respond to motions and interactions that are natural to human behavior. Thus hands-on computing is a component of user-centered design, focusing on how users physically respond to virtual environments.
In computing, 3D interaction is a form of human-machine interaction in which users are able to move and perform interactions in 3D space. Both the human and the machine process information in which the physical position of elements in 3D space is relevant.
In computing, a natural user interface (NUI) or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. Examples include voice assistants such as Alexa and Siri, touch and multi-touch interactions on today's mobile phones and tablets, and touch interfaces invisibly integrated into textiles and furniture.
Human–computer interaction (HCI) is research in the design and the use of computer technology, which focuses on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "human-computer interface (HCI)".
The DiamondTouch table is a multi-touch, interactive PC interface product from Circle Twelve Inc. It is a human interface device that has the capability of allowing multiple people to interact simultaneously while identifying which person is touching where. The technology was originally developed at Mitsubishi Electric Research Laboratories (MERL) in 2001 and later licensed to Circle Twelve Inc in 2008. The DiamondTouch table is used to facilitate face-to-face collaboration, brainstorming, and decision-making, and users include construction management company Parsons Brinckerhoff, the Methodist Hospital, and the US National Geospatial-Intelligence Agency (NGA).
In computing, scratch input is an acoustic-based method of human-computer interaction (HCI) that takes advantage of the characteristic sound produced when a fingernail or other object is dragged over a surface, such as a table or wall. The technique is not limited to fingers; a stick or writing implement can also be used. The sound is often inaudible to the unaided ear, but specialized microphones can digitize the sounds for interactive purposes. Scratch input was invented by Mann et al. in 2007, though the term was first used by Chris Harrison et al.
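A toy sketch of how a scratch might be detected is shown below, under the assumption that scratch energy sits mostly in higher frequencies; the cutoff, energy floor, and 60% ratio are illustrative assumptions, not values from the original work.

```python
# Toy scratch-input detector (assumed cutoff/threshold values, not from
# the original work): flag a "scratch" when most of a windowed audio
# signal's energy lies above a high-frequency cutoff.

import numpy as np

def detect_scratch(samples: np.ndarray, rate: int,
                   cutoff_hz: float = 3000.0, min_energy: float = 1.0) -> bool:
    """Return True if the window is energetic and mostly high-frequency."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    total = spectrum.sum()
    high = spectrum[freqs >= cutoff_hz].sum()
    return total > min_energy and high / total > 0.6

# Synthetic check: a 6 kHz tone (scratch-like) vs. a 200 Hz hum.
rate = 44_100
t = np.arange(rate // 10) / rate
print(detect_scratch(np.sin(2 * np.pi * 6000 * t), rate))  # True
print(detect_scratch(np.sin(2 * np.pi * 200 * t), rate))   # False
```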
The Human Media Lab (HML) is a research laboratory in human-computer interaction at Queen's University's School of Computing in Kingston, Ontario. Its goals are to advance user interface design by creating and empirically evaluating disruptive new user interface technologies, and to educate graduate students in this process. The Human Media Lab was founded in 2000 by Prof. Roel Vertegaal and employs an average of 12 graduate students.
Chris Harrison is a British-born, American computer scientist and entrepreneur working in the fields of human–computer interaction, machine learning and sensor-driven interactive systems. He is a professor at Carnegie Mellon University and director of the Future Interfaces Group within the Human–Computer Interaction Institute. He has previously conducted research at AT&T Labs, Microsoft Research, IBM Research and Disney Research. He is also the CTO and co-founder of Qeexo, a machine learning and interaction technology startup.
Jacob O. Wobbrock is a Professor in the University of Washington Information School and, by courtesy, in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. He is Director of the ACE Lab, Associate Director and founding Co-Director Emeritus of the CREATE research center, and a founding member of the DUB Group and the MHCI+D degree program.
Ken Hinckley is an American computer scientist and inventor. He is a senior principal research manager at Microsoft Research. He is known for his research in human-computer interaction, specifically on sensing techniques, pen computing, and cross-device interaction.
Patrick Baudisch is a computer science professor and the chair of the Human Computer Interaction Lab at the Hasso Plattner Institute, Potsdam University. While his early research interests revolved around natural user interfaces and interactive devices, his research focus shifted to virtual reality and haptics in the late 2000s, and to digital fabrication, such as 3D printing and laser cutting, in the 2010s. Prior to teaching and researching at the Hasso Plattner Institute, Baudisch was a research scientist at Microsoft Research and Xerox PARC. He has been a member of the CHI Academy since 2013, and an ACM distinguished scientist since 2014. He holds a Ph.D. in Computer Science from the Department of Computer Science of the Technische Universität Darmstadt, Germany.
Yves Guiard is a French cognitive neuroscientist and researcher best known for his work on human laterality and stimulus-response compatibility in the field of human-computer interaction. He is a director of research at the French National Center for Scientific Research and has been a member of the CHI Academy since 2016. He is also an associate editor of ACM Transactions on Computer-Human Interaction and a member of the advisory council of the International Association for the Study of Attention and Performance.
Shumin Zhai is a Chinese-born American Canadian human-computer interaction (HCI) research scientist and inventor. He is known for his research on input devices and interaction methods, swipe-gesture-based touchscreen keyboards, eye-tracking interfaces, and models of human performance in human-computer interaction. His studies have contributed to both foundational models and understandings of HCI and to practical user interface designs and flagship products. He previously worked at IBM, where he invented the ShapeWriter text entry method for smartphones, a predecessor to the modern Swype keyboard. Dr. Zhai's publications have won the ACM UIST Lasting Impact Award and the IEEE Computer Society Best Paper Award, among others. He is currently a Principal Scientist at Google, where he leads and directs research, design, and development of human-device input methods and haptics systems.
Joseph J. LaViola Jr. is an American computer scientist, author, consultant, and academic. He holds the Charles N. Millican Professorship in Computer Science and leads the Interactive Computing Experiences Research Cluster at the University of Central Florida (UCF). He also serves as a visiting scholar in the Computer Science Department at Brown University, a consultant at JJL Interface Consultants, and co-founder of Fluidity Software.