| Bruno Zamborlin | |
| --- | --- |
| Born | July 1983 (age 40) |
| Alma mater | Goldsmiths, University of London; IRCAM – Centre Pompidou |
| Occupation(s) | Entrepreneur, artist |
Bruno Zamborlin (born 1983 in Vicenza) is an AI researcher, entrepreneur and artist based in London, working in the field of human-computer interaction. [1] [2] [3] His work focuses on converting physical objects into touch-sensitive, interactive surfaces using vibration sensors and artificial intelligence. In 2013 he founded Mogees Limited, [4] a start-up that transforms everyday objects into musical instruments and games using a vibration sensor and a mobile phone. With HyperSurfaces, [5] he converts physical surfaces of any material, shape and form into data-enabled, interactive surfaces using a vibration sensor and a coin-sized chipset. As an artist, he has created art installations around the world; his most recent work, a series of "sound furnitures", was showcased at the Italian Pavilion of the Venice Biennale 2023. [6] He has performed regularly with the UK-based electronic music duo Plaid (Warp Records), and is an honorary visiting research fellow at Goldsmiths, University of London. [7]
From 2008 to 2011, Zamborlin worked at IRCAM (Institute for Research and Coordination in Acoustics/Music) – Centre Pompidou as a member of the Sound Music Movement Interaction team. [8] Under the supervision of Frédéric Bevilacqua, he began experimenting with artificial intelligence applied to human movement, [9] and contributed to the creation of Gesture Follower, [10] [11] a software tool that analyses the body movements of performers and dancers through motion sensors in order to control sound and visual media in real time, slowing down or speeding up their playback according to the speed at which the gestures are performed. [12] [13]
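The following is a minimal sketch, in Python, of the idea behind gesture following rather than IRCAM's actual implementation: a live stream of motion-sensor frames is aligned against a pre-recorded template gesture, and the rate of progress through the template gives the factor by which media playback is slowed down or sped up. The class name, the frame-distance measure and the search window are illustrative assumptions.

```python
# A minimal sketch (not IRCAM's Gesture Follower) of gesture following:
# align live motion-sensor frames against a recorded template gesture and
# estimate playback speed from the rate of progress through the template.
import numpy as np

class SimpleGestureFollower:
    def __init__(self, template, search_width=5):
        self.template = np.asarray(template, dtype=float)  # (T, D) reference frames
        self.search_width = search_width                   # frames searched ahead
        self.position = 0                                  # current index in template

    def step(self, frame):
        """Consume one live sensor frame; return (progress 0..1, speed ratio)."""
        frame = np.asarray(frame, dtype=float)
        lo = self.position
        hi = min(len(self.template), lo + self.search_width + 1)
        # Pick the nearest template frame within a small forward window
        # (keeps the alignment monotonic, as a gesture only moves forward).
        dists = np.linalg.norm(self.template[lo:hi] - frame, axis=1)
        new_position = lo + int(np.argmin(dists))
        speed = new_position - self.position   # template frames advanced per input frame
        self.position = new_position
        progress = self.position / (len(self.template) - 1)
        return progress, speed

# Usage: drive media playback from the estimated progress and speed.
template = np.cumsum(np.random.randn(100, 3), axis=0)   # stand-in for a recorded gesture
follower = SimpleGestureFollower(template)
for live_frame in template[::2]:                         # performer moving twice as fast
    progress, speed = follower.step(live_frame)
print(f"final progress={progress:.2f}, last speed ratio={speed}")
```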
He has lived in London since 2011, where he completed a joint PhD in AI between Goldsmiths, University of London and IRCAM – Centre Pompidou/Pierre and Marie Curie University, Paris, [14] focusing on the concept of interactive machine learning [15] applied to digital musical instruments and the performing arts. [16]
Zamborlin founded Mogees Limited in 2013 in London, with IRCAM among the early partners. [17] Mogees transforms physical objects into musical instruments and games using a vibration sensor and a series of apps for smartphones and desktop. [18] [19] [20] [21] [22] [23] After a Kickstarter campaign in 2014, [24] Mogees was used both by everyday users [25] and by artists such as Rodrigo y Gabriela, [26] Jean-Michel Jarre [27] and Plaid. [28] [29] The algorithms implemented in these apps employ a variant of physical-modelling sound synthesis in which the vibrations produced by users interacting with the physical object are used as the exciter for a digital resonator running in the app. The result is a hybrid, half-acoustic and half-digital sound that is a function of both the software and the acoustic properties of the physical object the user chooses to play. [30]
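A minimal sketch of that exciter/resonator idea (not Mogees' actual algorithm): the contact-sensor signal drives a small bank of two-pole digital resonators, so the output mixes the acoustic character of the struck object with the tuning chosen in software. The mode frequencies, decay times and the synthetic input signal below are illustrative assumptions.

```python
# A minimal sketch of exciter/resonator ("modal") synthesis: the raw vibration
# signal excites a bank of two-pole resonant filters tuned in software.
import numpy as np

def modal_resonator_bank(excitation, sample_rate, modes):
    """modes: list of (frequency_hz, decay_seconds, gain) tuples."""
    out = np.zeros_like(excitation, dtype=float)
    for freq, decay, gain in modes:
        r = np.exp(-1.0 / (decay * sample_rate))      # pole radius from decay time
        theta = 2.0 * np.pi * freq / sample_rate      # pole angle from frequency
        a1, a2 = -2.0 * r * np.cos(theta), r * r      # two-pole resonator coefficients
        y1 = y2 = 0.0
        y = np.empty_like(out)
        for n, x in enumerate(excitation):            # y[n] = x[n] - a1*y[n-1] - a2*y[n-2]
            y0 = x - a1 * y1 - a2 * y2
            y[n], y2, y1 = y0, y1, y0
        out += gain * y
    return out / max(1, len(modes))

# Usage: a decaying noise burst stands in for the vibration picked up by the sensor.
sr = 44100
tap = np.random.randn(sr // 2) * np.exp(-np.linspace(0, 8, sr // 2))
hybrid = modal_resonator_bank(tap, sr, modes=[(220, 0.8, 1.0), (440, 0.5, 0.6), (660, 0.3, 0.4)])
print(hybrid.shape)
```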
In 2017, Zamborlin founded HyperSurfaces together with computational artist Parag K Mital [31] to merge "the physical and the digital worlds". HyperSurfaces technology converts surfaces of any material, shape and size into data-enabled interactive objects, employing a vibration sensor and proprietary AI algorithms running on a coin-sized chipset. [32] The vibrations generated by people's interactions with the surface are converted into an electric signal by a piezoelectric sensor and analysed in real time by AI algorithms running on the chipset. Whenever the AI recognises in the vibration signal one of the events predefined by the user, a corresponding notification message is generated in real time and sent to the target application. [33] The technology can be applied to anything from button-less human-computer interaction in automotive and smart-home settings to the Internet of things. [34] [35] [36] [37] Because the AI algorithms employed by HyperSurfaces run locally on the chipset, without the need to access cloud-based services, they are considered part of the field of edge computing; and because the AI is trained beforehand to recognise the events its users are interested in, the algorithms belong to the field of supervised machine learning. [38] [39]
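A minimal sketch of such a supervised, on-device pipeline (not HyperSurfaces' proprietary algorithms): short windows of the piezo signal are reduced to a handful of features, matched against event classes recorded and labelled beforehand, and a recognised event triggers a notification callback. The feature set, class names and distance threshold are illustrative assumptions.

```python
# A minimal sketch of a supervised vibration-event classifier: windows of the
# sensor signal are summarised by simple features and matched to the nearest
# class centroid learned from labelled recordings; a match emits a notification.
import numpy as np

def features(window):
    """Crude descriptors of a vibration window: energy, peak, zero-crossing rate."""
    w = np.asarray(window, dtype=float)
    zcr = np.mean(np.diff(np.signbit(w).astype(int)) != 0)
    return np.array([np.mean(w**2), np.max(np.abs(w)), zcr])

class VibrationEventClassifier:
    def __init__(self):
        self.centroids = {}                      # label -> mean feature vector

    def train(self, labelled_windows):
        """labelled_windows: list of (label, window) pairs recorded beforehand."""
        by_label = {}
        for label, window in labelled_windows:
            by_label.setdefault(label, []).append(features(window))
        self.centroids = {k: np.mean(v, axis=0) for k, v in by_label.items()}

    def classify(self, window, max_distance=2.0):
        f = features(window)
        label, dist = min(((k, np.linalg.norm(f - c)) for k, c in self.centroids.items()),
                          key=lambda kv: kv[1])
        return label if dist <= max_distance else None   # reject unknown events

def notify(event):
    print(f"event detected: {event}")            # stand-in for messaging the target app

# Usage with synthetic data: a sharp "knock" versus a sustained "swipe".
rng = np.random.default_rng(0)
knock = lambda: np.r_[rng.normal(0, 1, 16), np.zeros(240)]
swipe = lambda: rng.normal(0, 0.2, 256)
clf = VibrationEventClassifier()
clf.train([("knock", knock()) for _ in range(10)] + [("swipe", swipe()) for _ in range(10)])
event = clf.classify(knock())
if event:
    notify(event)
```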
In computing, a pointing device gesture or mouse gesture is a way of combining pointing device or finger movements and clicks that the software recognizes as a specific computer event and responds to accordingly. They can be useful for people who have difficulties typing on a keyboard. For example, in a web browser, a user can navigate to the previously viewed page by pressing the right pointing device button, moving the pointing device briefly to the left, then releasing the button.
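As a rough illustration of the browser example above (not any particular browser's implementation), the sketch below records pointer positions while the gesture button is held and maps a predominantly leftward stroke to a "back" action; the threshold and action names are illustrative assumptions.

```python
# A minimal sketch of mouse-gesture recognition: positions sampled while the
# gesture button is held are reduced to a net displacement and mapped to an action.
def recognise_mouse_gesture(positions, min_distance=50):
    """positions: [(x, y), ...] sampled while the gesture button was held."""
    if len(positions) < 2:
        return None
    dx = positions[-1][0] - positions[0][0]
    dy = positions[-1][1] - positions[0][1]
    if abs(dx) >= min_distance and abs(dx) > abs(dy):
        return "navigate_back" if dx < 0 else "navigate_forward"
    return None

print(recognise_mouse_gesture([(400, 300), (360, 302), (300, 298)]))  # -> navigate_back
```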
Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures.
A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. The initial name was Graspable User Interface, which is no longer used. The purpose of TUI development is to empower collaboration, learning, and design by giving physical forms to digital information, thus taking advantage of the human ability to grasp and manipulate physical objects and materials.
Ben Shneiderman is an American computer scientist, a Distinguished University Professor in the University of Maryland Department of Computer Science, which is part of the University of Maryland College of Computer, Mathematical, and Natural Sciences at the University of Maryland, College Park, and the founding director (1983-2000) of the University of Maryland Human-Computer Interaction Lab. He conducted fundamental research in the field of human-computer interaction, developing new ideas, methods, and tools such as the direct manipulation interface and his Eight Golden Rules of Interface Design.
Multimodal interaction provides the user with multiple modes of interacting with a system. A multimodal interface provides several distinct tools for input and output of data.
In artificial intelligence, an embodied agent, also sometimes referred to as an interface agent, is an intelligent agent that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment. A branch of artificial intelligence focuses on empowering such agents to interact autonomously with human beings and the environment. Mobile robots are one example of physically embodied agents; Ananova and Microsoft Agent are examples of graphically embodied agents. Embodied conversational agents are embodied agents that are capable of engaging in conversation with one another and with humans employing the same verbal and nonverbal means that humans do.
STEIM was a center for research and development of new musical instruments in the electronic performing arts, located in Amsterdam, Netherlands. Beginning in the 1970s, STEIM became known as a pioneering center for electronic music, where the specific context of electronic music was always strongly related to the physical and direct actions of a musician. In this tradition, STEIM supported artists in residence such as composers and performers, but also multimedia and video artists, helping them to develop setups which allowed for bespoke improvisation and performance with individually designed technology.
In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. Multi-touch originated at CERN, MIT, University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s. CERN started using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays were popularized by Apple's iPhone in 2007. Multi-touch may be used to implement additional functionality, such as pinch to zoom, or to activate certain subroutines attached to predefined gestures using gesture recognition.
The Hyperbow is an electronic violin bow interface that was developed as a result of an in-depth research project by students at MIT. The instrument is intended for use only by accomplished players and was designed to amplify their gestures, opening up supplementary sound and musical control possibilities. It offers the violin player a range of expressive possibilities in its form as an augmented bow controller that lends itself to the control of bowed string physical models.
Robotics is the interdisciplinary study and practice of the design, construction, operation, and use of robots.
In computing, a natural user interface (NUI) or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. Examples include voice assistants, such as Alexa and Siri, touch and multi-touch interactions on today's mobile phones and tablets, but also touch interfaces invisibly integrated into textiles and furniture.
Human-computer interaction (HCI) is research in the design and the use of computer technology, which focuses on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "human-computer interface (HCI)".
Sketch recognition describes the process by which a computer or artificial intelligence can interpret hand-drawn sketches created by a human being or another machine. Sketch recognition is a key frontier in the field of artificial intelligence and human-computer interaction, similar to natural language processing or conversational artificial intelligence.
In the field of gesture recognition and image processing, finger tracking is a high-resolution technique, developed in 1969, that is used to determine the successive positions of the user's fingers and hence represent objects in 3D. In addition, the finger tracking technique can serve as an input device for the computer, acting as an external device similar to a keyboard and a mouse.
Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. Sonic interaction design is at the intersection of interaction design and sound and music computing. If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, in sonic interaction design, sound is mediating interaction either as a display of processes or as an input medium.
PrimeSense was an Israeli 3D sensing company based in Tel Aviv. PrimeSense had offices in Israel, North America, Japan, Singapore, Korea, China and Taiwan. PrimeSense was bought by Apple Inc. for $360 million on November 24, 2013.
Andrea Cera is an Italian electroacoustic composer, sound designer and sound installation artist.
uSens, Inc. is a Silicon Valley startup founded in 2014 in San Jose, California. The company's core team includes researchers and developers who build interactive and immersive computer-vision tracking solutions. The uSens team has extensive experience in artificial intelligence (AI), computer vision, 3D Human–computer interaction (HCI) technology and augmented reality and virtual reality. uSens has been applying computer vision and AI technologies in AR/VR, Automotive and smartphones.
Force Touch is a haptic technology developed by Apple Inc. that enables trackpads and touchscreens to distinguish between various levels of force being applied to their surfaces. It uses pressure sensors to add another method of input to Apple's devices. The technology was first unveiled on September 9, 2014, during the introduction of Apple Watch. Starting with the Apple Watch, Force Touch has been incorporated into many products within Apple's lineup. This notably includes MacBooks and the Magic Trackpad 2. The technology is known as 3D Touch on the iPhone models. The technology brings usability enhancements to the software by offering a third dimension to accept input. Accessing shortcuts, previewing details, drawing art and system wide features enable users to additionally interact with the displayed content by applying force on the input surface.
Chris Harrison is a British-born, American computer scientist and entrepreneur, working in the fields of human–computer interaction, machine learning and sensor-driven interactive systems. He is a professor at Carnegie Mellon University and director of the Future Interfaces Group within the Human–Computer Interaction Institute. He has previously conducted research at AT&T Labs, Microsoft Research, IBM Research and Disney Research. He is also the CTO and co-founder of Qeexo, a machine learning and interaction technology startup.