Bert Schiettecatte

Bert Schiettecatte (born January 1, 1979) is a Belgian entrepreneur who created the AudioCubes.

Biography

Bert Schiettecatte was born in Ghent, Belgium, and has a background in electronic music production. He holds an MSc in computer science from the Vrije Universiteit Brussel (VUB) and a Master of Arts (MA/MST) in Music, Science and Technology from CCRMA, a department at Stanford University, which he attended on a BAEF grant.

While studying at CCRMA, Bert Schiettecatte developed a strong interest in hardware engineering, electronics, and human-computer interaction. Together with Eto Otitigbe and Luigi Castelli, he created a customized dance pad [1] and a laser harp [2] (similar to the one used by Jean-Michel Jarre).

After holding several research positions, Schiettecatte founded his company Percussa in 2004 to develop tangible user interface technology for music making. Percussa's first product, AudioCubes, was launched in January 2007.[3]

Related Research Articles

<span class="mw-page-title-main">Electronic musical instrument</span> Musical instrument that uses electronic circuits to generate sound

An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument works by outputting an electrical, electronic or digital audio signal that is ultimately fed into a power amplifier, which drives a loudspeaker and creates the sound heard by the performer and listener.
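
As a minimal illustration of the digital case, the following Python sketch (not tied to any particular instrument) computes one second of a 440 Hz sine tone and writes it to a WAV file; an electronic instrument would send such a signal to a power amplifier and loudspeaker in real time.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100   # samples per second
FREQUENCY = 440.0     # concert A, in Hz
DURATION = 1.0        # seconds

# Generate 16-bit PCM samples of a sine wave at half full scale.
frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION)):
    sample = 0.5 * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
    frames += struct.pack('<h', int(sample * 32767))

# Write the signal out as a mono WAV file.
with wave.open('tone.wav', 'wb') as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(SAMPLE_RATE)
    wf.writeframes(bytes(frames))
```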

<span class="mw-page-title-main">Interactive art</span> Creative works that rely on viewer input and feedback to provoke emotional responses

Interactive art is a form of art that involves the spectator in a way that allows the art to achieve its purpose. Some interactive art installations achieve this by letting the observer walk through, over or around them; others ask the artist or the spectators to become part of the artwork in some way.

<span class="mw-page-title-main">ChucK</span> Audio programming language

ChucK is a concurrent, strongly timed audio programming language for real-time synthesis, composition, and performance, which runs on Linux, Mac OS X, Microsoft Windows, and iOS. It is designed to favor readability and flexibility for the programmer over other considerations such as raw performance. It natively supports deterministic concurrency and multiple, simultaneous, dynamic control rates. Another key feature is the ability to live code: adding, removing, and modifying code on the fly while the program is running, without stopping or restarting. It has a highly precise timing/concurrency model, allowing for arbitrarily fine granularity. It offers composers and researchers a powerful and flexible programming tool for building and experimenting with complex audio synthesis programs and real-time interactive control.
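
The snippet below is not ChucK code; it is a small Python sketch of the strongly timed idea described above, in which logical time is counted in samples and advances only when the program explicitly waits, so event scheduling stays deterministic regardless of wall-clock jitter.

```python
SAMPLE_RATE = 44100  # logical time is counted in audio samples

class StronglyTimedTask:
    """A toy task with an explicit logical clock, in the spirit of a ChucK shred."""

    def __init__(self):
        self.now = 0  # current logical time, in samples

    def advance(self, seconds):
        # Time moves forward only here, by an exact, deterministic number of samples.
        self.now += int(seconds * SAMPLE_RATE)

task = StronglyTimedTask()
for beat in range(4):
    print(f"trigger note at sample {task.now}")
    task.advance(0.5)  # wait exactly half a second of logical time
```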

<span class="mw-page-title-main">Gesture recognition</span> Topic in computer science and language technology

Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures. Gestures can originate from any bodily motion or state, but commonly originate from the face or hand. One area of the field is emotion recognition derived from facial expressions and hand gestures. Users can make simple gestures to control or interact with devices without physically touching them. Many approaches use cameras and computer vision algorithms to interpret sign language; the identification and recognition of posture, gait, proxemics, and human behaviors are also the subject of gesture recognition techniques. Gesture recognition is a path for computers to begin to better understand and interpret human body language, previously not possible through text or unenhanced graphical (GUI) user interfaces.
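
As a toy example of this kind of interpretation, the hypothetical function below classifies a one-finger swipe from a short sequence of (x, y) positions; real systems use far richer models, but the basic step of turning motion data into a discrete gesture label looks roughly like this.

```python
def classify_swipe(points, min_distance=50.0):
    """Classify a swipe from a list of (x, y) samples in pixel coordinates.

    Returns 'left', 'right', 'up' or 'down', or None if the motion is too small
    to count as a gesture. The y axis is assumed to increase downward.
    """
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < min_distance:
        return None
    if abs(dx) >= abs(dy):
        return 'right' if dx > 0 else 'left'
    return 'down' if dy > 0 else 'up'

# A roughly horizontal drag to the right is classified as 'right'.
print(classify_swipe([(10, 100), (60, 105), (140, 110)]))
```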

In computer science, interactive computing refers to software which accepts input from the user as it runs.

<span class="mw-page-title-main">Tangible user interface</span>

A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. The initial name was Graspable User Interface, which is no longer used. The purpose of TUI development is to empower collaboration, learning, and design by giving physical forms to digital information, thus taking advantage of the human ability to grasp and manipulate physical objects and materials.

<span class="mw-page-title-main">Hiroshi Ishii (computer scientist)</span> Japanese computer scientist

Hiroshi Ishii is a Japanese computer scientist. He is a professor at the Massachusetts Institute of Technology. Ishii pioneered the tangible user interface in the field of human-computer interaction with the paper "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms", co-authored with his then PhD student Brygg Ullmer.

<span class="mw-page-title-main">Reactable</span> Electronic musical instrument

The Reactable is an electronic musical instrument with a tabletop tangible user interface that was developed within the Music Technology Group at the Universitat Pompeu Fabra in Barcelona, Spain by Sergi Jordà, Marcos Alonso, Martin Kaltenbrunner and Günter Geiger.

<span class="mw-page-title-main">Multi-touch</span> Technology

In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. The origins of multi-touch lie at CERN, MIT, the University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s. CERN started using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays, which support a form of gesture recognition, were popularized by Apple's iPhone in 2007. Plural-point awareness may be used to implement additional functionality, such as pinch to zoom or activating certain subroutines attached to predefined gestures.
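
Pinch to zoom, mentioned above, reduces to comparing the distance between two contact points over time. The sketch below is only illustrative (the function name is invented); it computes the zoom factor implied by two touches moving apart or together.

```python
import math

def pinch_scale(initial_touches, current_touches):
    """Return the zoom factor implied by two touch points.

    Each argument is a pair of (x, y) contacts; the result is the ratio of the
    current finger spread to the initial spread (>1 zooms in, <1 zooms out).
    """
    def spread(touches):
        (x0, y0), (x1, y1) = touches
        return math.hypot(x1 - x0, y1 - y0)

    return spread(current_touches) / spread(initial_touches)

# Fingers start 100 px apart and end 150 px apart: zoom in by a factor of 1.5.
print(pinch_scale([(100, 200), (200, 200)], [(75, 200), (225, 200)]))
```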

<span class="mw-page-title-main">T. V. Raman</span> Indian computer scientist

T. V. Raman is a computer scientist who specializes in accessibility research. His research interests are primarily in the areas of auditory user interfaces and structured electronic documents. He has worked on speech interaction and markup technologies in the context of the World Wide Web at Digital's Cambridge Research Lab (CRL), Adobe Systems and IBM Research. He currently works at Google Research. Raman has himself been partially sighted since birth, and blind since the age of 14.

I-CubeX comprises a system of sensors, actuators and interfaces that are configured by a personal computer. Using MIDI, Bluetooth or the Universal Serial Bus (USB) as the basis for all communication, the complexity is managed behind a variety of software tools, including an end-user configuration editor, Max plugins, and a C++ Application Programming Interface (API), which allows applications to be developed on Mac OS X, Linux and Windows operating systems.
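
As an illustration of the MIDI side of such a system, the Python sketch below extracts control-change messages (a status byte in the range 0xB0 to 0xBF followed by a controller number and a 7-bit value) from a raw byte stream; the layout shown is generic MIDI, not the I-CubeX-specific protocol, which is not described here.

```python
def parse_control_changes(midi_bytes):
    """Return (channel, controller, value) tuples for each control-change message."""
    readings = []
    i = 0
    while i + 2 < len(midi_bytes):
        status = midi_bytes[i]
        if 0xB0 <= status <= 0xBF:        # control change; low nibble is the channel
            channel = status & 0x0F
            controller = midi_bytes[i + 1]
            value = midi_bytes[i + 2]
            readings.append((channel, controller, value))
            i += 3
        else:
            i += 1                        # skip bytes this sketch does not handle
    return readings

# Two sensor readings arriving as controllers 20 and 21 on channel 0.
print(parse_control_changes(bytes([0xB0, 20, 64, 0xB0, 21, 127])))
```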

<span class="mw-page-title-main">Bill Verplank</span>

William "Bill" Lawrence Verplank is a designer and researcher who focuses on interactions between humans and computers. He is one of the pioneers of interaction design, a field of design that focuses on users and technology, and a term he helped coin in the 1980s. He was previously a visiting scholar at Stanford University's CCRMA and was involved in Stanford's d.school. He also teaches and lectures internationally on interaction design.

Siftables are small computers that display graphics on their top surface and sense one another and how they are being moved. Siftables were developed as a platform for hands-on interactions with digital information and media and were the prototype for Sifteo cubes.

Sergi Jordà is a Catalan innovator, installation artist, digital musician and Associate Professor at the Music Technology Group, Universitat Pompeu Fabra in Barcelona. He is best known for directing the team that invented the Reactable. He is also a trained physicist.

Sound and music computing (SMC) is a research field that studies the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modeling and generating sound and music through computational approaches.

Albrecht Schmidt is a computer scientist best known for his work in ubiquitous computing, pervasive computing, and the tangible user interface. He is a professor at Ludwig Maximilian University of Munich where he joined the faculty in 2017.

<span class="mw-page-title-main">François Pachet</span> French computer scientist (born 1964)

François Pachet is a French scientist, composer and director of the Spotify Creator Technology Research Lab. Before joining Spotify he led Sony Computer Science Laboratory in Paris. He is one of the pioneers of computer music closely linked to artificial intelligence, especially in the field of machine improvisation and style modelling. He was elected an ECCAI Fellow in 2014.

<span class="mw-page-title-main">Audiocubes</span> Musical instrument

AudioCubes are a collection of wireless, intelligent, light-emitting objects capable of detecting each other's location and orientation, as well as user gestures. They were created by Bert Schiettecatte. They are an electronic musical instrument used by electronic musicians for live performance, sound design, music composition, and creating interactive applications in Max/MSP, Pd and C++.
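
The AudioCubes programming interface itself is not documented here; as a hypothetical example of the kind of mapping such an instrument enables, the sketch below converts a normalized proximity reading from a cube face into a filter cutoff frequency that a host patch (for example in Max/MSP or Pd) could apply to a synthesizer.

```python
def distance_to_cutoff(proximity, min_hz=200.0, max_hz=8000.0):
    """Map a normalized proximity reading (0.0 = far, 1.0 = near) to a cutoff in Hz.

    An exponential curve is used because it tends to feel more natural than a
    linear one for frequency control; the range limits here are arbitrary.
    """
    proximity = max(0.0, min(1.0, proximity))
    return min_hz * (max_hz / min_hz) ** proximity

# A hand moving toward a cube face sweeps the cutoff from 200 Hz up to 8 kHz.
for p in (0.0, 0.5, 1.0):
    print(round(distance_to_cutoff(p), 1))
```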

Dhairya Dand FRSA is an Indian-born, American inventor and artist based in New York City.

Leonello Tarabella is an Italian researcher, musician and composer. His work spans both academic research and artistic practice.

References