Human-based computation (HBC), human-assisted computation, [1] ubiquitous human computing or distributed thinking (by analogy to distributed computing) is a computer science technique in which a machine performs its function by outsourcing certain steps to humans, usually as microwork. This approach uses differences in abilities and alternative costs between humans and computer agents to achieve symbiotic human–computer interaction. For computationally difficult tasks such as image recognition, human-based computation plays a central role in training Deep Learning-based Artificial Intelligence systems. In this case, human-based computation has been referred to as human-aided artificial intelligence. [2]
In traditional computation, a human employs a computer [3] to solve a problem; a human provides a formalized problem description and an algorithm to a computer, and receives a solution to interpret. [4] Human-based computation frequently reverses the roles; the computer asks a person or a large group of people to solve a problem, [5] then collects, interprets, and integrates their solutions. This turns hybrid networks of humans and computers into "large scale distributed computing networks" [6] [7] [8] in which code is executed partly in human brains and partly on silicon-based processors.
Human-based computation (apart from the historical meaning of "computer") research has its origins in the early work on interactive evolutionary computation (EC). [9] The idea behind interactive evolutionary algorithms has been attributed to Richard Dawkins; in the Biomorphs software accompanying his book The Blind Watchmaker (Dawkins, 1986), [10] the preference of a human experimenter is used to guide the evolution of two-dimensional sets of line segments. In essence, this program asks a human to be the fitness function of an evolutionary algorithm, so that the algorithm can use human visual perception and aesthetic judgment to do something that a normal evolutionary algorithm cannot do. However, it is difficult to obtain enough evaluations from a single human when evolving more complex shapes. Victor Johnston [11] and Karl Sims [12] extended this concept by harnessing the power of many people for fitness evaluation (Caldwell and Johnston, 1991; Sims, 1991). As a result, their programs could evolve beautiful faces and pieces of art appealing to the public. These programs effectively reversed the common interaction between computers and humans: the computer is no longer an agent of its user but a coordinator aggregating the efforts of many human evaluators. These and other similar research efforts became the topic of research in aesthetic selection or interactive evolutionary computation (Takagi, 2001); however, the scope of this research was limited to outsourcing evaluation and, as a result, it did not explore the full potential of outsourcing.
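The role reversal described above can be illustrated with a short sketch of an interactive evolutionary loop in which a human rating replaces the usual fitness function. This is only an illustrative sketch in Python, not any of the historical systems: the population is kept deliberately small because every evaluation costs human attention, and console prompts stand in for a graphical interface that would display the candidate shapes.

```python
import random

# Minimal sketch of interactive evolutionary computation:
# the human replaces the fitness function of an ordinary evolutionary algorithm.

POP_SIZE = 9          # small, because every evaluation costs human attention
GENOME_LEN = 16

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.2) if random.random() < rate else g
            for g in genome]

def ask_human_rating(genome):
    # Hypothetical interface: in a real system this would render the genome
    # (e.g., a biomorph-like line drawing) and collect an aesthetic score.
    print("Candidate:", [round(g, 2) for g in genome])
    return float(input("Rate this shape 0-10: "))

def evolve(generations=5):
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        # The human acts as the fitness function (aesthetic selection).
        scored = [(ask_human_rating(g), g) for g in population]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        parents = [g for _, g in scored[:3]]
        population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]
    return population
```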
The concept of an automated Turing test, pioneered by Moni Naor (1996), [13] is another precursor of human-based computation. In Naor's test, the machine can control the access of humans and computers to a service by challenging them with a natural language processing (NLP) or computer vision (CV) problem in order to identify the humans among them. The set of problems is chosen so that, at present, it has no algorithmic solution that is both effective and efficient; if such an algorithm existed, it could easily be performed by a computer, defeating the test. In fact, Moni Naor was modest in calling this an automated Turing test: the imitation game described by Alan Turing (1950) proposed only a specific NLP task, whereas the Naor test identifies and explores a large class of problems, not necessarily from the domain of NLP, that can be used for the same purpose in both automated and non-automated versions of the test.
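The gatekeeping idea behind such a test can be pictured as a small challenge–response flow: the service issues a problem with no known effective and efficient algorithmic solution and grants access only if the response matches. The following Python fragment is a deliberately simplified, hypothetical illustration; the hard-coded challenge store and the omitted image-distortion step are placeholders, not a production CAPTCHA.

```python
import random
import secrets

# Simplified sketch of an automated Turing test (CAPTCHA-style gate).
# A real system would present a distorted image or another hard CV/NLP task;
# here the challenge store is a hypothetical placeholder.

CHALLENGES = {
    "c1": "7PK4Q",   # answer a human would read off a distorted image
    "c2": "XR29M",
}

def issue_challenge():
    # Pick a challenge to present to the requester of the service.
    return random.choice(list(CHALLENGES))

def verify(challenge_id, response):
    expected = CHALLENGES[challenge_id]
    # Constant-time comparison to avoid leaking the answer via timing.
    return secrets.compare_digest(expected, response.strip().upper())

def handle_request(challenge_id, user_response):
    if verify(challenge_id, user_response):
        return "access granted"   # presumed human
    return "access denied"        # presumed bot (or an unlucky human)
```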
Finally, the human-based genetic algorithm (HBGA) [14] encourages human participation in multiple different roles. Humans are not limited to the role of evaluator or some other predefined role, but can choose to perform a more diverse set of tasks. In particular, they can contribute innovative solutions to the evolutionary process, make incremental changes to existing solutions, and perform intelligent recombination. [15] In short, HBGA allows humans to participate in all operations of a typical genetic algorithm. As a result, HBGA can process solutions for which no computational innovation operators are available, for example, natural languages. Thus, HBGA obviated the need for a fixed representational scheme, which was a limiting factor of both standard and interactive EC. [16] These algorithms can also be viewed as novel forms of social organization coordinated by a computer, according to Alex Kosorukoff and David Goldberg. [17]
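A human-based genetic algorithm can be pictured as an ordinary genetic-algorithm loop in which every operator is an interface to people rather than a procedure. In the hypothetical sketch below, the functions ask_for_new_solution, ask_to_improve, ask_to_combine, and ask_to_rate stand in for web forms or prompts through which contributors act as the initialization, mutation, crossover, and selection operators.

```python
import random

# Minimal sketch of a human-based genetic algorithm (HBGA) over text solutions.
# Every genetic operator is delegated to human contributors via hypothetical
# interface functions (simulated here with console prompts).

def ask_for_new_solution(problem):
    return input(f"Propose a solution to '{problem}': ")             # initialization

def ask_to_improve(solution):
    return input(f"Improve this solution: '{solution}'\n> ")         # mutation

def ask_to_combine(a, b):
    return input(f"Combine these two ideas:\n 1) {a}\n 2) {b}\n> ")  # recombination

def ask_to_rate(solution):
    return float(input(f"Rate '{solution}' from 0 to 10: "))         # evaluation

def hbga(problem, pop_size=4, generations=3):
    population = [ask_for_new_solution(problem) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=ask_to_rate, reverse=True)
        survivors = scored[: pop_size // 2]                          # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            if len(survivors) >= 2 and random.random() < 0.5:
                children.append(ask_to_combine(*random.sample(survivors, 2)))
            else:
                children.append(ask_to_improve(random.choice(survivors)))
        population = survivors + children
    return max(population, key=ask_to_rate)
```

Because the solutions here are free-form strings rather than fixed-length genomes, the same loop can evolve natural-language content, which is exactly the kind of representation that standard computational operators cannot manipulate.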
Human-based computation methods combine computers and humans in different roles. Kosorukoff (2000) proposed a way to describe the division of labor in computation that groups human-based methods into three classes. The following table uses the evolutionary computation model to describe four classes of computation, three of which rely on humans in some role. For each class, a representative example is shown. The classification is in terms of the roles (innovation or selection) performed in each case by humans and computational processes. The table is a slice of a three-dimensional table; the third dimension specifies whether the organizational function is performed by humans or a computer, and here it is assumed to be performed by a computer.
| Selection agent \ Innovation agent | Computer | Human |
|---|---|---|
| Computer | Genetic algorithm | Computerized tests |
| Human | Interactive genetic algorithm | Human-based genetic algorithm |
Classes of human-based computation from this table can be referred to by two-letter abbreviations: HC, CH, and HH. Here the first letter identifies the type of agent performing innovation, and the second letter specifies the type of selection agent. In some implementations (a wiki being the most common example), human-based selection functionality might be limited; this can be indicated with a lowercase h.
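The classification can be restated as a small lookup keyed by which kind of agent performs innovation and which performs selection; the snippet below is purely illustrative and simply mirrors the table above.

```python
# Illustrative mapping of (innovation agent, selection agent) to the
# two-letter class and the representative example from the table above.
# "CC" is the fully computational case.
HBC_CLASSES = {
    ("computer", "computer"): ("CC", "Genetic algorithm"),
    ("human",    "computer"): ("HC", "Computerized tests"),
    ("computer", "human"):    ("CH", "Interactive genetic algorithm"),
    ("human",    "human"):    ("HH", "Human-based genetic algorithm"),
}

def classify(innovation_agent, selection_agent):
    return HBC_CLASSES[(innovation_agent, selection_agent)]

# Example: classify("human", "human") -> ("HH", "Human-based genetic algorithm")
```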
In different human-based computation projects, people are motivated by one or more of the following.
Many projects have explored various combinations of these incentives. See more information about the motivation of participants in these projects in Kosorukoff [35] and Von Hippel. [36] [37]
Viewed as a form of social organization, human-based computation often turns out, perhaps surprisingly, to be more robust and productive than traditional organizations. [38] The latter depend on obligations to maintain their more or less fixed structure and to remain functional and stable. Each of them is similar to a carefully designed mechanism with humans as its parts. However, this limits the freedom of their human employees and subjects them to various kinds of stress. Most people, unlike mechanical parts, find it difficult to adapt to fixed roles that best fit the organization. Evolutionary human-computation projects offer a natural solution to this problem: they adapt organizational structure to human spontaneity, accommodate human mistakes and creativity, and utilize both in a constructive way. This leaves their participants free from obligations without endangering the functionality of the whole, making people happier. There are still some challenging research problems to solve before the full potential of this idea can be realized.
The algorithmic outsourcing techniques used in human-based computation are much more scalable than the manual or automated techniques traditionally used to manage outsourcing. It is this scalability that allows the effort to be easily distributed among thousands (or more) of participants. It has been suggested that this mass outsourcing is sufficiently different from traditional small-scale outsourcing to merit a new name: crowdsourcing. [39] However, others have argued that crowdsourcing ought to be distinguished from true human-based computation. [40] Crowdsourcing does involve the distribution of computation tasks across a number of human agents, but Michelucci argues that this is not sufficient for it to be considered human computation. Human computation requires not just that a task be distributed across different agents, but also that the set of agents across which the task is distributed be mixed: some of them must be humans, but others must be traditional computers. It is this mixture of different types of agents in a computational system that gives human-based computation its distinctive character. Some instances of crowdsourcing meet this criterion, but not all of them do.
Human computation organizes workers through a task market with APIs, task prices, and software-as-a-service protocols that allow employers or requesters to receive data produced by workers directly into their IT systems. As a result, many employers attempt to manage workers automatically through algorithms rather than responding to workers on a case-by-case basis or addressing their concerns; responding to workers is difficult to scale to the employment levels enabled by human computation microwork platforms. [41] Workers on Amazon Mechanical Turk, for example, have reported that human computation employers can be unresponsive to their concerns and needs. [42]
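The requester side of such a task market can be pictured as a small client that posts task descriptions and later pulls completed results over HTTP, so that worker output flows straight into the requester's systems without case-by-case interaction. The endpoint names, fields, and authorization scheme below are hypothetical, not those of any particular platform; the sketch uses the third-party requests library.

```python
import time
import requests

# Hypothetical requester-side client for a microwork task market.
# The base URL, endpoints, and field names are illustrative only.
API = "https://tasks.example.com/v1"
HEADERS = {"Authorization": "Bearer <api-key>"}

def post_task(instructions, reward_usd, assignments):
    payload = {
        "instructions": instructions,   # what each worker is asked to do
        "reward": reward_usd,           # price per completed assignment
        "assignments": assignments,     # how many workers should complete it
    }
    resp = requests.post(f"{API}/tasks", json=payload, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["task_id"]

def collect_results(task_id, poll_seconds=60):
    # Poll until all assignments are complete; results arrive as data,
    # with no direct interaction with the workers who produced them.
    while True:
        resp = requests.get(f"{API}/tasks/{task_id}/results",
                            headers=HEADERS, timeout=10)
        resp.raise_for_status()
        body = resp.json()
        if body["status"] == "complete":
            return body["results"]
        time.sleep(poll_seconds)
```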
Human assistance can be helpful in solving any AI-complete problem, which by definition is a task that is infeasible for computers but feasible for humans. Specific practical applications include:
Human-based computation has been criticized as exploitative and deceptive with the potential to undermine collective action. [45] [46]
In social philosophy it has been argued that human-based computation is an implicit form of online labour. [47] The philosopher Rainer Mühlhoff distinguishes five different types of "machinic capture" of human microwork in "hybrid human-computer networks": (1) gamification, (2) "trapping and tracking" (e.g. CAPTCHAs or click-tracking in Google search), (3) social exploitation (e.g. tagging faces on Facebook), (4) information mining and (5) click-work (such as on Amazon Mechanical Turk). [48] [49] Mühlhoff argues that human-based computation often feeds into Deep Learning-based Artificial Intelligence systems, a phenomenon he analyzes as "human-aided artificial intelligence".
In the field of artificial intelligence (AI), tasks that are hypothesized to require artificial general intelligence to solve are informally known as AI-complete or AI-hard. Calling a problem AI-complete reflects the belief that it cannot be solved by a simple specific algorithm.
Computer science is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.
A captcha is a type of challenge–response test used in computing to determine whether the user is human in order to deter bot attacks and spam.
In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.
Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.
Collaborative intelligence characterizes multi-agent, distributed systems where each agent, human or machine, is autonomously contributing to a problem solving network. Collaborative autonomy of organisms in their ecosystems makes evolution possible. Natural ecosystems, where each organism's unique signature is derived from its genetics, circumstances, behavior and position in its ecosystem, offer principles for design of next generation social networks to support collaborative intelligence, crowdsourcing individual expertise, preferences, and unique contributions in a problem solving process.
Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.
In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, tune, or select a heuristic that may provide a sufficiently good solution to an optimization problem or a machine learning problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics sample a subset of solutions which is otherwise too large to be completely enumerated or otherwise explored. Metaheuristics may make relatively few assumptions about the optimization problem being solved and so may be usable for a variety of problems. Their use is always of interest when exact or other (approximate) methods are not available or are not expedient, either because the calculation time is too long or because, for example, the solution provided is too imprecise.
In evolutionary computation, a human-based genetic algorithm (HBGA) is a genetic algorithm that allows humans to contribute solution suggestions to the evolutionary process. For this purpose, an HBGA has human interfaces for initialization, mutation, and recombinant crossover. It may also have interfaces for selective evaluation. In short, an HBGA outsources the operations of a typical genetic algorithm to humans.
A multi-agent system is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning.
The expression computational intelligence (CI) usually refers to the ability of a computer to learn a specific task from data or experimental observation. Even though it is commonly considered a synonym of soft computing, there is still no commonly accepted definition of computational intelligence.
Lateral computing is a lateral thinking approach to solving computing problems. Lateral thinking has been made popular by Edward de Bono. This thinking technique is applied to generate creative ideas and solve problems. Similarly, by applying lateral-computing techniques to a problem, it can become much easier to arrive at a computationally inexpensive, easy to implement, efficient, innovative or unconventional solution.
Design automation usually refers to electronic design automation, or to design automation in the sense of a product configurator. Extending computer-aided design (CAD), automated design and computer-automated design (CAutoD) are more concerned with a broader range of applications, such as automotive engineering, civil engineering, composite material design, control engineering, dynamic system identification and optimization, financial systems, industrial equipment, mechatronic systems, steel construction, structural optimisation, and the invention of novel systems.
In numerical optimization, meta-optimization is the use of one optimization method to tune another optimization method. Meta-optimization is reported to have been used as early as the late 1970s by Mercer and Sampson for finding optimal parameter settings of a genetic algorithm.
Natural computing, also called natural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others.
Microwork is a series of many small tasks which together comprise a large unified project and which are completed by many people over the Internet. Microwork is considered the smallest unit of work in a virtual assembly line. The term most often describes tasks for which no efficient algorithm has been devised and which require human intelligence to complete reliably. It was coined in 2008 by Leila Chirayath Janah of Samasource.
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
This is a chronological table of metaheuristic algorithms that only contains fundamental computational intelligence algorithms. Hybrid algorithms and multi-objective algorithms are not listed in the table below.