Polanyi's paradox

Professor Michael Polanyi on a hike in England

Polanyi's paradox, named in honour of the British-Hungarian philosopher Michael Polanyi, is the theory that human knowledge of how the world functions and of our own capabilities is, to a large extent, beyond our explicit understanding. The theory was articulated by Polanyi in his 1966 book The Tacit Dimension, and the economist David Autor gave it its name in his 2014 research paper "Polanyi's Paradox and the Shape of Employment Growth". [1]

Summarised in the slogan "We can know more than we can tell", Polanyi's paradox mainly describes the cognitive phenomenon that there are many tasks which we human beings understand intuitively how to perform, but whose rules or procedures we cannot verbalize. [2]

This "self-ignorance" is common to many human activities, from driving a car in traffic to recognizing a face. [3] As Polanyi argues, when engaging in these tasks humans rely on tacit knowledge, which is difficult to express adequately in words. [2] Polanyi's paradox is widely considered to identify a major obstacle in the fields of AI and automation, since programming an automated task or system is difficult unless a complete and fully specified description of the procedure is available. [4]

Origins

The British-Hungarian philosopher Michael Polanyi studied the causes of the human ability to acquire knowledge that cannot be explained through logical deduction. In his work The Tacit Dimension (1966), Polanyi explored the 'tacit' dimension of human knowledge and developed the concept of "tacit knowledge", as opposed to "explicit knowledge". [2]

Tacit knowledge can be defined as knowledge that people learn from experience and internalize unconsciously, and which is therefore difficult to articulate and codify in a tangible form. Explicit knowledge, its opposite, is knowledge that can be readily verbalized and formalized. [2] Tacit knowledge is largely acquired through implicit learning, the process by which information is learned independently of the subject's awareness. For example, native speakers tacitly acquire their language in early childhood not by consciously studying specific grammar rules (explicit knowledge), but through extensive exposure to day-to-day communication. [5] Moreover, people can transfer their tacit knowledge only to a limited extent, through close interaction (sharing experiences with one another or observing others' behaviour). A certain level of trust needs to be established between individuals before tacit knowledge can be captured. [6]

Tacit knowledge comprises a range of conceptual and sensory information that is strongly personal and subjective. It is implicitly reflected in human action; as Polanyi argued, "tacit knowledge dwells in our awareness". [2] People's skills, experiences, insight, creativity and judgement all fall into this dimension. [7] Tacit knowledge can also be described as know-how, as distinct from know-that, or knowledge of facts. [6] Before Polanyi, Gilbert Ryle published a paper in 1945 drawing the distinction between knowing-that (knowledge of propositions) and knowing-how. According to Ryle, know-how is instinctive, intrinsic knowledge ingrained in an individual's capabilities. [8]

Since tacit knowledge cannot be stated in propositional or formal form, Polanyi summarises this inability to articulate in the slogan "We can know more than we can tell". [2] Daily activities based on tacit knowledge include recognizing a face, driving a car, riding a bike, writing a persuasive paragraph, and developing a hypothesis to explain a poorly understood phenomenon. [7] Take facial recognition as an illustration: we can pick an acquaintance's face out of a million others, yet we are not conscious of our knowledge of that face. It would be difficult for us to describe the precise arrangement of the eyes, nose and mouth, since we memorize the face unconsciously. [4]

As a prelude to The Tacit Dimension, in his book Personal Knowledge (1958), Polanyi claims that all knowing is personal, emphasizing the profound effects of personal feelings and commitments on the practice of science and knowledge. Arguing against the then-dominant empiricist view that minds and experiences are reducible to sense data and collections of rules, he advocates a post-positivist approach that recognizes that human knowledge often exceeds what can be explicitly expressed. Any attempt to specify tacit knowing only leads to self-evident axioms that cannot tell us why we should accept them. [9]

Implications

Polanyi's observation has deep implications for the AI field, since the paradox he identified, that "our tacit knowledge of how the world works often exceeds our explicit understanding", accounts for many of the challenges of computerization and automation over the past five decades. [1] Automation requires a high level of exactness to tell the computer what is supposed to be done, while tacit knowledge cannot be conveyed in propositional form. Machines therefore fail in many cases: they have explicit knowledge (raw data) but do not know how to use that knowledge to understand the task as a whole. [6] This discrepancy between human reasoning and AI learning algorithms makes it difficult to automate tasks that demand common sense, flexibility, adaptability and judgment — human intuitive knowledge. [4]

The MIT economist David Autor is one of the leading sceptics about the prospects for machine intelligence. Despite the exponential growth in computational resources and the relentless pace of automation since the 1990s, Autor argues, Polanyi's paradox prevents modern algorithms from replacing human labor in a range of skilled jobs; the extent to which machines will substitute for human labor has therefore been overestimated by journalists and expert commentators. [1] Although contemporary computer science strives to overcome Polanyi's paradox, the ever-changing, unstructured nature of some activities currently presents intimidating challenges for automation. Despite years of effort and billions invested in the development of self-driving cars and cleaning robots, these machine learning systems continue to struggle with low adaptability and interpretability, from self-driving cars' inability to make an unexpected detour to cleaning robots' vulnerability to unmonitored pets or children. [10] For self-driving cars to function optimally, current road infrastructure would instead have to change significantly, minimizing the need for human capabilities in the driving process. [11]

The increasing occupational polarisation of the past few decades — the growth of both high-paid, high-skill abstract jobs and lower-paid, low-skill manual jobs — is a manifestation of Polanyi's paradox. According to Autor, two types of tasks have proven stubbornly challenging for artificial intelligence (AI): abstract tasks that require problem-solving capabilities, intuition, creativity and persuasion on the one hand, and manual tasks demanding situational adaptability, visual recognition, language understanding, and in-person interaction on the other. Abstract tasks are characteristic of professional, managerial, and technical occupations, while service and laborer occupations involve many manual tasks (e.g. cleaning, lifting and throwing). These jobs tend to be complemented by machines rather than substituted for. [1]

By contrast, as the price of computing power declines, computers extensively substitute for routine tasks that can be codified into clear sets of instructions, resulting in a dramatic decline in employment in routine task-intensive jobs. [1] This polarization has produced a shrinking middle class across industrialized economies, since many middle-income occupations in sales, office and administrative work, and repetitive production work are routine task-intensive. Moreover, the consequent growth in income inequality and wealth disparity has recently emerged as a major socio-economic issue in developed countries. [12]

Criticism

Some technological optimists argue that recent advances in machine learning have overcome Polanyi's paradox. Instead of relying on programmers' algorithms to instruct them in human knowledge, computer systems can now learn tacit rules on their own from context, abundant data, and applied statistics. Since machines can infer from examples, without human assistance, the tacit knowledge that human beings draw upon, they are no longer limited by rules that humans apply tacitly but do not explicitly understand. [13]
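The optimists' claim that a machine can infer a rule it was never told can be illustrated with a minimal sketch (a hypothetical toy, not drawn from the cited sources): a perceptron is given only labelled examples, never the underlying rule, yet it recovers a usable version of that rule from the data alone.

```python
# Toy illustration: a perceptron learns a rule it is never explicitly
# told -- here "label is 1 iff x1 + x2 > 1" -- purely from labelled examples.
import random

random.seed(1)
points = [(random.random(), random.random()) for _ in range(200)]
examples = [((x1, x2), int(x1 + x2 > 1.0)) for x1, x2 in points]

w1 = w2 = b = 0.0
lr = 0.1
for _ in range(100):                      # training passes over the data
    for (x1, x2), y in examples:
        pred = int(w1 * x1 + w2 * x2 + b > 0)
        err = y - pred                    # classic perceptron update rule
        w1 += lr * err * x1
        w2 += lr * err * x2
        b += lr * err

# The learned weights encode the "tacit" rule, though no line of this
# program ever states it.
accuracy = sum(int(w1 * x1 + w2 * x2 + b > 0) == y
               for (x1, x2), y in examples) / len(examples)
print(round(accuracy, 2))
```

The design point mirrors the paragraph above: the rule exists only implicitly, as learned weights, which is exactly why such systems sidestep the need for an explicit, verbalized procedure.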

Lee Sedol (B) vs AlphaGo (W) - Game 1

The AlphaGo program, built by the Google subsidiary DeepMind, is an example of how advances in AI have allowed mindless machines to perform very well at tasks based on tacit knowledge. In a 2016 match in the strategy game Go, AlphaGo defeated one of the world's top Go players, Lee Se-dol, four games to one. The DeepMind team employed an approach known as deep learning to build human-like judgment into the system, which can discover complex winning strategies by analyzing large amounts of data from previous Go matches. [3]

On the other hand, as Nicholas Carr argues, the assumption that computers must reproduce the tacit knowledge humans would apply to perform complicated tasks is itself open to doubt. When performing tasks, it is not at all necessary for systems and machines to follow the rules that human beings follow. The goal in having a machine perform a task is to replicate our outcomes for practical purposes, not our means. [14]

Jerry Kaplan, a Silicon Valley entrepreneur and AI expert, also illustrates this point in his book Humans Need Not Apply by discussing four resources and capabilities required to accomplish any given task: awareness, energy, reasoning and means. Humans' biological system (the brain-body complex) naturally integrates all four of these properties, while in the electronic domain machines can be given these abilities through developments in robotics, machine learning, and perception systems. For example, data provided by a wide network of sensors enables an AI to perceive various aspects of the environment and respond instantly in chaotic and complex real-world situations (awareness); orders and signals for actuating devices can be centralised and managed in server clusters or in the 'cloud' (reasoning). [15] Kaplan's argument directly supports the proposition that Polanyi's paradox no longer impedes further automation, whether of routine jobs or manual jobs. As Kaplan puts it, "Automation is blind to the colour of your collar." [15]

One example that confirms Kaplan's argument is the introduction in 2017 of Cloud AutoML, an automated system intended to help any business design AI software, by the Google Brain AI research group. AutoML's learning algorithms automate the process of building machine-learning models for a particular task, aiming to democratize AI to the largest possible community of developers and businesses. According to Google's CEO, Cloud AutoML has taken over some of the work of programmers (an "abstract task", in Autor's terms) and thereby offered one solution to the shortage of machine-learning experts. [16]
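What "automating the process of building machine-learning models" means can be sketched in miniature (a hypothetical stdlib-only toy, not Google's Cloud AutoML API): candidate model families and hyperparameters are enumerated and scored automatically, and the best performer is kept, with no human choosing the model.

```python
# Hypothetical toy AutoML loop: the model is selected by search, not by
# a human expert. All model families and data here are invented.
import random

random.seed(0)

# Synthetic 1-D task: true label is 1 when x > 0.6, with 10% label noise.
xs = [random.random() for _ in range(200)]
data = [(x, int(x > 0.6) if random.random() > 0.1 else 1 - int(x > 0.6))
        for x in xs]
train, test = data[:150], data[150:]

def threshold_model(t):
    """Model family 1: predict 1 when x exceeds threshold t."""
    return lambda x: int(x > t)

def constant_model(label):
    """Model family 2: always predict the same label."""
    return lambda x: label

def accuracy(model, rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

# The "search space": model families crossed with hyperparameters.
candidates = [threshold_model(t) for t in (0.2, 0.4, 0.6, 0.8)]
candidates += [constant_model(0), constant_model(1)]

# Automated selection: keep whichever candidate scores best on training data.
best = max(candidates, key=lambda m: accuracy(m, train))
print(round(accuracy(best, test), 2))
```

Real AutoML systems search far larger spaces (architectures, feature pipelines, training schedules), but the loop has the same shape: enumerate, score, keep the winner.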

Moravec's paradox

Moravec's paradox claims that, compared with sophisticated tasks demanding high-level reasoning, it is harder for computers to master the low-level physical and cognitive skills that are natural and easy for humans to perform. Examples include natural language processing and dextrous physical movement (e.g. running over rough terrain). [17]

Robotics experts have, accordingly, found it difficult to automate the skills of even the least-trained manual worker, since these jobs require perception and mobility (tacit knowledge). [17] In the words of the cognitive scientist Steven Pinker from his book The Language Instinct, "The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard." [18]

Echoing David Autor's discussion of job polarization, Pinker maintains that the appearance of the new generation of intelligent machines would place stock analysts, petrochemical engineers and parole board members in danger of being replaced. Gardeners, receptionists, and cooks are, by contrast, currently secure. [18]

Plato's Problem

Plato's Problem is the term given by Noam Chomsky to "the problem of explaining how we can know so much" given our limited experience.

Poverty of the stimulus (POS)

Poverty of the stimulus (POS) is the argument from linguistics that children are not exposed to rich enough data within their linguistic environments to acquire every feature of their language.

Meno’s Paradox

Meno’s Paradox can be formulated as follows:

  1. If you know what you’re looking for, inquiry is unnecessary.
  2. If you don’t know what you’re looking for, inquiry is impossible.
  3. Therefore, inquiry is either unnecessary or impossible.

The argument rests on an implicit premise: either you know what you're looking for, or you don't.
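The three numbered steps, together with that implicit excluded-middle premise, can be written out as a short formal derivation (a sketch in Lean 4; the propositional symbols K, U and I are labels introduced here, not part of the original formulation):

```lean
-- K : "you know what you're looking for"
-- U : "inquiry is unnecessary"
-- I : "inquiry is impossible"
-- Premises 1 and 2 are hypotheses; the implicit premise K ∨ ¬K is
-- supplied by classical excluded middle (Classical.em).
theorem meno (K U I : Prop) (h1 : K → U) (h2 : ¬K → I) : U ∨ I :=
  (Classical.em K).elim
    (fun hk  => Or.inl (h1 hk))   -- case: you do know → unnecessary
    (fun hnk => Or.inr (h2 hnk))  -- case: you don't → impossible
```

The derivation makes visible that the paradox is valid only granted the hidden disjunctive premise, which is the step Plato's resolution (partial knowledge) denies.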

The Learning Paradox

The Learning Paradox (Fodor 1980) holds that knowledge cannot be both genuinely new and learned. If new knowledge can be expressed in terms of old knowledge, it is not new; if it cannot be expressed in terms of old knowledge, it cannot be understood. Therefore, learning something genuinely novel is impossible, and all essential structures must be present at birth.

References

  1. Autor, David (2014). Polanyi's Paradox and the Shape of Employment Growth. NBER Working Paper Series. Cambridge, MA: National Bureau of Economic Research. pp. 1–48.
  2. Polanyi, Michael (May 2009). The Tacit Dimension. Chicago: University of Chicago Press. pp. 1–26. ISBN 9780226672984. OCLC 262429494.
  3. McAfee, Andrew; Brynjolfsson, Erik (16 March 2016). "Where Computers Defeat Humans, and Where They Can't". The New York Times. Archived from the original on 16 October 2018. Retrieved 2018-10-04.
  4. Walsh, Toby (September 7, 2017). Android Dreams: The Past, Present and Future of Artificial Intelligence. London: C Hurst & Co Publishers Ltd. pp. 89–97. ISBN 9781849048712. OCLC 985805795.
  5. Reber, Arthur (September 1989). "Implicit Learning and Tacit Knowledge". Journal of Experimental Psychology: General. 118 (3): 219–235. CiteSeerX 10.1.1.207.6707. doi:10.1037/0096-3445.118.3.219.
  6. Asanarong, Thanathorn; Jeon, Sowon; Ren, Yuanlin; Yeo, Christopher (18 December 2018). "Creating a Knowledge Management Culture for Ganga River". In Wu Xun; Robert James Wasson; Ora-Orn Poocharoen (eds.), Ganga Rejuvenation: Governance Challenges and Policy Options. New Jersey: World Scientific. p. 349. ISBN 9789814704588. OCLC 1013819475.
  7. Chugh, Ritesh (2015). "Do Australian Universities Encourage Tacit Knowledge Transfer". Proceedings of the 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management. Lisbon, PT: 128–135.
  8. Ryle, Gilbert (1945). "Knowing How and Knowing That: The Presidential Address". Proceedings of the Aristotelian Society. 46: 1–16. doi:10.1093/aristotelian/46.1.1. JSTOR 4544405.
  9. Polanyi, Michael (1974). Personal Knowledge: Towards a Post-Critical Philosophy. Chicago: University of Chicago Press. ISBN 978-0226672885. OCLC 880960082.
  10. Prassl, Jeremias (2018). Humans as a Service: The Promise and Perils of Work in the Gig Economy. Oxford: Oxford University Press. pp. 138–139. ISBN 9780198797012. OCLC 1005117556.
  11. Badger, Emily (January 15, 2015). "5 Confounding Questions that Hold the Key to the Future of Driverless Cars". Washington Post. Archived from the original on August 25, 2018. Retrieved 2018-10-31.
  12. Vardi, Moshe (February 2015). "Is Information Technology Destroying the Middle Class?". Communications of the ACM. 58 (2): 5. doi:10.1145/2666241.
  13. Susskind, Daniel (2017). Re-Thinking the Capabilities of Machines in Economics (PDF). University of Oxford Department of Economics Discussion Paper Series. pp. 1–14. Archived (PDF) from the original on 2017-06-26. Retrieved 2018-10-04.
  14. Carr, Nicholas (September 29, 2014). The Glass Cage: Automation and Us (First ed.). New York: W. W. Norton & Company. pp. 11–12. ISBN 9780393240764. OCLC 870098283.
  15. Kaplan, Jerry (August 4, 2015). Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence. New Haven: Yale University Press. pp. 41–43, 145. ISBN 9780300223576. OCLC 907143085.
  16. Simonite, Tom (May 17, 2017). "Google's CEO is excited about seeing AI take over some work of his AI experts". MIT Technology Review. Retrieved 2018-10-30.
  17. Brynjolfsson, Erik; McAfee, Andrew (January 20, 2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (First ed.). New York: W. W. Norton & Company. pp. 47–50. ISBN 9780393239355. OCLC 867423744.
  18. Pinker, Steven (1994). The Language Instinct: How the Mind Creates Language. New York: William Morrow and Company. ISBN 978-0688121419. OCLC 28723210.