Positive computing

Positive computing is a technological design perspective that embraces psychological well-being and ethical practice, aiming to build digital environments that support happier and healthier users. Positive computing develops approaches that integrate insights from psychology, education, neuroscience, and human–computer interaction (HCI) into technological development. [1] [2] Its purpose is to bridge the technology and mental health worlds; [3] computing and mental health workshops, for example, aim to bring people from both communities together. [4]

Everyone who uses technology is affected by how the tool is designed, and even though most technologies may have only small effects on any individual, those effects apply to huge populations. [5] [3]

Background

Well-being in psychology

Technology researchers typically focus on technical aspects, paying less attention to the ethical impact and ethical considerations of their products. [6] Researchers in other fields such as psychology and philosophy, however, have studied these matters extensively and have produced a wealth of methodologies for assessing users' well-being, including thousands of quality-of-life assessment instruments and validation studies. [7] [8]
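As a concrete illustration of such instruments, the WHO-5 Well-Being Index asks respondents to rate five positive statements about the past two weeks, each from 0 to 5, and rescales the raw sum to a 0–100 percentage. A minimal sketch of that scoring (the function name is ours):

```python
def who5_score(item_ratings):
    """Score the WHO-5 Well-Being Index.

    item_ratings: five integers in 0..5, one per questionnaire item,
    rating how often each positive statement applied over the past
    two weeks (0 = at no time, 5 = all of the time).
    Returns a percentage in 0..100; higher means better well-being.
    """
    if len(item_ratings) != 5 or any(not 0 <= r <= 5 for r in item_ratings):
        raise ValueError("WHO-5 expects five ratings, each between 0 and 5")
    raw = sum(item_ratings)  # raw score in 0..25
    return raw * 4           # conventional rescaling to 0..100

print(who5_score([3, 4, 2, 5, 3]))  # → 68
```

The point for positive computing is that such validated, easily computed measures give designers a quantitative target rather than an intuition.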

Positive computing draws many ideas from positive psychology, a domain of psychology that focuses on societal well-being and improving quality of life.

Well-being in technology and technology research

The recognition of the impact of technology and inventions on people's lives [5] has moved technology professionals to rethink the tools we use and to seek a realignment of companies' goals with the social good. Exemplary of this disposition is Google's famous motto, "don't be evil." [9]

Technologies can be loosely classified into four groups according to their influence on psychological well-being. [3]

What is positive

In their seminal book on positive computing, [10] Calvo and Peters list the following positive aspects to aim for when designing technologies: positive emotions, motivation, engagement, flow, self-awareness, self-compassion, mindfulness, empathy, compassion, and altruism. An encompassing term for general human welfare and happiness is eudaimonia, which is extensively studied in positive psychology [11] and examined along dimensions such as self-discovery, a sense of purpose and meaning in life, involvement in activities, investment in the pursuit of excellence, and the perception of one's own potential. [12]

Autonomy, competence and relatedness

According to self-determination theory (SDT), there are three basic psychological needs: autonomy, competence, and relatedness. Briefly, these are the feeling of psychological liberty and self-motivation, the feeling of control and mastery, and the feeling of connection to others.

Solutions

Design to address the basic psychological needs

The three basic psychological needs mentioned above are measurable and well defined, which makes them excellent design targets. [13]

To support autonomy, a design needs to provide control through multiple options, offer meaningful rationales for choices, enable customization of the experience, and avoid controlling language. [14] [13]

Competence is also well studied in game design; the three main design factors supporting it are an appropriate level of challenge, the presence of positive feedback, and opportunities to learn and master the task at hand. [14] [15] [13]

Relatedness-supportive environments need to provide meaningful and responsive interactions with others, respect human emotions, avoid disrupting social relationships, and create opportunities for social connection. [16] [13]
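The guidelines in the three paragraphs above can be condensed into a simple design-review checklist. A minimal sketch, assuming a yes/no review per item (the item wording and scoring are illustrative, not a validated instrument):

```python
# Hypothetical design-review checklist derived from the SDT-based
# guidelines above; each item is answered True/False for a product.
CHECKLIST = {
    "autonomy": [
        "offers control over multiple options",
        "gives meaningful rationales behind choices",
        "lets users customize the experience",
        "avoids controlling language",
    ],
    "competence": [
        "presents challenges at an appropriate level",
        "gives positive feedback",
        "offers opportunities to learn and master tasks",
    ],
    "relatedness": [
        "provides meaningful, responsive interactions with others",
        "respects human emotions",
        "avoids disrupting social relationships",
        "creates opportunities for social connection",
    ],
}

def review(answers):
    """answers: maps each checklist item to True/False.
    Returns, per need, the fraction of items the design satisfies."""
    return {
        need: sum(answers[item] for item in items) / len(items)
        for need, items in CHECKLIST.items()
    }

# A design that satisfies every item scores 1.0 on each need:
all_yes = {item: True for items in CHECKLIST.values() for item in items}
print(review(all_yes))
```

A per-need score like this makes it visible when a design serves, say, competence well while neglecting relatedness.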

Responsible design process

Infographic: the responsible design process in its five main components: discover, define, develop, deliver, evaluate.

Responsible design, not to be confused with responsive design, comes from integrating ethical analysis and well-being-supportive design into engineering practice. [17] In particular, it extends the double diamond design process model with a post-launch evaluation phase. The responsible design process thus consists of five stages: discover, define, develop, deliver, and evaluate. [18]
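The five stages named in the infographic can be sketched as a simple ordered pipeline; this is a hypothetical skeleton for illustration, not an API prescribed by the framework:

```python
from enum import Enum

class Stage(Enum):
    # The double diamond's four stages, plus the post-launch
    # evaluation phase added by the responsible design process.
    DISCOVER = 1
    DEFINE = 2
    DEVELOP = 3
    DELIVER = 4
    EVALUATE = 5  # post-launch: did the product actually support well-being?

def next_stage(stage):
    """Advance through the process; EVALUATE wraps back to DISCOVER,
    since evaluation findings can trigger a new design cycle."""
    order = list(Stage)  # Enum iteration preserves definition order
    return order[(order.index(stage) + 1) % len(order)]

print(next_stage(Stage.DELIVER).name)   # EVALUATE
print(next_stage(Stage.EVALUATE).name)  # DISCOVER
```

Modelling evaluation as a stage that feeds back into discovery captures the key difference from a plain double diamond: the process does not end at launch.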

Positive computing in artificial intelligence

The rise of artificial intelligence

Over the past half-century, artificial intelligence has grown rapidly in computational power, applications, and mainstream usage. As Zhongzhi Shi writes, and many others have observed, "Artificial Intelligence attempts simulation, extension and expansion of human intelligence using artificial methodology and technology." [19]

Superintelligence possibility

A possible outcome of future computer science and computer engineering research is an intelligence explosion. I. J. Good described the first superintelligent machine as "the last invention that man need ever make," because of the vast influence it would have on our species. [20] Accordingly, Nick Bostrom, in his book Superintelligence: Paths, Dangers, Strategies, proposes the common good principle, according to which superintelligence should be developed only for the benefit of all humanity and in the service of widely shared ethical ideals. [21]

Potential solutions

Malo Bourgon, COO of MIRI, has stated that the AI community should adopt best practices from the computer security community when testing systems for safety and security before releasing them for wide adoption. [22] Government legislation, business practices, and broader education about AI and its consequences for society have also been proposed. [23] These solutions apply the principles of positive computing to AI, helping to ensure that it serves humanity in a positive way.

References

Notes

  1. Calvo, Rafael A.; Peters, Dorian. "Introduction to Positive Computing: Technology that Fosters Wellbeing". Conference on Human Factors in Computing Systems. doi:10.1145/2702613.2706674.
  2. "Positive Computing". Archived from the original on 28 June 2021. Retrieved 15 June 2021.
  3. John Torous (19 September 2016). "Positive Computing and Designing for Mental Health". Psychiatric Times (Podcast). MJH Life Sciences. Retrieved 15 June 2021.
  4. "Computing and Mental Health | Symposium at CHI 2019". Archived from the original on 24 June 2021. Retrieved 18 June 2021.
  5. Jasanoff, Sheila (30 August 2016). The Ethics of Invention: Technology and the Human Future (First ed.). New York, NY: W. W. Norton & Company. ISBN 978-0-393-07899-2. Archived from the original on 18 June 2021. Retrieved 17 June 2021.
  6. Wolpe, Paul Root (2006). "Reasons Scientists Avoid Thinking about Ethics". Cell. 125 (6): 1023–1025. doi: 10.1016/j.cell.2006.06.001 . ISSN   0092-8674. PMID   16777590. S2CID   33170314.
  7. Pequeno, Nila Patrícia Freire; Cabral, Natália Louise de Araújo; Marchioni, Dirce Maria; Lima, Severina Carla Vieira Cunha; Lyra, Clélia de Oliveira (2020). "Quality of life assessment instruments for adults: a systematic review of population-based studies". Health and Quality of Life Outcomes. 18 (1): 208. doi: 10.1186/s12955-020-01347-7 . ISSN   1477-7525. PMC   7329518 . PMID   32605649.
  8. "Definition, Measures, Applications, & Facts". Encyclopedia Britannica. Archived from the original on 2021-01-29. Retrieved 2021-03-06.
  9. Calvo & Peters 2014, Introduction.
  10. Calvo & Peters 2014.
  11. Nyabul, P. O.; Situma, J. W. (2014). "The Meaning of Eudemonia in Aristotle's Ethics". International Journal. 2 (3): 65–74.
  12. Kjell, Oscar N. E. (2011). "Sustainable Well-Being: A Potential Synergy between Sustainability and Well-Being Research". Review of General Psychology. 15 (3): 255–266. doi:10.1037/a0024603. ISSN   1089-2680. S2CID   54685023. Archived from the original on 2023-08-13. Retrieved 2023-08-13.
  13. Peters, Dorian (2020-08-06). "3 Keys to meaningful engagement & support for wellbeing in tech". Ethics of Digital Experience - Medium. Archived from the original on 2021-06-24. Retrieved 18 June 2021.
  14. Peng, Wei; Lin, Jih-Hsuan; Pfeiffer, Karin A.; Winn, Brian (2012). "Need Satisfaction Supportive Game Features as Motivational Determinants: An Experimental Study of a Self-Determination Theory Guided Exergame". Media Psychology. 15 (2): 175–196. doi:10.1080/15213269.2012.673850. ISSN 1521-3269. S2CID 14534575.
  15. Ryan, Richard M.; Rigby, C. Scott; Przybylski, Andrew (2006). "The Motivational Pull of Video Games: A Self-Determination Theory Approach". Motivation and Emotion. 30 (4): 344–360. doi:10.1007/s11031-006-9051-8. ISSN   0146-7239. S2CID   53574707.
  16. Burke, Moira; Marlow, Cameron; Lento, Thomas (2010). "Social network activity and social well-being". Proceedings of the 28th international conference on Human factors in computing systems - CHI '10. p. 1909. doi:10.1145/1753326.1753613. ISBN   9781605589299. S2CID   207178564.
  17. "Responsible Design Process". Positive Computing. Archived from the original on 24 June 2021. Retrieved 18 June 2021.
  18. Peters, Dorian; Vold, Karina; Robinson, Diana; Calvo, Rafael A. (2020). "Responsible AI—Two Frameworks for Ethical Design Practice". IEEE Transactions on Technology and Society. 1 (1): 34–47. doi:10.1109/TTS.2020.2974991. hdl: 10044/1/77602 . ISSN   2637-6415. S2CID   212704361.
  19. Shi, Zhongzhi (1 August 2006). Proceedings of 2006 International Conference on Artificial Intelligence. Beijing: Beijing University of Posts and Telecommunications Press.
  20. Good, I. J. "Speculations Concerning the First Ultraintelligent Machine", Advances in Computers, vol. 6, 1965. Archived May 1, 2012, at the Wayback Machine
  21. Bostrom, Nick (2014). "14. The strategic picture". Superintelligence : paths, dangers, strategies (First ed.). Oxford, United Kingdom: Oxford University Press. ISBN   978-0199678112.
  22. "IEEE SA - Ethically Aligned Design, Version 1, Translations and Reports". IEEE. Archived from the original on 24 June 2021. Retrieved 18 June 2021.
  23. Kaplan, Andreas; Haenlein, Michael (January 2020). "Rulers of the world, unite! The challenges and opportunities of artificial intelligence". Business Horizons. 63 (1): 37–50. doi:10.1016/j.bushor.2019.09.003.


Bibliography

Calvo, Rafael A.; Peters, Dorian (2014). Positive Computing: Technology for Wellbeing and Human Potential. Cambridge, MA: MIT Press.
