Value sensitive design (VSD) is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner. [1] [2] VSD originated within the fields of information systems design [3] and human-computer interaction [4] to address design issues within those fields by emphasizing the ethical values of direct and indirect stakeholders. It was developed by Batya Friedman and Peter Kahn at the University of Washington starting in the late 1980s and early 1990s. In 2019, Friedman and David Hendry published a book on the topic, "Value Sensitive Design: Shaping Technology with Moral Imagination". [5] VSD takes human values into account in a well-defined manner throughout the whole design process. [6] Designs are developed through three types of investigation: conceptual, empirical, and technical. [7] These investigations are intended to be iterative, allowing the designer to modify the design continuously. [8]
The VSD approach is often described as fundamentally predicated on its ability to be modified depending on the technology, value(s), or context of use. [9] [10] One example of a modified VSD approach is Privacy by Design, which is concerned with respecting the privacy of personally identifiable information in systems and processes. [11] Care-Centered Value Sensitive Design (CCVSD), proposed by Aimee van Wynsberghe, is another example, modifying the VSD approach to account for the values central to care in the design and development of care robots. [12]
VSD uses an iterative design process that involves three types of investigations: conceptual, empirical and technical. Conceptual investigations aim at understanding and articulating the various stakeholders of the technology, as well as their values and any value conflicts that might arise for these stakeholders through the use of the technology. Empirical investigations are qualitative or quantitative design research studies used to inform the designers' understanding of the users' values, needs, and practices. Technical investigations can involve either analysis of how people use related technologies, or the design of systems to support values identified in the conceptual and empirical investigations. [13] Friedman and Hendry describe seventeen methods, including each method's main purpose, an overview of its function, and key references: [5]
- Stakeholder Analysis (Purpose: Stakeholder identification and legitimation): Identification of individuals, groups, organizations, institutions, and societies that might reasonably be affected by the technology under investigation and in what ways. Two overarching stakeholder categories: (1) those who interact directly with the technology, direct stakeholders; and (2) those indirectly affected by the technology, indirect stakeholders. [14] [15] [16] [17]
- Stakeholder Tokens (Purpose: Stakeholder identification and interaction): Playful and versatile toolkit for identifying stakeholders and their interactions. Stakeholder tokens facilitate identifying stakeholders, distinguishing core from peripheral stakeholders, surfacing excluded stakeholders, and articulating relationships among stakeholders. [18]
- Value Source Analysis (Purpose: Identify value sources): Distinguish among the explicitly supported project values, designers’ personal values, and values held by other direct and indirect stakeholders. [19]
- Co-evolution of Technology and Social Structure (Purpose: Expand design space): Expanding the design space to include social structures integrated with technology may yield new solutions not possible when considering the technology alone. As appropriate, engage with the design of both technology and social structure as part of the solution space. Social structures may include policy, law, regulations, organizational practices, social norms, and others. [20] [21]
- Value Scenario (Purpose: Values representation and elicitation): Narratives, comprising stories of use, intended to surface human and technical aspects of technology and context. Value scenarios emphasize implications for direct and indirect stakeholders, related key values, widespread use, indirect impacts, longer-term use, and similar systemic effects. [15] [16] [21]
- Value Sketch (Purpose: Values representation and elicitation): Sketching activities as a way to tap into stakeholders’ non-verbal understandings, views, and values about a technology. [22] [23]
- Value-oriented Semi-structured Interview (Purpose: Values elicitation): Semi-structured interview questions as a way to tap into stakeholders’ understandings, views and values about a technology. Questions typically emphasize stakeholders’ evaluative judgments (e.g., all right or not all right) about a technology as well as rationale (e.g., why?). Additional considerations introduced by the stakeholder are pursued. [24] [10] [19] [25] [16] [26]
- Scalable Information Dimensions (Purpose: Values elicitation): Sets of questions constructed to tease apart the impact of pervasiveness, proximity, granularity of information, and other scalable dimensions. Can be used in interview or survey formats. [24] [14] [27]
- Value-oriented Coding Manual (Purpose: Values analysis): Hierarchically structured categories for coding qualitative responses to the value representation and elicitation methods. Coding categories are generated from the data and a conceptualization of the domain. Each category contains a label, definition, and typically up to three sample responses from empirical data. Can be applied to oral, written, and visual responses.
- Value-oriented Mockup, Prototype or Field Deployment (Purpose: Values representation and elicitation): Development, analysis, and co-design of mockups, prototypes and field deployments to scaffold the investigation of value implications of technologies that are yet to be built or widely adopted. Mock-ups, prototypes or field deployments emphasize implications for direct and indirect stakeholders, value tensions, and technology situated in human contexts. [25] [28] [29] [16] [30]
- Ethnographically Informed Inquiry regarding Values and Technology (Purpose: Values, technology and social structure framework and analysis): Framework and approach for data collection and analysis to uncover the complex relationships among values, technology and social structure as those relationships unfold. Typically involves in-depth engagement in situated contexts over longer periods of time. [31]
- Model for Informed Consent Online (Purpose: Design principles and values analysis): Model with corresponding design principles for considering informed consent in online contexts. The construct of informed encompasses disclosure and comprehension; that of consent encompasses voluntariness, competence, and agreement. The model also addresses how informed consent can be implemented in online systems. [32]
- Value Dams and Flows (Purpose: Values analysis): Analytic method to reduce the solution space and resolve value tensions among design choices. First, design options that even a small percentage of stakeholders strongly object to are removed from the design space—the value dams. Then of the remaining design options, those that a good percentage of stakeholders find appealing are foregrounded in the design—the value flows. Can be applied to the design of both technology and social structures. [16] [21] [29]
- Value Sensitive Action-Reflection Model (Purpose: Values representation and elicitation): Reflective process for introducing value sensitive prompts into a co-design activity. Prompts can be designer or stakeholder generated. [30]
- Multi-lifespan timeline (Purpose: Priming longer-term and multi-generational design thinking): Priming activity for longer-term design thinking. Multi-lifespan timelines prompt individuals to situate themselves in a longer time frame relative to the present, with attention to both societal and technological change. [33]
- Multi-lifespan co-design (Purpose: Longer-term design thinking and envisioning): Co-design activities and processes that emphasize longer-term anticipatory futures with implications for multiple and future generations. [33]
- Envisioning Cards (Purpose: Value sensitive design toolkit for industry, research, and educational practice): Value sensitive envisioning toolkit. A set of 32 cards, the Envisioning Cards build on four criteria: stakeholders, time, values, and pervasiveness. Each card contains on one side a title and an evocative image related to the card theme; on the flip side, the envisioning criterion, card theme, and a focused design activity. Envisioning Cards can be used for ideation, co-design, heuristic critique, and evaluation. [34] [35] [30] The second edition of the Envisioning Cards was published online under the CC BY-NC-ND 4.0 license in 2024. [36] This edition brings together under one cohesive design the original set of 32 cards published in 2011, with the four suits of Stakeholders, Time, Values, and Pervasiveness, and the supplementary set of 13 cards published in 2018, which added the Multi-lifespan suit.
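Among the methods above, Value Dams and Flows has a clearly algorithmic core: remove options that even a small fraction of stakeholders strongly object to, then foreground the options that many stakeholders find appealing. The following sketch illustrates that filtering logic in Python; the thresholds, option names, and percentages are invented for illustration and are not prescribed by VSD itself.

```python
# Illustrative sketch of the "value dams and flows" analysis.
# Thresholds and example data are assumptions, not part of the VSD method.

def value_dams_and_flows(options, objections, approvals,
                         dam_threshold=0.10):
    """Drop 'value dams' (options strongly objected to by even a small
    fraction of stakeholders), then order the survivors by stakeholder
    approval so the 'value flows' surface first."""
    survivors = [o for o in options if objections.get(o, 0.0) < dam_threshold]
    return sorted(survivors, key=lambda o: approvals.get(o, 0.0), reverse=True)

# Hypothetical design options for a workplace awareness system:
options = ["always-on camera", "opt-in camera", "status-only indicator"]
# Fraction of stakeholders strongly objecting to each option:
objections = {"always-on camera": 0.25, "opt-in camera": 0.05,
              "status-only indicator": 0.02}
# Fraction of stakeholders finding each option appealing:
approvals = {"always-on camera": 0.40, "opt-in camera": 0.70,
             "status-only indicator": 0.55}

print(value_dams_and_flows(options, objections, approvals))
```

Here "always-on camera" is removed as a value dam despite being appealing to many, reflecting the method's asymmetry: strong objections from a few outweigh approval from many.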
VSD is not without its criticisms. Two commonly cited criticisms target the value heuristics on which VSD is built. [37] [38] These critiques have been forwarded by Le Dantec et al. [39] and Manders-Huits. [40] Le Dantec et al. argue that formulating a pre-determined list of implicated values runs the risk of ignoring important values that can be elicited from any given empirical case, because those values are mapped a priori. [39] Manders-Huits instead takes on the concept of 'values' itself within VSD as the central issue. She argues that the traditional VSD definition of values as "what a person or group of people consider important in life" is nebulous and runs the risk of conflating stakeholders' preferences with moral values. [40]
Wessel Reijers and Bert Gordijn have built on the criticisms of Le Dantec et al. and Manders-Huits, arguing that the value heuristics of VSD are insufficient because they lack moral commitment. [38] They propose that a heuristic of virtues stemming from a virtue ethics approach to technology design, mostly influenced by the works of Shannon Vallor, provides a more holistic approach to technology design. Steven Umbrello has criticized this approach, arguing not only that the heuristic of values can be reinforced [41] but that VSD does make moral commitments to at least three universal values: human well-being, justice and dignity. [37] [5] Batya Friedman and David Hendry, in "Value Sensitive Design: Shaping Technology with Moral Imagination", argue that although earlier iterations of the VSD approach did not make explicit moral commitments, it has evolved over the past two decades to commit to at least those three fundamental values. [5]
VSD as a standalone approach has also been criticized as insufficient for the ethical design of artificial intelligence. [42] This criticism is predicated on the self-learning and opaque nature of artificial intelligence techniques such as machine learning and, as a consequence, the unforeseen or unforeseeable values or disvalues that may emerge after an AI system is deployed. Steven Umbrello and Ibo van de Poel propose a modified VSD approach that uses the Artificial Intelligence for Social Good (AI4SG) [43] factors as norms to translate abstract philosophical values into tangible design requirements. [44] They propose that full-lifecycle monitoring is necessary to encourage redesign in the event that unwanted values manifest themselves during the deployment of a system.
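The values-to-norms-to-design-requirements translation that Umbrello and van de Poel describe can be pictured as a simple hierarchy, with each abstract value specified by norms and each norm operationalized as concrete requirements. The sketch below illustrates that structure; the example value, norm, and requirements are invented for illustration and are not drawn from their paper.

```python
# Illustrative sketch of a values -> norms -> design requirements hierarchy.
# The entries below are hypothetical examples, not AI4SG's actual norms.

value_hierarchy = {
    "fairness": {
        "norms": ["prevent discriminatory outcomes"],
        "requirements": [
            "audit training data for representation gaps",
            "report error rates per demographic group",
        ],
    },
}

def design_requirements_for(value):
    """Return the concrete design requirements derived from a value,
    or an empty list if the value has not been operationalized yet."""
    entry = value_hierarchy.get(value)
    return entry["requirements"] if entry else []

print(design_requirements_for("fairness"))
```

In a full-lifecycle setting, such a hierarchy would be revisited after deployment: if monitoring reveals an unwanted value manifesting, the norms and requirements are revised and the system redesigned.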
In human–computer interaction, WIMP stands for "windows, icons, menus, pointer", denoting a style of interaction using these elements of the user interface. Other expansions are sometimes used, such as substituting "mouse" and "mice" for menus, or "pull-down menu" and "pointing" for pointer.
Robotic pets are artificially intelligent machines that are made to resemble actual pets. While the first robotic pets produced in the late 1990s were relatively simple, they have since grown more technologically sophisticated. Many now use machine learning, making them much more realistic. Most consumers buy robotic pets to get companionship similar to what biological pets offer, without some of the drawbacks of caring for live animals. The pets currently on the market span a wide price range, from the low hundreds to several thousands of dollars. Multiple studies have shown that people treat robotic pets in a similar way to actual pets, despite their obvious differences. However, there is some controversy over how ethical the use of robotic pets is, and whether they should be widely adopted in elderly care.
The Special Interest Group on Computer–Human Interaction (SIGCHI) is one of the Association for Computing Machinery's special interest groups which is focused on human–computer interactions (HCI).
Human–computer interaction (HCI) is research in the design and the use of computer technology, which focuses on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "human-computer interface".
Steve Whittaker is a Professor in human-computer interaction at the University of California Santa Cruz. He is best known for his research at the intersection of computer science and social science, in particular on computer-mediated communication and personal information management. He is a Fellow of the Association for Computing Machinery (ACM) and winner of the CSCW 2018 "Lasting Impact" award. He also received a Lifetime Research Achievement Award from SIGCHI and is a member of the SIGCHI Academy. He is Editor of the journal Human-Computer Interaction.
Eric Joel Horvitz is an American computer scientist, and Technical Fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. He was previously the director of Microsoft Research Labs, including research centers in Redmond, WA, Cambridge, MA, New York, NY, Montreal, Canada, Cambridge, UK, and Bangalore, India.
Jean-Daniel Fekete is a French computer scientist.
Alice Jane Brush is an American computer scientist known for her research in human-computer interaction, ubiquitous computing and computer supported collaborative work (CSCW). She is particularly known for her research studying and building technology for homes, as well as her expertise in conducting field studies of technology. She was co-chair of CRA-W from 2014 to 2017.
Design fiction is a design practice aiming at exploring and criticising possible futures by creating speculative, and often provocative, scenarios narrated through designed artifacts. It is a way to facilitate and foster debates, as explained by futurist Scott Smith: "... design fiction as a communication and social object creates interactions and dialogues around futures that were missing before. It helps make it real enough for people that you can have a meaningful conversation with".
Positive computing is a technological design perspective that embraces psychological well-being and ethical practice, aiming at building a digital environment to support happier and healthier users. Positive computing develops approaches that integrate insights from psychology, education, neuroscience, and HCI with technological development. The purpose of positive computing is to bridge the technology and mental health worlds. Indeed, there are computer and mental health workshops that are aimed to bring people from both communities together.
Projector-camera systems (pro-cam), also called camera-projector systems, augment a local surface with a projected captured image of a remote surface, creating a shared workspace for remote collaboration and communication. Projector-camera systems may also be used for artistic and entertainment purposes. A pro-cam system consists of a vertical screen for implementing interpersonal space where front-facing videos are displayed, and a horizontal projected screen on the tabletop for implementing shared workspace where downward facing videos are overlapped. An automatically pre-warped image is sent to the projector to ensure that the horizontal screen appears undistorted.
Abigail Jane Sellen is a Canadian cognitive scientist, industrial engineer, and computer scientist who works for Microsoft Research in Cambridge. She is also an honorary professor at the University of Nottingham and University College London.
Animal–computer interaction (ACI) is a field of research for the design and use of technology with, for and by animals covering different kinds of animals from wildlife, zoo and domesticated animals in different roles. It emerged from, and was heavily influenced by, the discipline of Human–computer interaction (HCI). As the field expanded, it has become increasingly multi-disciplinary, incorporating techniques and research from disciplines such as artificial intelligence (AI), requirements engineering (RE), and veterinary science.
Feminist HCI is a subfield of human-computer interaction (HCI) that applies feminist theory, critical theory and philosophy to social topics in HCI, including scientific objectivity, ethical values, data collection, data interpretation, reflexivity, and unintended consequences of HCI software. The term was originally used in 2010 by Shaowen Bardzell, and although the concept and original publication are widely cited, as of 2020 Bardzell's proposed frameworks have been rarely used since.
Wendy Elizabeth Mackay is a Canadian researcher specializing in human-computer interaction. She has served in all of the roles on the SIGCHI committee, including Chair. She is a member of the CHI Academy and a recipient of a European Research Council Advanced grant. She has been a visiting professor in Stanford University between 2010 and 2012, and received the ACM SIGCHI Lifetime Service Award in 2014.
Joëlle Coutaz is a French computer scientist, specializing in human-computer interaction (HCI). Her career includes research in the fields of operating systems and HCI, as well as being a professor at the University of Grenoble. Coutaz is considered a pioneer in HCI in France, and in 2007, she was awarded membership to SIGCHI. She was also involved in organizing CHI conferences and was a member on the editorial board of ACM Transactions on Computer-Human Interaction.
Andrew Cockburn is a Professor in the Department of Computer Science and Software Engineering at the University of Canterbury in Christchurch, New Zealand. He is in charge of the Human Computer Interactions Lab, where he conducts research focused on designing and testing user interfaces that integrate with inherent human factors.
Shumin Zhai is a Chinese-born American Canadian human-computer interaction (HCI) research scientist and inventor. He is known for his research on input devices and interaction methods, swipe-gesture-based touchscreen keyboards, eye-tracking interfaces, and models of human performance in human-computer interaction. His studies have contributed to both foundational models and understandings of HCI and practical user interface designs and flagship products. He previously worked at IBM, where he invented the ShapeWriter text entry method for smartphones, a predecessor to the modern Swype keyboard. His publications have won the ACM UIST Lasting Impact Award and the IEEE Computer Society Best Paper Award, among others. Zhai is currently a Principal Scientist at Google, where he leads and directs research, design, and development of human-device input methods and haptics systems.
Tawanna Dillahunt is an American computer scientist and information scientist based at the University of Michigan School of Information. She runs the Social Innovations Group, a research group that designs, builds, and enhances technologies to solve real-world problems. Her research has been cited over 4,600 times according to Google Scholar.
Batya Friedman is an American professor in the University of Washington Information School. She is also an adjunct professor in the Paul G. Allen School of Computer Science and Engineering and adjunct professor in the Department of Human-Centered Design and Engineering, where she directs the Value Sensitive Design Research Lab. She received her PhD in learning sciences from the University of California, Berkeley School of Education in 1988, and has an undergraduate degree from Berkeley in computer science and mathematics.