Computer user satisfaction (CUS) is the systematic measurement and evaluation of how well a computer system or application fulfills the needs and expectations of individual users. The measurement of computer user satisfaction studies how interactions with technology can be improved by adapting it to psychological preferences and tendencies.
Evaluating user satisfaction helps gauge product stability, track industry trends, and measure overall user contentment.
Fields like user interface (UI) design and user experience (UX) design focus on the direct interactions people have with a system. While UI and UX often rely on separate methodologies, they share the goal of making systems more intuitive, efficient, and appealing.
In the literature, there are a variety of terms for computer user satisfaction (CUS): "user satisfaction," "user information satisfaction" (UIS), "system acceptance," [1] "perceived usefulness," [2] "MIS appreciation," [3] "feelings about information systems," [4] and "system satisfaction". [5] This article refers to CUS, or user satisfaction. Ang and Koh (1997) describe user information satisfaction as "a perceptual or subjective measure of system success." [6] This means that CUS may differ in meaning and significance depending on the author's definition. In other words, users who are satisfied with a system according to one definition and measure may not be satisfied according to another, and vice versa.
According to Doll and Torkzadeh, CUS is defined as the opinion of the user about a specific computer application that they use. Ives and colleagues defined CUS as "the extent to which users believe the information system available to them meets their information requirements." [7]
Several studies have investigated whether certain factors influence CUS. Yaverbaum's study found that people who use their computers irregularly tend to be more satisfied than regular users. [8]
Mullany, Tan, and Gallupe claim that CUS is chiefly influenced by prior experience with the system or an analogue. Conversely, motivation, they suggest, is based on beliefs about the future use of the system. [9]
Using findings from CUS, product designers, business analysts, and software engineers anticipate change and prevent user loss by identifying missing features, shifts in requirements, general improvements, or corrections.
Satisfaction measurements are most often employed by companies or organizations to design their products to be more appealing to consumers, identify practices that could be streamlined, [10] harvest personal data to sell, [11] and determine the highest price they can set for the least quality. [12] For example, based on satisfaction metrics, a company may decide to discontinue support for an unpopular service. CUS may also be extended to employee satisfaction, for which similar motivations arise. As an ulterior motive, CUS surveys may also serve to pacify the group being surveyed, as they give respondents an outlet to vent frustrations.
Doll and Torkzadeh define CUS as "the opinion of the user about a specific computer application, which they use." Note that the term "user" can refer both to the user of a product and to the user of a device used to access a product. [7]
Bailey and Pearson's 39-factor Computer User Satisfaction (CUS) questionnaire and the User Information Satisfaction (UIS) questionnaire were both surveys with multiple qualities; that is, each asks respondents to rank or rate multiple categories. Bailey and Pearson asked participants to judge 39 qualities on five scales each: the first four scales recorded favorability ratings, and the fifth recorded an importance ranking. From the importance rankings, they found that their sample of users rated as most important "accuracy, reliability, timeliness, relevancy, and confidence," and as least important "feelings of control, volume of output, vendor support, degree of training, and organizational position of EDP (the electronic data processing or computing department)." However, the CUS questionnaire requires 39 × 5 = 195 responses. [13]

Ives, Olson, and Baroudi, among others, argued that so many responses could lead to errors of attrition: the longer the questionnaire, the more likely respondents are to fail to complete or return it. [14] This can reduce sample sizes and distort results, because those who do return long questionnaires may have different psychological traits from those who do not. Ives and colleagues developed the User Information Satisfaction (UIS) questionnaire to address this. The UIS asks respondents to rate only 13 metrics on two scales each, yielding 26 individual responses. More recently, Islam, Mervi, and Käköla argued that measuring CUS in industry settings is difficult because response rates often remain low, so an even simpler version of the CUS measurement method is needed. [15]
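The arithmetic behind these instrument sizes, and one common way of combining favorability ratings with importance weights into a single score, can be illustrated with a short sketch. The quality names, weights, and the weighted-mean scoring rule below are illustrative assumptions, not Bailey and Pearson's actual scoring procedure.

```python
# Illustrative sketch only: the quality names, weights, and scoring rule below are
# hypothetical and not taken from Bailey and Pearson's instrument.

# Bailey and Pearson: 39 qualities, each judged on 5 scales (4 favorability + 1 importance)
bailey_pearson_responses = 39 * 5      # 195 individual responses per respondent
# Ives, Olson, and Baroudi's UIS: 13 metrics, 2 scales each
uis_responses = 13 * 2                 # 26 individual responses per respondent

def weighted_satisfaction(favorability, importance):
    """Combine per-quality favorability ratings with importance weights.

    favorability: dict mapping quality -> mean of its favorability scales
    importance:   dict mapping quality -> importance weight for the same keys
    Returns an importance-weighted mean favorability score.
    """
    total_weight = sum(importance.values())
    return sum(favorability[q] * importance[q] for q in favorability) / total_weight

# Hypothetical example with three of the 39 qualities
favorability = {"accuracy": 6.2, "timeliness": 5.1, "vendor support": 3.8}
importance = {"accuracy": 5, "timeliness": 4, "vendor support": 2}
print(bailey_pearson_responses, uis_responses)                 # 195 26
print(round(weighted_satisfaction(favorability, importance), 2))  # 5.36
```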
An early criticism of these measures was that surveys would become outdated as computer technology evolves. This led to the synthesis of new metric-based surveys. Doll and Torkzadeh, for example, produced a metric-based survey for the "end user." They define end-users as those who tend to interact with a computer interface alone without the involvement of operational staff. [7] McKinney, Yoon, and Zahedi developed a model and survey for measuring web customer satisfaction. [16]
Another difficulty with most of these surveys is their lack of a foundation in psychological theory. Exceptions to this were the model of web site design success developed by Zhang and von Dran [17] and the measure of CUS with e-portals developed by Cheung and Lee. [18] Both of these models drew on Herzberg's two-factor theory of motivation. [19] Consequently, their qualities were designed to measure both "satisfiers" and "hygiene factors". However, Herzberg's theory has been criticized for being too vague, particularly in its failure to distinguish between terms such as motivation, job motivation, job satisfaction, etc. [20]
A study showed that, over the life of a system, user satisfaction on average increases as users gain experience with the system. [21] The study found that users' cognitive style (their preferred approach to problem solving) was not an accurate predictor of their actual CUS. System developers also participated in the study, and their cognitive style likewise showed no strong correlation with actual CUS. However, a strong correlation was observed between 85 and 652 days into using the system; that is, users' manner of thinking and their attitude toward the system became increasingly correlated over time. Some researchers have hypothesized that familiarity with a system may cause users to mentally assimilate to, and accommodate, that system. Mullany, Tan, and Gallupe devised an instrument, the System Satisfaction Schedule (SSS), which uses user-generated qualities and so avoids the problem of qualities becoming outdated. [21] They define CUS as the absence of user dissatisfaction and complaint, as assessed by users who have had at least some experience of using the system. Motivation, conversely, is based on beliefs about the future use of the system. [9] : 464
Currently, scholars and practitioners are experimenting with other measurement methods and further refinements to the definition of CUS. Some are replacing structured questionnaires with unstructured ones, in which the respondent is simply asked to write down or dictate everything about a system that either satisfies or dissatisfies them. One problem with this approach, however, is that it tends not to yield quantitative results, making comparisons and statistical analysis difficult.
Questionnaire construction refers to the design of a questionnaire to gather statistically useful information about a given topic. When properly constructed and responsibly administered, questionnaires can provide valuable data about any given subject.
Usability can be described as the capacity of a system to allow its users to perform tasks safely, effectively, and efficiently while enjoying the experience. In software engineering, usability is the degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use.
The technology acceptance model (TAM) is an information systems theory that models how users come to accept and use a technology.
Job satisfaction, employee satisfaction or work satisfaction is a measure of workers' contentment with their job, whether they like the job or individual aspects or facets of the job, such as the nature of the work or supervision. Job satisfaction can be measured in cognitive (evaluative), affective, and behavioral components. Researchers have also noted that job satisfaction measures vary in the extent to which they measure feelings about the job or cognitions about the job.
A Likert scale is a psychometric scale named after its inventor, American social psychologist Rensis Likert, which is commonly used in research questionnaires. It is the most widely used approach to scaling responses in survey research, such that the term is often used interchangeably with rating scale, although there are other types of rating scales.
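A minimal sketch of how 5-point Likert responses are commonly scored is shown below. The items and the reverse-scoring of negatively worded items are illustrative assumptions rather than part of any particular instrument discussed here.

```python
# Minimal sketch of scoring a 5-point Likert scale (1 = strongly disagree ... 5 = strongly agree).
# The items and the reverse-scoring convention are illustrative assumptions, not part of any
# specific instrument discussed above.

responses = {
    "The system is easy to use": 4,
    "The system responds quickly": 5,
    "I often feel frustrated with the system": 2,   # negatively worded item
}
negatively_worded = {"I often feel frustrated with the system"}

def likert_score(responses, negatively_worded, scale_max=5):
    """Reverse-score negatively worded items, then average across items."""
    adjusted = [
        (scale_max + 1 - v) if item in negatively_worded else v
        for item, v in responses.items()
    ]
    return sum(adjusted) / len(adjusted)

print(likert_score(responses, negatively_worded))  # (4 + 5 + 4) / 3 = 4.33...
```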
The two-factor theory states that there are certain factors in the workplace that cause job satisfaction while a separate set of factors cause dissatisfaction, all of which act independently of each other. It was developed by psychologist Frederick Herzberg.
A questionnaire is a research instrument that consists of a set of questions for the purpose of gathering information from respondents through a survey or statistical study. A research questionnaire is typically a mix of closed-ended and open-ended questions; open-ended, long-form questions allow the respondent to elaborate on their thoughts. The research questionnaire was developed by the Statistical Society of London in 1838.
The Kano model is a theory for product development and customer satisfaction developed in the 1980s by Noriaki Kano. This model provides a framework for understanding how different features of a product or service impact customer satisfaction, allowing organizations to prioritize development efforts effectively. According to the Kano Model, customer preferences are classified into five distinct categories, each representing different levels of influence on satisfaction.
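A Kano study typically pairs a "functional" question (how the respondent feels if a feature is present) with a "dysfunctional" question (how they feel if it is absent) and classifies each feature from the pair of answers. The sketch below assumes a simplified version of the standard evaluation table; the answer labels and the mapping are illustrative, not Kano's full table.

```python
# Simplified sketch of Kano-style classification. A real Kano study uses a full 5x5
# evaluation table over paired "functional" / "dysfunctional" answers; the reduced
# mapping below is an assumption for illustration only.

ANSWERS = ("like", "expect", "neutral", "tolerate", "dislike")

def classify(functional, dysfunctional):
    """Classify one feature from its paired answers (simplified)."""
    if functional == "like" and dysfunctional == "dislike":
        return "One-dimensional"      # satisfaction scales with performance
    if functional == "like" and dysfunctional in ("expect", "neutral", "tolerate"):
        return "Attractive"           # delights when present, no penalty when absent
    if functional in ("expect", "neutral", "tolerate") and dysfunctional == "dislike":
        return "Must-be"              # taken for granted, causes dissatisfaction if missing
    if functional == dysfunctional == "like" or functional == dysfunctional == "dislike":
        return "Questionable"         # contradictory answers
    return "Indifferent"

print(classify("like", "dislike"))     # One-dimensional
print(classify("neutral", "dislike"))  # Must-be
```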
SERVQUAL is a multi-dimensional research instrument designed to capture consumer expectations and perceptions of a service along five dimensions which are said to represent service quality. SERVQUAL is built on the expectancy–disconfirmation paradigm, which, in simple terms, means that service quality is understood as the extent to which consumers' pre-consumption expectations of quality are confirmed or disconfirmed by their actual perceptions of the service experience. The SERVQUAL questionnaire was first published in 1985 by a team of academic researchers in the United States, A. Parasuraman, Valarie Zeithaml and Leonard L. Berry, to measure quality in the service sector.
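Under the expectancy–disconfirmation paradigm, each dimension's gap score is the perception rating minus the expectation rating, with negative gaps indicating unmet expectations. The sketch below uses hypothetical single ratings per dimension; the actual instrument aggregates multiple items per dimension.

```python
# Minimal sketch of SERVQUAL-style gap scoring: for each dimension, the gap is the
# perception rating minus the expectation rating. The ratings below are made up;
# the real instrument averages several items per dimension.

expectations = {"tangibles": 6.0, "reliability": 6.5, "responsiveness": 6.2,
                "assurance": 6.4, "empathy": 6.1}
perceptions  = {"tangibles": 5.5, "reliability": 5.0, "responsiveness": 6.0,
                "assurance": 6.4, "empathy": 5.8}

gaps = {d: perceptions[d] - expectations[d] for d in expectations}
overall_gap = sum(gaps.values()) / len(gaps)

print(gaps)                   # negative values indicate expectations were not met
print(round(overall_gap, 2))  # -0.5
```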
Quality of experience (QoE) is a measure of the delight or annoyance of a customer's experiences with a service. QoE focuses on the entire service experience; it is a holistic concept, similar to the field of user experience, but with its roots in telecommunication. QoE is an emerging multidisciplinary field based on social psychology, cognitive science, economics, and engineering science, focused on understanding overall human quality requirements.
Customer satisfaction is a term frequently used in marketing to evaluate customer experience. It is a measure of how products and services supplied by a company meet or surpass customer expectations. Customer satisfaction is defined as "the number of customers, or percentage of total customers, whose reported experience with a firm, its products, or its services (ratings) exceeds specified satisfaction goals." Enhancing customer satisfaction and fostering customer loyalty are pivotal for businesses, given the importance of the relationship between customer attitudes before and after the consumption process.
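The quoted definition can be illustrated directly: given a set of reported experience ratings and a specified satisfaction goal, the measure is the share of customers whose rating exceeds the goal. The ratings and goal value below are hypothetical.

```python
# Illustrative sketch of the definition quoted above: the share of customers whose
# reported experience rating exceeds a specified satisfaction goal. Ratings and the
# goal value are hypothetical.

ratings = [9, 7, 8, 4, 10, 6, 8, 5, 9, 7]   # e.g. 0-10 experience ratings
goal = 7                                     # specified satisfaction goal

satisfied = [r for r in ratings if r > goal]
percentage_satisfied = 100 * len(satisfied) / len(ratings)

print(len(satisfied), f"{percentage_satisfied:.0f}%")   # 5 customers, 50%
```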
Computer-assisted web interviewing (CAWI) is an Internet surveying technique in which the interviewee follows a script provided on a website. The questionnaires are made in a program for creating web interviews. The program allows the questionnaire to contain pictures, audio and video clips, links to different web pages, etc. The website is able to customize the flow of the questionnaire based on the answers provided, as well as on information already known about the participant; a sketch of this idea follows below. It is considered a cheaper way of surveying because, unlike computer-assisted telephone interviewing, it does not require interviewers to administer the survey. With the increasing use of the Internet, online questionnaires have become a popular way of collecting information. The design of an online questionnaire has a dramatic effect on the quality of the data gathered. There are many factors in designing an online questionnaire; guidelines, available question formats, administration, quality, and ethical issues should be reviewed. Online questionnaires should be seen as a subset of a wider range of online research methods.
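The answer-dependent flow described above amounts to skip logic: the next question shown depends on the previous answer. The following sketch illustrates the idea with invented question identifiers and branching rules; real CAWI software expresses such logic through its own scripting or configuration.

```python
# Minimal sketch of answer-dependent questionnaire flow (skip logic) in a web survey.
# Question identifiers and branching rules are invented for illustration only.

def next_question(current_id, answer):
    """Return the id of the next question, based on the current answer (skip logic)."""
    if current_id == "uses_system" and answer == "no":
        return "reason_for_not_using"        # branch: only ask non-users why
    if current_id == "uses_system" and answer == "yes":
        return "satisfaction_rating"
    if current_id == "satisfaction_rating" and answer <= 2:
        return "dissatisfaction_details"     # branch: probe low ratings further
    return "end"

print(next_question("uses_system", "yes"))        # satisfaction_rating
print(next_question("satisfaction_rating", 2))    # dissatisfaction_details
```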
Managerial psychology is a sub-discipline of industrial and organizational psychology that focuses on the effectiveness of individuals and groups in the workplace, using behavioral science.
The unified theory of acceptance and use of technology (UTAUT) is a technology acceptance model formulated by Venkatesh and others in "User acceptance of information technology: Toward a unified view" in the organisational context. The UTAUT aims to explain user intentions to use an information system and subsequent usage behavior. The theory holds that there are four key constructs: 1) performance expectancy, 2) effort expectancy, 3) social influence, and 4) facilitating conditions. The UTAUT model was later extended to the consumer context by incorporating three new constructs (hedonic motivation, price value, and habit) into the original UTAUT; this extended version is popularly referred to as UTAUT2.
Patient experience describes the range of interactions that patients have with the healthcare system, including care from health plans, doctors, nurses, and staff in hospitals, physician practices, and other healthcare facilities. Understanding patient experience is a key step in moving toward patient-centered care. Evaluating patient experience provides a complete picture of healthcare quality. It reflects whether patients are receiving care that is respectful of and responsive to their preferences, needs, and values.
User experience evaluation (UXE) or user experience assessment (UXA) refers to a collection of methods, skills and tools utilized to uncover how a person perceives a system before, during and after interacting with it. It is non-trivial to assess user experience since user experience is subjective, context-dependent and dynamic over time. For a UXA study to be successful, the researcher has to select the right dimensions, constructs, and methods and target the research to the specific area of interest, such as games, transportation, or mobile devices.
With the application of probability sampling in the 1930s, surveys became a standard tool for empirical research in the social sciences, marketing, and official statistics. The methods involved in survey data collection are any of a number of ways in which data can be collected for a statistical survey; they are used to collect information from a sample of individuals in a systematic way. First there was the change from traditional paper-and-pencil interviewing (PAPI) to computer-assisted interviewing (CAI). Now, face-to-face surveys (CAPI), telephone surveys (CATI), and mail surveys are increasingly being replaced by web surveys. In addition, remote interviewers can keep respondents engaged while reducing costs compared with in-person interviewers.
The Questionnaire For User Interaction Satisfaction (QUIS) is a tool developed to assess users' subjective satisfaction with specific aspects of the human-computer interface. It was developed in 1987 by a multi-disciplinary team of researchers at the University of Maryland Human–Computer Interaction Lab. The QUIS is currently at version 7.0, which includes a demographic questionnaire, a measure of overall system satisfaction along six scales, and measures of nine specific interface factors: screen factors, terminology and system feedback, learning factors, system capabilities, technical manuals, on-line tutorials, multimedia, teleconferencing, and software installation. It is currently available in German, Italian, Portuguese, and Spanish.
Library assessment is a process undertaken by libraries to learn about the needs of users and to evaluate how well they support these needs, in order to improve library facilities, services and resources. In many libraries successful library assessment is dependent on the existence of a 'culture of assessment' in the library whose goal is to involve the entire library staff in the assessment process and to improve customer service.
The Patient-Reported Outcomes Measurement Information System (PROMIS) provides clinicians and researchers access to reliable, valid, and flexible measures of health status that assess physical, mental, and social well–being from the patient perspective. PROMIS measures are standardized, allowing for assessment of many patient-reported outcome domains—including pain, fatigue, emotional distress, physical functioning and social role participation—based on common metrics that allow for comparisons across domains, across chronic diseases, and with the general population. Further, PROMIS tools allow for computer adaptive testing, efficiently achieving precise measurement of health status domains with few items. There are PROMIS measures for both adults and children. PROMIS was established in 2004 with funding from the National Institutes of Health (NIH) as one of the initiatives of the NIH Roadmap for Medical Research.