Computational trust

In information security, computational trust is the generation of trusted authorities or user trust through cryptography. In centralised systems, security is typically based on the authenticated identity of external parties. Rigid authentication mechanisms, such as public key infrastructures (PKIs) [1] or Kerberos, [2] have allowed this model to be extended to distributed systems within a few closely collaborating domains or within a single administrative domain. In recent years, computer science has moved from centralised systems to distributed computing. This evolution has several implications for the security models, policies and mechanisms needed to protect users' information and resources in an increasingly interconnected computing infrastructure. [3]

Identity-based security mechanisms cannot authorise an operation without authenticating the claiming entity. This means that no interaction can occur unless both parties are known by their authentication frameworks. Spontaneous interactions would therefore require a single trusted certificate authority (CA), or a small set of them. In the present context, PKIs have not been considered, since their known limitations make it unlikely that they will establish themselves as a reference standard in the near future. A user who wishes to collaborate with another party must therefore choose between enabling security and thereby disabling spontaneous collaboration, or disabling security and enabling spontaneous collaboration. It is fundamental that mobile users and devices can authenticate in an autonomous way without relying on a common authentication infrastructure. To face this problem, we need to examine the challenges introduced by "global computing", [4] a term coined by the EU for the future of the global information society, and to identify their impact on security.

Cryptocurrencies, such as Bitcoin, use methods such as proof of work (PoW) to achieve computational trust inside the transaction network.
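
To make the mechanism concrete, here is a minimal proof-of-work sketch in Python. It is illustrative only: the single SHA-256 pass, the 8-byte big-endian nonce and the leading-zero-bits difficulty encoding are simplifying assumptions, not Bitcoin's actual block format. The point is the asymmetry: finding a valid nonce requires many hash evaluations, while checking one requires a single hash.

```python
import hashlib

def mine(data: bytes, difficulty_bits: int = 16) -> int:
    """Search for a nonce such that SHA-256(data || nonce) falls below the target."""
    target = 1 << (256 - difficulty_bits)  # smaller target = more expected work
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    """Checking a solution costs one hash, however long mining took."""
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = mine(b"block payload")
print(nonce, verify(b"block payload", nonce))  # True: cheap to verify
```

Because the only known way to find a qualifying nonce is brute-force search, a valid nonce is evidence of expended computation, which is what lets participants trust the transaction history without trusting each other.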

History

Computational trust applies the human notion of trust to the digital world, which is seen as malicious rather than cooperative. The expected benefits, according to Marsh et al., are the use of others' abilities through delegation and increased cooperation in an open and less protected environment. Research into computational mechanisms for trust and reputation in virtual societies is directed towards increasing the reliability and performance of digital communities. [5]

A trust-based decision in a specific domain is a multi-stage process. The first step consists of identifying and selecting the proper input data, that is, the trust evidence. In general, these are domain-specific and derived from an analysis of the application involved. In the next step, a trust computation is performed on the evidence to produce trust values: estimates of the trustworthiness of entities in that particular domain. The selection of evidence and the subsequent trust computation are informed by a notion of trust defined in the trust model. Finally, the trust decision is taken by considering the computed values and exogenous factors, such as disposition or risk assessments.
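
The following sketch illustrates the three stages in Python. The Evidence record, the weighted-average computation and the risk-discounted threshold are hypothetical choices made for illustration; real models define each stage in domain-specific ways, as the text notes.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    outcome: float   # 1.0 = successful interaction, 0.0 = failed
    weight: float    # domain-specific relevance of this observation

def compute_trust(evidence: list[Evidence]) -> float:
    """Stage 2: reduce domain-specific evidence to a trust value in [0, 1]."""
    total = sum(e.weight for e in evidence)
    return sum(e.outcome * e.weight for e in evidence) / total if total else 0.5

def decide(trust: float, risk: float, threshold: float = 0.5) -> bool:
    """Stage 3: combine the trust value with an exogenous risk assessment."""
    return trust * (1.0 - risk) >= threshold

# Stage 1: evidence selected for the application at hand (hypothetical data).
history = [Evidence(1.0, 2.0), Evidence(0.0, 1.0), Evidence(1.0, 1.0)]
print(decide(compute_trust(history), risk=0.2))
```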

Defining trust

These concepts have gained relevance in computer science over the last decade, particularly in the area of distributed artificial intelligence. The multi-agent system paradigm and the growth of e-commerce have increased interest in trust and reputation. In fact, trust and reputation systems have been recognized as key factors for electronic commerce. These systems are used by intelligent software agents as an incentive in decision-making, when deciding whether or not to honor contracts, and as a mechanism to search for trustworthy exchange partners. In particular, reputation is used in electronic markets as a trust-enforcing mechanism and as a method to avoid cheating and fraud. [6]

Another area of application of these concepts in agent technology is teamwork and cooperation. [7] Several definitions of the human notion of trust have been proposed in recent years in different domains, from sociology and psychology to political and business science. These definitions may even change according to the application domain. For example, Romano's definition [8] tries to encompass the previous work in all these domains:

Trust is a subjective assessment of another’s influence in terms of the extent of one’s perception about the quality and significance of another’s impact over one’s outcomes in a given situation, such that one’s expectation of, openness to, and inclination toward such influence provide a sense of control over the potential outcomes of the situation.

Trust and reputation both have a social value. When someone is trustworthy, that person may be expected to perform in a way that is beneficial, or at least not suspicious, which assures others, with high probability, of good collaborations with him. Conversely, when someone appears untrustworthy, others refrain from collaborating, since there is a lower probability that these collaborations will be successful. [9]

According to Gambetta's classic definition:

Trust is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.

Trust is strongly connected to confidence, and it implies some degree of uncertainty, hopefulness or optimism. Marsh [10] eventually addressed the issue of formalizing trust as a computational concept in his PhD thesis; his trust model is based on social and psychological factors.

Trust model classification

Many proposals have appeared in the literature; a selection of computational trust and reputation models representing a good sample of current research is presented here. [11]

Trust and reputation can be analysed from different points of view and applied in many situations. The following classification is based on the distinctive characteristics of these models and the environments in which they evolve.

Conceptual model

Trust and reputation models can be characterized as follows:

In models based on a cognitive approach, trust and reputation are made up of underlying beliefs and are a function of the degree of those beliefs. [12] The mental states that lead an agent to trust another agent, or to assign a reputation, are an essential part of the model, as are the mental consequences of the decision and of the act of relying on another agent.

In neurological trust models, the interaction between affective and cognitive states is also modeled at a neurological level, using theories on the embodiment of emotions. [13] In these models, the trust dynamics relate to experiences with (external) sources, from both a cognitive and an affective perspective. More specifically, converging recursive body loops are modeled for feeling the emotion associated with a mental state. In addition, different adaptation processes based on Hebbian learning (for the strength of the connections to the emotional responses) are introduced, inspired by the Somatic Marker Hypothesis. [14]

In probabilistic, game-theoretical models, trust and reputation are considered subjective probabilities by which an individual A expects another individual B to perform a given action on which A's welfare depends. [15]

In this approach, trust and reputation are not the result of a mental state of the agent in a cognitive sense, but of a more pragmatic game with utility functions and the numerical aggregation of past interactions. A minimal sketch of this view follows.
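
A common probabilistic instantiation, in the style of Beta reputation systems (an illustrative choice, not a model prescribed above), treats trust as the expected value of a Beta distribution over the counts of past positive and negative interactions:

```python
def beta_trust(positive: int, negative: int) -> float:
    """Expected probability of a good outcome, with a uniform Beta(1, 1) prior."""
    return (positive + 1) / (positive + negative + 2)

print(beta_trust(8, 2))   # 0.75: mostly good history
print(beta_trust(0, 0))   # 0.5: no evidence, fall back to the prior
```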

Information sources

Models can also be classified by the information sources used to compute trust and reputation values. The traditional sources are direct experiences and witness information, but recent models have started to consider the connection between information and the sociological aspects of agents' behavior. Combining several information sources can increase the reliability of the results, but conversely it increases the complexity of the model.

Direct experiences

Direct experience is the most relevant and reliable information source for a trust/reputation model. Two types of direct experience can be recognized:

  • the experience based on the direct interaction with the interlocutor;
  • the experience based on the observed interaction of the other members of a community.
Witness information

Witness information, also called indirect information, comes from the experience of other members of the community. It can be based on their own direct experiences or on data they have gathered from others' experiences. Witness information is usually the most abundant, but its use is complex for trust and reputation modelling: it introduces uncertainty, and agents can manipulate or hide parts of the information for their own benefit.
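
One simple, hypothetical way to handle this uncertainty is to discount each witness report by the trust placed in the witness itself, then blend the result with direct experience. The discount rule and the blending weight below are assumptions for illustration:

```python
def combine(direct: float, witness_reports: list[tuple[float, float]],
            witness_weight: float = 0.5) -> float:
    """witness_reports holds (reported_trust, trust_in_witness) pairs."""
    total = sum(trust_in_w for _, trust_in_w in witness_reports)
    if total == 0:
        return direct  # no credible witnesses: rely on direct experience only
    # Reports from witnesses we trust more count proportionally more.
    discounted = sum(report * trust_in_w
                     for report, trust_in_w in witness_reports) / total
    return (1 - witness_weight) * direct + witness_weight * discounted

print(combine(0.9, [(0.4, 0.8), (0.7, 0.2)]))  # witnesses pull the estimate down
```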

Sociological information

People who belong to a community establish different types of relations. Each individual plays one or several roles in that society, which influence their behavior and their interactions with other people. In a multi-agent system, where interactions are plentiful, the social relations among agents are a simplified reflection of the more complex relations of their human counterparts. [16] Only a few trust and reputation models adopt this sociological information, using techniques like social network analysis: a set of methods for the analysis of social structures that specifically allows investigation of their relational aspects. [17]
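
As a sketch of how sociological information might feed a trust model, the following example runs PageRank over a weighted "who trusts whom" graph and reads the scores as a global reputation proxy. The use of PageRank and the sample graph are illustrative assumptions, not a method named in the text; it requires the third-party networkx package.

```python
import networkx as nx

# Directed edges encode stated trust from one agent to another.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("alice", "bob", 0.9),   # alice trusts bob strongly
    ("bob", "carol", 0.8),
    ("alice", "carol", 0.3),
    ("carol", "alice", 0.7),
])

# Agents trusted by well-trusted agents score higher.
reputation = nx.pagerank(G, weight="weight")
print(sorted(reputation.items(), key=lambda kv: -kv[1]))
```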

Prejudice and bias

Prejudice is another, though uncommon, mechanism that influences trust and reputation. According to this method, an individual is ascribed the properties of a particular group that make him recognisable as a member; these can be signs such as a uniform, a definite behavior, etc. [18]

As most people use the word today, prejudice refers to a negative or hostile attitude towards another social group, often one defined racially. However, this negative connotation has to be revised when the term is applied to agent communities: the sets of signs used in computational trust and reputation models usually fall outside ethical discussion, unlike the signs used in human societies, such as skin color or gender.

Most of the literature in the cognitive and social sciences claims that humans exhibit non-rational, biased behavior with respect to trust. Recently, biased human trust models have been designed, analyzed and validated against empirical data. The results show that such biased trust models predict human trust significantly better than unbiased trust models. [19] [20]
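
A minimal illustration of one such bias, assuming asymmetric learning rates rather than the validated models of [19] [20]: negative experiences move trust more than positive ones.

```python
def update(trust: float, good: bool,
           gain_rate: float = 0.1, loss_rate: float = 0.3) -> float:
    """Move trust towards 1.0 on good outcomes and 0.0 on bad ones,
    with losses weighted more heavily than gains (negativity bias)."""
    target = 1.0 if good else 0.0
    rate = gain_rate if good else loss_rate
    return trust + rate * (target - trust)

t = 0.5
for outcome in [True, True, False]:
    t = update(t, outcome)
print(round(t, 3))  # one failure undoes more than two successes gained
```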

Discussion on trust/reputation models

The most relevant sources of information considered by the trust and reputation models presented above are direct experiences and witness information. In e-markets, sociological information is almost non-existent; to increase the efficiency of current trust and reputation models, it should be considered. However, there is no reason to increase the complexity of models by introducing trust evidence if it is later used in an environment where its capabilities cannot be realised. Aggregating more trust and reputation evidence is useful in a computational model, but it can increase complexity, making a general solution difficult. Several models depend on the characteristics of the environment, and a possible solution is the use of adaptive mechanisms that can modify how different sources of information are combined in a given environment. Many trust and reputation definitions have been presented, and there are several works that give meaning to both concepts. [21] [22] [23] [24]

There is a relation between the two concepts that deserves in-depth consideration: reputation is a concept that helps to build trust in others. Nowadays, game theory is the predominant paradigm for designing computational trust and reputation models. In all likelihood, this is because a significant number of economists and computer scientists, with strong backgrounds in game theory and artificial intelligence techniques, are working in multi-agent and e-commerce contexts. Game-theoretical models produce good results, but may become too restrictive as the complexity of the agents, in terms of social relations and interaction, increases. New possibilities should be explored, for example merging cognitive approaches with game-theoretical ones. Apart from that, more trust evidence should be considered, and time-sensitive trust metrics [25] [26] represent a first step towards the improvement of computational trust. [27]

An important issue in modeling trust is the transferability of trust judgements between agents. Social scientists agree that unqualified trust values are not transferable, but a more pragmatic approach would conclude that qualified trust judgments are worth transferring insofar as decisions taken in light of others' opinions are better than those taken in isolation. In [28] the authors investigated the problem of trust transferability in open distributed environments, proposing a translation mechanism able to make information exchanged between agents more accurate and useful.

Evaluation of trust models

Currently, there is no commonly accepted evaluation framework or benchmark that would allow a comparison of the models under a set of representative and common conditions. A game-theoretic approach in this direction has been proposed, [29] in which the configuration of a trust model is optimized assuming attackers with optimal attack strategies; this then allows the expected utility of different trust models to be compared. Similarly, a model-based analytical framework for predicting the effectiveness of reputation mechanisms against arbitrary attack models in arbitrary system models has been proposed for peer-to-peer systems. [30]
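
The flavour of such an evaluation can be conveyed with a toy simulation: each trust model is run against a strategic attacker and the resulting utilities are compared. The models, the on-off attack strategy and the payoffs below are invented for illustration, not taken from [29] or [30].

```python
def attacker(round_no: int) -> bool:
    """Builds reputation early, then defects (a simple on-off attack)."""
    return round_no < 20  # True = behaves well

def run(model_update, rounds: int = 40, threshold: float = 0.5) -> float:
    """Interact whenever trust clears the threshold; tally the utility."""
    trust, utility = 0.5, 0.0
    for r in range(rounds):
        if trust >= threshold:                 # we choose to interact
            good = attacker(r)
            utility += 1.0 if good else -2.0   # losses hurt more than gains
            trust = model_update(trust, good)
    return utility

fast = lambda t, good: 0.5 * t + 0.5 * (1.0 if good else 0.0)  # reactive model
slow = lambda t, good: 0.9 * t + 0.1 * (1.0 if good else 0.0)  # sluggish model
print("reactive:", run(fast), "sluggish:", run(slow))
```

In this toy run the reactive model cuts its losses after the attacker turns, while the sluggish one keeps interacting and accumulates losses, which is the kind of difference in expected utility such evaluations are designed to expose.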

See also

  • Cognitive science
  • Simple Network Management Protocol
  • Social simulation
  • Experimental economics
  • Soar (cognitive architecture)
  • Trust (social science)
  • Trust metric
  • Computational sociology
  • Multilevel security
  • Autonomous agent
  • Actor model
  • Reputation system
  • Shoulder surfing
  • Virgil D. Gligor
  • Ron Sun
  • Human–computer interaction
  • Informatics
  • Natural computing
  • Swift trust theory
  • Opportunistic mobile social networks

References

  1. Weise, J. (August 2001). "Public Key Infrastructure Overview". SunPS Global Security Practice, Sun Microsystems.
  2. Kohl, J.; Neuman, B. C. (1993). "The Kerberos Network Authentication Service (Version 5)". Internet Request for Comments RFC 1510.
  3. Seigneur, J. M. (2005). Trust, Security and Privacy in Global Computing. PhD thesis, University of Dublin, Trinity College.
  4. "IST, Global Computing, EU" (2004).
  5. Longo, L.; Dondio, P.; Barrett, S. (2007). "Temporal Factors to Evaluate Trustworthiness of Virtual Identities" (PDF). Third International Workshop on the Value of Security through Collaboration, SECURECOMM.
  6. Dellarocas, C. (2003). "The Digitalization of Word-Of-Mouth: Promise and Challenges of Online Reputation Mechanisms". Management Science.
  7. Montaner, M.; Lopez, B.; De La Rosa, J. (2002). "Developing Trust in Recommender Agents". Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-02).
  8. Romano, D. M. (2003). The Nature of Trust: Conceptual and Operational Clarification. PhD thesis, Louisiana State University.
  9. Gambetta, D. "Can We Trust Trust?". In Trust: Making and Breaking Cooperative Relations. Basil Blackwell, Oxford, pp. 213–237.
  10. Marsh, S. (1994). Formalizing Trust as a Computational Concept. PhD thesis, University of Stirling, Department of Computer Science and Mathematics.
  11. Sabater, J.; Sierra, C. (2005). "Review on Computational Trust and Reputation Models". Artificial Intelligence Review 24: 33–60. Springer.
  12. Esfandiari, B.; Chandrasekharan, S. (2001). "On How Agents Make Friends: Mechanisms for Trust Acquisition". In Proceedings of the Fourth Workshop on Deception, Fraud and Trust in Agent Societies, Montreal, Canada, pp. 27–34.
  13. Hoogendoorn, M.; Jaffry, S. W.; Treur, J. (2011). Advances in Cognitive Neurodynamics (II). Springer, Dordrecht, pp. 523–536. CiteSeerX 10.1.1.160.2535. doi:10.1007/978-90-481-9695-1_81. ISBN 9789048196944.
  14. Jaffry, S. W.; Treur, J. (2009). "Comparing a Cognitive and a Neural Model for Relative Trust Dynamics". Neural Information Processing, Lecture Notes in Computer Science. Springer, Berlin, Heidelberg, pp. 72–83. CiteSeerX 10.1.1.149.7940. doi:10.1007/978-3-642-10677-4_8. ISBN 9783642106767.
  15. Gambetta, D. "Can We Trust Trust?". In Trust: Making and Breaking Cooperative Relations. Basil Blackwell, Oxford, pp. 213–237.
  16. Hoogendoorn, M.; Jaffry, S. W. (August 2009). "The Influence of Personalities Upon the Dynamics of Trust and Reputation". 2009 International Conference on Computational Science and Engineering, Vol. 3, pp. 263–270. doi:10.1109/CSE.2009.379. ISBN 978-1-4244-5334-4. S2CID 14294422.
  17. Scott, J.; Tallia, A.; Crosson, J. C.; Orzano, A. J.; Stroebel, C.; DiCicco-Bloom, B.; O'Malley, D.; Shaw, E.; Crabtree, B. (September 2005). "Social Network Analysis as an Analytic Tool for Interaction Patterns in Primary Care Practices". Annals of Family Medicine 3 (5): 443–448. doi:10.1370/afm.344. PMC 1466914. PMID 16189061.
  18. Bacharach, M.; Gambetta, D. (2001). "Trust in Signs". In Trust in Society. Russell Sage Foundation.
  19. Hoogendoorn, M.; Jaffry, S. W.; van Maanen, P. P.; Treur, J. (2011). Modeling and Validation of Biased Human Trust. IEEE Computer Society Press.
  20. Hoogendoorn, M.; Jaffry, S. W.; van Maanen, P. P.; Treur, J. (2013). "Modelling Biased Human Trust Dynamics". Web Intelligence and Agent Systems 11 (1): 21–40. doi:10.3233/WIA-130260. ISSN 1570-1263.
  21. McKnight, D. H.; Chervany, N. L. (1996). "The Meanings of Trust". Technical report, University of Minnesota Management Information Systems Research Center.
  22. McKnight, D. H.; Chervany, N. L. (2002). "Conceptualizing Trust: A Typology and E-Commerce Customer Relationships Model". In Proceedings of the 34th Hawaii International Conference on System Sciences.
  23. Mui, L.; Halberstadt, A.; Mohtashemi, M. (2002). "Notions of Reputation in Multi-Agent Systems: A Review". In Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-02), Bologna, Italy, pp. 280–287.
  24. Dondio, P.; Longo, L. (2011). "Trust-Based Techniques for Collective Intelligence in Social Search Systems". Next Generation Data Technologies for Collective Computational Intelligence, Studies in Computational Intelligence, Vol. 352. Springer, pp. 113–135. doi:10.1007/978-3-642-20344-2_5. ISBN 978-3-642-20343-5.
  25. Longo, L. (2007). Security Through Collaboration in Global Computing: A Computational Trust Model Based on Temporal Factors to Evaluate Trustworthiness of Virtual Identities. Master's thesis, Insubria University.
  26. Quercia, D.; Hailes, S.; Capra, L. (2006). "B-trust: Bayesian Trust Framework for Pervasive Computing" (PDF). iTrust.
  27. Seigneur, J. M. (2006). "Ambitrust? Immutable and Context Aware Trust Fusion". Technical report, University of Geneva.
  28. Dondio, P.; Longo, L.; et al. (June 2008). "A Translation Mechanism for Recommendations" (PDF). In Karabulut, Y.; Mitchell, J. C.; Herrmann, P.; Jensen, C. D. (eds.), Trust Management II: Proceedings of IFIPTM 2008: Joint iTrust and PST Conferences on Privacy, Trust Management and Security. IFIP Vol. 263. Trondheim, Norway: Springer, pp. 87–102. doi:10.1007/978-0-387-09428-1_6. ISBN 978-0-387-09427-4.
  29. Staab, E.; Engel, T. (2009). "Tuning Evidence-Based Trust Models". 2009 International Conference on Computational Science and Engineering. Vancouver, Canada: IEEE, pp. 92–99. doi:10.1109/CSE.2009.209. ISBN 978-1-4244-5334-4.
  30. Lagesse, B. (2012). "Analytical Evaluation of P2P Reputation Systems" (PDF). International Journal of Communication Networks and Distributed Systems 9: 82–96. CiteSeerX 10.1.1.407.7659. doi:10.1504/IJCNDS.2012.047897.