Trust metric

Schematic diagram of a web of trust

In psychology and sociology, a trust metric is a measurement or metric of the degree to which one social actor (an individual or a group) trusts another social actor. Trust metrics may be abstracted in a manner that can be implemented on computers, making them of interest for the study and engineering of virtual communities, such as Friendster and LiveJournal.


Trust escapes simple measurement because its meaning is too subjective for universally reliable metrics, and because it is a mental process, unavailable to instruments. There is a strong argument [1] against the use of simplistic metrics to measure trust, owing to the complexity of the process and the 'embeddedness' of trust, which makes it impossible to isolate trust from related factors.

There is no generally agreed set of properties that makes one trust metric better than another, as each metric is designed to serve a different purpose; [2], for example, provides a classification scheme for trust metrics. Two groups of trust metrics can be identified, and are discussed in turn below: empirical metrics and formal metrics.

Trust metrics enable trust modelling [3] and reasoning about trust. They are closely related to reputation systems. Simple forms of binary trust metrics can be found, for example, in PGP. [4] The first commercial forms of trust metrics in computer software appeared in applications like eBay's Feedback Rating. Slashdot introduced its notion of karma, earned for activities perceived to promote group effectiveness, an approach that has been very influential in later virtual communities.

Empirical metrics

Empirical metrics capture the value of trust by exploring the behaviour or introspection of people, to determine the perceived or expressed level of trust. These methods combine a theoretical background (determining what it is that they measure) with a defined set of questions and statistical processing of the results.

The willingness to cooperate, as well as actual cooperation, are commonly used both to demonstrate and to measure trust. The actual value (level of trust and/or trustworthiness) is assessed from the difference between observed behaviours and hypothetical ones, i.e. those that would have been anticipated in the absence of cooperation.

Surveys

Surveys capture the level of trust through observation or introspection, without engaging in any experiments. Respondents usually provide answers to a set of questions or statements, with responses structured, for example, according to a Likert scale. Surveys differ in their underlying theoretical background and contextual relevance.
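As a toy illustration of how such survey responses become a number, the sketch below averages hypothetical Likert responses, flipping reverse-coded (negatively phrased) items. The item count, scale, and coding choices are illustrative and not taken from any published instrument.

```python
# Sketch: scoring a hypothetical 5-point Likert trust survey.
# Items, scale, and reverse-coding are illustrative, not from any real scale.

def likert_trust_score(responses, reverse_coded=(), points=5):
    """Average the responses (1..points), flipping reverse-coded items."""
    scored = []
    for i, r in enumerate(responses):
        if not 1 <= r <= points:
            raise ValueError(f"response {r} outside 1..{points}")
        # A strong agreement with a negatively phrased item signals low trust.
        scored.append(points + 1 - r if i in reverse_coded else r)
    return sum(scored) / len(scored)

# Respondent agrees with the trust items; item 2 is phrased negatively.
print(likert_trust_score([5, 4, 2, 5], reverse_coded={2}))  # → 4.5
```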

Among the earliest surveys are McCroskey's scales, [5] which have been used to determine the authoritativeness (competence) and character (trustworthiness) of speakers. Rempel's trust scale [6] and Rotter's scale [7] are quite popular in determining the level of interpersonal trust in different settings. The Organizational Trust Inventory (OTI) [8] is an example of an exhaustive, theory-driven survey that can be used to determine the level of trust within an organisation.

For a particular research area, a more specific survey can be developed. For example, the interdisciplinary model of trust [9] has been verified using a survey, while [10] uses a survey to establish the relationship between design elements of a web site and its perceived trustworthiness.

Games

Another empirical method to measure trust is to engage participants in experiments, treating the outcome of such experiments as estimates of trust. Several games and game-like scenarios have been tried, some of which estimate trust or confidence in monetary terms (see [11] for an interesting overview).

Games of trust are designed so that their Nash equilibrium differs from the Pareto optimum: no player can maximise their own utility by altering their selfish strategy alone, while cooperating partners can benefit. Trust can therefore be estimated on the basis of the monetary gain attributable to cooperation.

The original 'game of trust' was described in [12] as an abstracted investment game between an investor and their broker. The game can be played once or several times, between randomly chosen players or in pairs that know each other, yielding different results.
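A minimal sketch of one round of such an investment game, with the transferred amount tripled in transit as in the original design; the concrete endowment and returned fraction below are illustrative.

```python
# Sketch of one round of the investor-broker ("trust") game of Berg et al. [12]:
# the investor sends part of an endowment, the amount is multiplied in transit,
# and the trustee returns some share. Parameter values are illustrative.

def trust_game(endowment, sent, returned_fraction, multiplier=3):
    """Return (investor_payoff, trustee_payoff) for one round."""
    assert 0 <= sent <= endowment
    pot = sent * multiplier                 # investment grows in transit
    returned = pot * returned_fraction      # trustee's reciprocation
    investor = endowment - sent + returned
    trustee = pot - returned
    return investor, trustee

# Full trust with a fair split leaves both better off than the
# no-trust outcome (10, 0) — the gain attributable to cooperation.
print(trust_game(endowment=10, sent=10, returned_fraction=0.5))  # → (15.0, 15.0)
```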

Several variants of the game exist, focusing on different aspects of trust as the observable behaviour. For example, the rules of the game can be reversed into what can be called a game of distrust, [13] a declaratory phase can be introduced, [14] or the rules can be presented in a variety of ways, altering the perception of participants.

Other games include binary-choice trust games, [15] the gift-exchange game, [16] cooperative trust games, and various other forms of social games. The Prisoner's Dilemma [17] in particular is popularly used to link trust with economic utility and to demonstrate the rationality behind reciprocity. For multi-player games, different forms of closed market simulations exist. [18]

Formal metrics

Formal metrics focus on facilitating trust modelling, especially for large-scale models that represent trust as an abstract system (e.g. a social network or web of trust). Consequently, they may provide weaker insight into the psychology of trust, or into the particulars of empirical data collection. Formal metrics tend to have strong foundations in algebra, probability or logic.

Representation

There is no widely recognised way to attribute a value to the level of trust, and each representation of a 'trust value' claims certain advantages and disadvantages. There are systems that assume only binary values, [19] that use a fixed scale, [20] where confidence ranges from −100 to +100 while excluding zero, [21] from 0 to 1, [22] [23] or over the half-open interval [−1, +1); [24] systems where confidence is discrete or continuous, one-dimensional or multi-dimensional. [25] Some metrics use an ordered set of values without attempting to convert them to any particular numerical range (e.g. [26]; see [27] for a detailed overview).
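As a rough illustration of this representational variety, the sketch below maps a few of the mentioned scales onto a common [0, 1] range. The scale names and the purely linear rescaling are simplifications introduced here; real metrics attach richer semantics to their values, which is precisely why no such universal conversion exists.

```python
# Sketch: naively converting trust values from several of the scales mentioned
# above (binary, -100..+100, [-1, +1), 0..1) into a common [0, 1] range.
# Scale names and linear rescaling are this sketch's own simplifications.

def to_unit_interval(value, scale):
    if scale == "binary":          # trusted / not trusted
        return 1.0 if value else 0.0
    if scale == "percent":         # -100..+100, zero excluded
        return (value + 100) / 200
    if scale == "half_open":       # [-1, +1), as in Marsh-style metrics
        return (value + 1) / 2
    if scale == "unit":            # already 0..1
        return float(value)
    raise ValueError(f"unknown scale: {scale}")

print(to_unit_interval(50, "percent"))    # → 0.75
print(to_unit_interval(-1, "half_open"))  # → 0.0
```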

There is also disagreement about the semantics of some values. The disagreement regarding the attribution of values to levels of trust is especially visible when it comes to the meaning of zero and of negative values. For example, zero may indicate the lack of trust (but not distrust), a lack of information, or a deep distrust. Negative values, if allowed, usually indicate distrust, but there is doubt [28] as to whether distrust is simply trust with a negative sign or a phenomenon of its own.

Subjective probability

Subjective probability [29] focuses on the trustor's self-assessment of their trust in the trustee. Such an assessment can be framed as an anticipation of the trustee's future behaviour and expressed in terms of probability. The probability is subjective because it is specific to the given trustor: their assessment of the situation, the information available to them, etc. In the same situation, other trustors may arrive at a different subjective probability.

Subjective probability creates a valuable link between formalisation and empirical experimentation. Formally, subjective probability can benefit from the available tools of probability and statistics. Empirically, subjective probability can be measured through one-sided bets: assuming that the potential gain is fixed, the amount that a person is willing to bet can be used to estimate their subjective probability of a transaction.
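Under the indifference assumption implicit in the paragraph above (the bettor stakes an amount whose expected payoff is exactly zero), the estimate reduces to a simple ratio; the numbers below are illustrative.

```python
# Sketch: estimating subjective probability from a one-sided bet.
# If a person is willing to stake `bet` to win a fixed `gain`, indifference
# implies p * gain = bet, hence p ≈ bet / gain. Values are illustrative.

def subjective_probability(bet, gain):
    """Probability at which the bet is exactly fair for the bettor."""
    if not 0 <= bet <= gain:
        raise ValueError("bet must lie between 0 and the fixed gain")
    return bet / gain

# Staking 30 for a potential gain of 100 suggests the bettor assigns
# roughly p = 0.3 to the transaction succeeding.
print(subjective_probability(bet=30, gain=100))  # → 0.3
```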

Uncertain probabilities (subjective logic)

The logic for uncertain probabilities (subjective logic) was introduced by Jøsang, [30] [31] in which uncertain probabilities are called subjective opinions. This concept combines a probability distribution with uncertainty, so that each opinion about trust can be viewed as a distribution of probability distributions, where each distribution is qualified by an associated uncertainty. The foundation of the trust representation is that an opinion (a piece of evidence or a confidence) about trust can be represented as a four-tuple (trust, distrust, uncertainty, base rate), where trust, distrust and uncertainty must add up to one and hence are dependent through additivity.
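A minimal sketch of such a four-tuple opinion, enforcing the additivity constraint and computing the projected probability E = belief + base_rate × uncertainty as in subjective logic; the class and field names are this sketch's own.

```python
# Minimal sketch of a subjective-logic opinion as the four-tuple described
# above: (belief, disbelief, uncertainty, base_rate), constrained so that
# belief + disbelief + uncertainty = 1.

from dataclasses import dataclass

@dataclass(frozen=True)
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def __post_init__(self):
        total = self.belief + self.disbelief + self.uncertainty
        if abs(total - 1.0) > 1e-9:
            raise ValueError("belief + disbelief + uncertainty must equal 1")

    def expectation(self):
        """Projected probability: uncertainty collapsed via the base rate."""
        return self.belief + self.base_rate * self.uncertainty

op = Opinion(belief=0.6, disbelief=0.1, uncertainty=0.3)
print(op.expectation())  # → 0.75
```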

Subjective logic is an example of computational trust where uncertainty is inherently embedded in the calculation process and is visible at the output. It is not the only such formalism: it is possible, for example, to use a similar quadruple (trust, distrust, unknown, ignorance) to express the value of confidence, [32] as long as the appropriate operations are defined. Despite the sophistication of the subjective-opinion representation, the value of a four-tuple for a particular actor or event can easily be derived from a series of binary opinions about that actor or event, thus providing a strong link between this formal metric and empirically observable behaviour.

Finally, there are CertainTrust [33] and CertainLogic. [34] Both share a common representation, which is equivalent to subjective opinions but is based on three independent parameters named 'average rating', 'certainty', and 'initial expectation'. Hence, there is a bijective mapping between the CertainTrust triplet and the four-tuple of subjective opinions.
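A sketch of that bijective mapping, assuming the correspondence b = t·c, d = (1−t)·c, u = 1−c, a = f between a CertainTrust-style triplet (average rating t, certainty c, initial expectation f) and a subjective opinion (b, d, u, a); the function names are this sketch's own.

```python
# Sketch of the bijective mapping between a CertainTrust-style triplet
# (average rating t, certainty c, initial expectation f) and a subjective
# opinion (belief, disbelief, uncertainty, base rate), assuming the
# correspondence b = t*c, d = (1-t)*c, u = 1-c, a = f.

def certaintrust_to_opinion(t, c, f):
    return (t * c, (1 - t) * c, 1 - c, f)

def opinion_to_certaintrust(b, d, u, a):
    c = 1 - u
    t = b / c if c > 0 else 0.5   # rating is undefined at full uncertainty
    return (t, c, a)

b, d, u, a = certaintrust_to_opinion(t=0.8, c=0.5, f=0.6)
print((b, d, u, a))                         # → (0.4, 0.1, 0.5, 0.6)
print(opinion_to_certaintrust(b, d, u, a))  # round-trips to (0.8, 0.5, 0.6)
```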

Fuzzy logic

Fuzzy systems [35] used as trust metrics can link natural-language expressions with a meaningful numerical analysis.

The application of fuzzy logic to trust has been studied in the context of peer-to-peer networks [36] to improve peer rating. For grid computing, [37] it has been demonstrated that fuzzy logic allows security issues to be solved in a reliable and efficient manner.
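A minimal sketch of the fuzzification step that links natural-language trust labels to a numeric trust level; the triangular membership functions and their breakpoints are illustrative, not taken from the cited systems.

```python
# Sketch: fuzzy membership functions mapping a numeric trust level in [0, 1]
# onto natural-language labels. Shapes and breakpoints are illustrative.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_trust(x):
    """Degree to which trust level x matches each linguistic label."""
    return {
        "low":    triangular(x, -0.5, 0.0, 0.5),
        "medium": triangular(x,  0.0, 0.5, 1.0),
        "high":   triangular(x,  0.5, 1.0, 1.5),
    }

print(fuzzify_trust(0.7))  # mostly "medium", partly "high"
```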

Properties of trust metrics

The set of properties that a trust metric should satisfy varies depending on the application area. The following is a list of typical properties.

Transitivity

Transitivity is a highly desired property of a trust metric. [38] In situations where A trusts B and B trusts C, transitivity concerns the extent to which A trusts C. Without transitivity, trust metrics are unlikely to be used to reason about trust in more complex relationships.

The intuition behind transitivity follows everyday experience of 'friends of a friend' (FOAF), the foundation of social networks. However, attempts to attribute exact formal semantics to transitivity reveal problems related to the notion of a trust scope or context. For example, [39] defines conditions for the limited transitivity of trust, distinguishing between direct trust and referral trust. Similarly, [40] shows that simple trust transitivity does not always hold, based on evidence from the Advogato model, and consequently proposes new trust metrics.

The simple, holistic approach to transitivity is characteristic of social networks (FOAF, Advogato). It follows everyday intuition and assumes that trust and trustworthiness apply to the whole person, regardless of the particular trust scope or context: if one can be trusted as a friend, one can also be trusted to recommend or endorse another friend. Under this approach, transitivity is semantically valid without any constraints and follows naturally.

The more thorough approach distinguishes between different scopes/contexts of trust and does not allow transitivity between contexts that are semantically incompatible or inappropriate. A contextual approach may, for instance, distinguish between trust in a particular competence, trust in honesty, trust in the ability to formulate a valid opinion, and trust in the ability to provide reliable advice about other sources of information. A contextual approach is often used in trust-based service composition. [41] The understanding that trust is contextual (has a scope) is a foundation of collaborative filtering.
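The simplest holistic transitivity rule can be sketched as multiplication of trust values along a referral chain. As the discussion above makes clear, this is only defensible under the holistic, context-free assumptions, and the rule itself is an illustration rather than any particular cited metric.

```python
# Sketch: the simplest holistic transitivity rule, multiplying trust values
# (in [0, 1]) along a chain A → B → C. Valid only under the holistic,
# context-free assumptions discussed above.

from functools import reduce

def chained_trust(*links):
    """Discount trust along a referral chain; derived trust can only shrink."""
    return reduce(lambda acc, t: acc * t, links, 1.0)

# A trusts B at 0.9, B trusts C at 0.8: A's derived trust in C is lower
# than either direct link.
print(chained_trust(0.9, 0.8))  # → 0.72 (up to floating-point rounding)
```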

Operations

For a formal trust metric to be useful, it should define a set of operations over trust values such that the result of those operations is again a trust value. Usually at least two elementary operators are considered: one that combines opinions from several sources about the same entity (fusion), and one that propagates trust along a chain of intermediaries (discounting).

The exact semantics of both operators are specific to the metric. Even within one representation, there is still room for a variety of semantic interpretations. For example, in the representation as the logic for uncertain probabilities, trust fusion operations can be interpreted by applying different rules (cumulative fusion, averaging fusion, constraint fusion (Dempster's rule), Yager's modified Dempster's rule, Inagaki's unified combination rule, Zhang's centre combination rule, Dubois and Prade's disjunctive consensus rule, etc.). Each interpretation leads to different results, depending on the assumptions made about trust fusion in the particular situation being modelled. See [42] [43] for detailed discussions.
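As an illustration of one such rule, the sketch below implements cumulative fusion of two opinions (belief, disbelief, uncertainty) for the non-dogmatic case (both uncertainties strictly positive), assuming equal base rates so that the base-rate component can be omitted.

```python
# Sketch: cumulative fusion of two subjective opinions
# (belief, disbelief, uncertainty), following the cumulative rule for the
# non-dogmatic case and assuming equal base rates (omitted here).

def cumulative_fusion(op1, op2):
    b1, d1, u1 = op1
    b2, d2, u2 = op2
    assert u1 > 0 and u2 > 0, "dogmatic opinions need the limit form of the rule"
    k = u1 + u2 - u1 * u2
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)

fused = cumulative_fusion((0.6, 0.1, 0.3), (0.2, 0.4, 0.4))
print(fused)       # fused uncertainty drops below both inputs' uncertainty
print(sum(fused))  # components still sum to 1 (up to rounding)
```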

Scalability

The growing size of trust networks makes scalability another desired property: it must be computationally feasible to calculate the metric for large networks. Scalability usually places two requirements on the metric.

Attack resistance

Attack resistance is an important non-functional property of trust metrics, reflecting their ability not to be overly influenced by agents who try to manipulate the metric and who participate in bad faith (i.e. who aim to abuse the presumption of trust).

The free-software developer resource Advogato is based on a novel approach to attack-resistant trust metrics due to Raph Levien. Levien observed that Google's PageRank algorithm can be understood as an attack-resistant trust metric rather similar to the one behind Advogato.
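A toy power-iteration PageRank over a small "who trusts whom" graph illustrates the analogy: rank flows along trust edges, so a node's standing depends on the standing of those who vouch for it, which limits what a cluster of bad-faith accounts can confer on itself. The graph and damping factor below are illustrative.

```python
# Sketch: power-iteration PageRank over a toy trust graph, illustrating the
# analogy between PageRank and attack-resistant trust metrics. The graph
# and damping factor are illustrative; no dangling nodes are handled.

def pagerank(graph, damping=0.85, iterations=50):
    """graph: node -> list of trusted/linked nodes (every node has out-links)."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in graph.items():
            share = damping * rank[n] / len(outs)  # split rank over out-links
            for m in outs:
                new[m] += share
        rank = new
    return rank

toy = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
ranks = pagerank(toy)
print(max(ranks, key=ranks.get))  # → b  (endorsed by two nodes)
```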


References

  1. Castelfranchi, C. and Falcone, R. (2000) Trust is much more than subjective probability: Mental components and sources of trust. Proc. of the 33rd Hawaii Int. Conf. on System Sciences (HICSS2000). Vol. 6.
  2. Ziegler, C.-N., and Lausen, G. (2005) Propagation Models for Trust and Distrust in Social Networks. Inf. Syst. Frontiers vol. 7, no. 4–5, pp. 337–358
  3. Marsh, S. P. (1994) Formalising Trust as a Computational Concept. University of Stirling PhD thesis.
  4. Zimmermann, P. (1993) Pretty Good Privacy User's Guide, Volume I and II. Distributed with the PGP software
  5. McCroskey, J. C. (1966) Scales for the Measurement of Ethos. Speech Monographs, 33, 65–72.
  6. Rempel, J. K., Holmes, J. G. and Zanna, M. P. (1985): Trust in close relationships. Journal of Personality and Social Psychology. vol. 49 no. 1, pp. 95–112. 1985.
  7. Rotter, J. B. (1971) Generalized expectancies for interpersonal trust. American Psychologist, vol. 26 no. 5 pp. 443–52.
  8. Cummings, L. L., and Bromiley, P. (1996) The Organizational Trust Inventory (OTI): Development and Validation. In: Kramer, R. M. and Tyler, T. R.: Trust in Organizations. Sage Publications.
  9. McKnight, D. H., Chervany, N. L. (2001) Conceptualizing Trust: A Typology and E-Commerce Customer Relationships Model. Proc. of the 34th Hawaii Int. Conf. on System Sciences
  10. Corritore, C. L. et al (2005) Measuring Online Trust of Websites: Credibility, Perceived Ease of Use, and Risk. In: Proc. of Eleventh Americas Conf. on Information Systems, Omaha, NE, USA pp. 2419–2427.
  11. Keser, C. (2003) Experimental games for the design of reputation management systems. IBM Systems J., vol. 42, no. 3. Article
  12. Berg, J., Dickhaut, J., and McCabe, K. (1995) Trust, Reciprocity, and Social History, Games and Economic Behavior 10, 122–142
  13. Bohnet, I., and Meier, S. (2005) Deciding to distrust. KSG Working Paper No. RWP05-049.
  14. Airiau, S., and Sen, S. (2006) Learning to Commit in Repeated Games. In: Proc. of the Fifth Int. Joint Conf. on Autonomous Agents and Multiagent Systems (AAMAS06).
  15. Camerer, C., and Weigelt, K. (1988) Experimental Tests of a Sequential Equilibrium Reputation Model. Econometrica 56(1), pp. 1–36.
  16. Fehr, E., Kirchsteiger, G., and Riedl, A. (1993) Does Fairness Prevent Market Clearing? An Experimental Investigation. Quarterly Journal of Economics 108(May), pp. 437–60.
  17. Poundstone, W. (1992) Prisoner's Dilemma. Doubleday, NY. 1992
  18. Bolton, G. E., Katok, E., and Ockenfels, A. (2003) How Effective are Electronic Reputation Mechanisms? An Experimental Investigation.
  19. Adams, C., and Lloyd, S. (2002) Understanding PKI: Concepts, Standards, and Deployment Considerations. Sams.
  20. Zimmermann, P. (ed.) (1994) PGP User's Guide. MIT Press, Cambridge.
  21. Grandison, T. (2003) Trust Management for Internet Applications. PhD thesis, University of London, UK.
  22. Mui, L. et al. (2002) A Computational Model of Trust and Reputation. 35th Hawaii Int. Conf. on System Science (HICSS).
  23. Richters, O., and Peixoto, T. P. (2011) Trust Transitivity in Social Networks. PLoS ONE 6(4): e18384. doi:10.1371/journal.pone.0018384
  24. Marsh, S. P. (1994) Formalising Trust as a Computational Concept. University of Stirling PhD thesis.
  25. Gujral, N., DeAngelis, D., Fullam, K. K., and Barber, K. S. (2006) Modelling Multi-Dimensional Trust. In: Proc. of Fifth Int. Conf. on Autonomous Agents and Multiagent Systems AAMAS-06. Hakodate, Japan.
  26. Nielsen, M. and Krukow, K. (2004) On the Formal Modelling of Trust in Reputation-Based Systems. In: Karhumaki, J. et al. (Eds.): Theory Is Forever, Essays Dedicated to Arto Salomaa on the Occasion of His 70th Birthday. Lecture Notes in Computer Science 3113 Springer.
  27. Abdul-Rahman, A. (2005) A Framework for Decentralised trust Reasoning. PhD Thesis.
  28. Cofta, P. (2006) Distrust. In: Proc. of Eight Int. Conf. on Electronic Commerce ICEC'06, Fredericton, Canada. pp. 250–258.
  29. Gambetta, D. (2000) Can We Trust Trust? In: Gambetta, D. (ed.) Trust: Making and Breaking Cooperative Relations, electronic edition, Department of Sociology, University of Oxford, chapter 13, pp. 213–237,
  30. Jøsang, A. (2001) A Logic for Uncertain Probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 9, pp. 279–311, June 2001.
  31. Jøsang, A. (2016)
  32. Ding, L., Zhou, L., and Finin, T. (2003) Trust Based Knowledge Outsourcing for Semantic Web Agents. 2003 IEEE / WIC Int. Conf. on Web Intelligence, (WI 2003), Halifax, Canada.
  33. Ries, S. (2009) Extending bayesian trust models regarding context-dependence and user friendly representation. Proceedings of the 2009 ACM symposium on Applied Computing (ACM SAC).
  34. Ries, S.; Habib, S. M.; Mühlhäuser, M.; Varadharajan, V. (2011) Certainlogic: A logic for modeling trust and uncertainty (Short Paper). Proceedings of the 4th International Conference on Trust and Trustworthy Computing (TRUST), Springer.
  35. Falcone, R., Pezzulo, G., and Castelfranchi, C. (2003) A Fuzzy Approach to a Belief-Based Trust Computation. In: Falcone, R. et al. (Eds.): AAMAS 2002 Ws Trust, Reputation, LNAI 2631, pp. 73–86.
  36. Damiani, E. et al. (2003) Fuzzy logic techniques for reputation management in anonymous peer-to-peer systems. In Proc. of the Third Int. Conf. in Fuzzy Logic and Technology, Zittau, Germany.
  37. Song, S., Hwang, K., and Macwan, M. (2004) Fuzzy Trust Integration for Security Enforcement in Grid Computing. In: Proc. of IFIP Int. Symposium on Network and Parallel Computing (NPC-2004). LNCS 3222. pp. 9–21.
  38. Richters, O., and Peixoto, T. P. (2011) Trust Transitivity in Social Networks. PLoS ONE 6(4): e18384. doi:10.1371/journal.pone.0018384
  39. Jøsang, A., and Pope, S. (2005) Semantic Constraints for Trust Transitivity. Second Asia-Pacific Conference on Conceptual Modelling (APCCM2005).
  40. D. Quercia, S. Hailes, L. Capra. Lightweight Distributed Trust Propagation. ICDM'07.
  41. Chang, E., Dillion, T., and Hussain, F. K. (2006) Trust and Reputation for Service-Oriented Environments: Technologies for Building Business Intelligence and Consumer Confidence. John Wiley & Sons, Ltd.
  42. Sentz, K. (2002) Combination of Evidence in Dempster–Shafer Theory Archived 2007-09-29 at the Wayback Machine . PhD Thesis.
  43. Jøsang, A. (2016)
