Trust on first use

Trust on first use (TOFU), or trust upon first use (TUFU), is an authentication scheme [1] used by client software that needs to establish a trust relationship with an unknown or not-yet-trusted endpoint. In a TOFU model, the client tries to look up the endpoint's identifier, usually either the endpoint's public identity key or the fingerprint of that identity key, in its local trust database. If no identifier exists yet for the endpoint, the client software either prompts the user to confirm that they have verified the purported identifier is authentic, or, if manual verification is not assumed to be possible in the protocol, simply trusts the identifier that was given and records the trust relationship in its trust database. If a different identifier is received from the opposing endpoint in a subsequent connection, the client software considers it untrusted.
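The decision logic described above can be sketched in a few lines of Python. This is an illustrative outline rather than the implementation of any particular client; the trust database is modeled as a plain dictionary and all names are hypothetical:

```python
# Minimal sketch of the TOFU decision; names are illustrative only.
trust_db: dict[str, str] = {}  # endpoint -> identifier recorded on first use

def tofu_check(endpoint: str, identifier: str, interactive: bool = True) -> bool:
    """Return True if the connection should be treated as trusted."""
    known = trust_db.get(endpoint)

    if known is None:
        # First use: either ask the user to confirm out-of-band verification,
        # or, if manual verification is not assumed possible, blindly trust.
        if interactive:
            answer = input(f"New identifier {identifier} for {endpoint}. Trust it? [y/N] ")
            if answer.strip().lower() != "y":
                return False
        trust_db[endpoint] = identifier
        return True

    if known == identifier:
        return True  # same identifier as on first use

    # The identifier changed since first use: treat the endpoint as untrusted.
    print(f"WARNING: identifier for {endpoint} changed from {known} to {identifier}")
    return False
```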

TOFU implementations

In the SSH protocol, most client software (though not all [2]) will, upon connecting to a not-yet-trusted server, display the server's public key fingerprint and prompt the user to verify that they have indeed authenticated it over an authenticated channel. The client then records the trust relationship in its trust database. A new identifier causes a blocking warning that requires manual removal of the currently stored identifier.
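The fingerprint shown at this prompt is derived from the server's public key. As a minimal sketch, the SHA256-style fingerprint displayed by OpenSSH clients is a SHA-256 digest of the raw key blob, base64-encoded with the padding stripped; the function name below is illustrative:

```python
import base64
import hashlib

def openssh_sha256_fingerprint(public_key_line: str) -> str:
    """Fingerprint a key given as '<key-type> <base64-blob> [comment]',
    e.g. a line from an id_ed25519.pub file."""
    key_blob = base64.b64decode(public_key_line.split()[1])
    digest = hashlib.sha256(key_blob).digest()
    # OpenSSH displays the digest base64-encoded with '=' padding removed.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
```

The user compares the displayed value with a fingerprint obtained over an authenticated channel (for example, one published by the server administrator) before answering the trust prompt.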

The XMPP client Conversations uses Blind Trust Before Verification, [3] in which all identifiers are blindly trusted until the user demonstrates the will and ability to authenticate endpoints by scanning the QR-code representation of the identifier. After the first identifier has been scanned, the client displays a shield symbol for messages from authenticated endpoints and a red background for others.

In Signal, the endpoints initially blindly trust the identifier and display non-blocking warnings when it changes. The identifier can be verified either by scanning a QR code or by exchanging the decimal representation of the identifier (called a Safety Number) over an authenticated channel. The identifier can then be marked as verified, which changes the identifier-change warnings from non-blocking to blocking. [4]
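A hedged sketch of this warning policy, with illustrative names rather than Signal's actual code: identifiers start out blindly trusted, a key change then produces a non-blocking warning, and marking the identifier as verified upgrades the warning to blocking:

```python
from enum import Enum, auto

class TrustState(Enum):
    BLINDLY_TRUSTED = auto()  # default after first use
    VERIFIED = auto()         # user compared the QR code / Safety Number

def on_identifier_change(state: TrustState) -> str:
    """Return the kind of warning to show when the peer's identifier changes."""
    if state is TrustState.VERIFIED:
        # Blocking: messaging is halted until the user re-verifies or
        # explicitly accepts the new identifier.
        return "blocking warning"
    # Non-blocking: an advisory notice is shown but messaging continues.
    return "non-blocking warning"
```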

In applications such as Jami and Ricochet, the identifier is the user's call sign itself. The ID can be exchanged over any channel, but until the identifier is verified over an authenticated channel, it is effectively blindly trusted. Changing the identifier also requires changing the account, so a MITM attack against the same account requires access to the endpoint's private key.

In WhatsApp, the endpoint initially blindly trusts the identifier, and by default no warning is displayed when the identifier changes. If the user demonstrates the will and ability to authenticate endpoints by accessing the key fingerprint (called a Security Code), the client prompts the user to enable non-blocking warnings for identifier changes. The WhatsApp client does not allow the user to mark the identifier as verified.

In Telegram's optional secret chats, the endpoints blindly trust the identifier. A changed identifier spawns a new secret chat window instead of displaying any warning. The identifiers can be verified by comparing the visual or hexadecimal representation of the identifier. The Telegram client does not allow the user to mark the identifier as verified.

In Keybase, the clients can cross-sign each other's keys, which means trusting a single identifier allows verification of multiple identifiers. Keybase acts as a trusted third party that verifies the link between a Keybase account and the account's signature chain, which contains the identifier history. The identifier used in Keybase is either the hash of the root of the user's signature chain or the Keybase account name tied to it. Until the user verifies the authenticity of the signature chain's root hash (or the Keybase account) over an authenticated channel, the account and its associated identifiers are essentially blindly trusted, and the user is susceptible to a MITM attack.[citation needed]
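The underlying chain-of-trust idea can be illustrated with a generic hash chain: once the root hash has been verified out-of-band, every later link can be checked against it. The sketch below is a simplification for illustration only and does not reflect Keybase's actual link format or its signature checks:

```python
import hashlib
import json

def link_hash(link: dict) -> str:
    # Hash a canonical JSON encoding of the link.
    return hashlib.sha256(json.dumps(link, sort_keys=True).encode()).hexdigest()

def verify_chain(links: list[dict], trusted_root_hash: str) -> bool:
    """Check that each link's 'prev' field carries the hash of its predecessor,
    starting from a root whose hash was verified over an authenticated channel."""
    if not links or link_hash(links[0]) != trusted_root_hash:
        return False
    prev = trusted_root_hash
    for link in links[1:]:
        if link.get("prev") != prev:
            return False
        prev = link_hash(link)
    return True
```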

Model strengths and weaknesses

The single largest strength of any TOFU-style model is that a human being must initially validate every interaction. A common application of this model is the use of ssh-rpc 'bot' users between computers, whereby public keys are distributed to a set of computers for automated access from centralized hosts. The TOFU aspect of this application forces a sysadmin (or other trusted user) to validate the remote server's identity upon first connection.

For end-to-end encrypted communication, the TOFU model allows authenticated encryption without the complex procedure of obtaining personal certificates, which are vulnerable to CA compromise. Compared to the Web of Trust, TOFU has less maintenance overhead.

The largest weakness of TOFU variants that require manual verification is their inability to scale to large groups or computer networks. The maintenance overhead of keeping track of identifiers for every endpoint can quickly scale beyond the capabilities of the users.

In environments where the authenticity of the identifier cannot be verified easily enough (for example, the IT staff of a workplace or educational facility might be hard to reach), users tend to blindly trust the identifier of the opposing endpoint. Accidentally approved identifiers of attackers may also be hard to detect if the man-in-the-middle attack persists.

Because a new endpoint always involves a new identifier, no warning about a potential attack is displayed. This has caused a misconception among users that it is safe to proceed without verifying the authenticity of the initial identifier, regardless of whether the identifier is presented to the user.

Warning fatigue has pushed many messaging applications to remove blocking warnings to prevent users from reverting to less secure applications that do not feature end-to-end encryption in the first place.

Out-of-sight identifier verification mechanisms reduce the likelihood that secure authentication practices are discovered and adopted by the users.

First known use of the term

The first known formal use of the term TOFU or TUFU was by CMU researchers Dan Wendlandt, David Andersen, and Adrian Perrig in their research paper "Perspectives: Improving SSH-Style Host Authentication With Multi-Path Probing", published in 2008 at the USENIX Annual Technical Conference. [5]

Moxie Marlinspike mentioned Perspectives and the term TOFU in the DEF CON 18 proceedings, with reference to comments made by Dan Kaminsky during the panel discussion "An Open Letter, A Call to Action". An audience member raised a suggestion implying the superiority of the SSH public key infrastructure (PKI) model over the SSL/TLS PKI model, to which Moxie replied:

Anonymous: "...so, if we dislike the certificate model in the (tls) PKI, but we like, say, the SSH PKI, which seems to work fairly well, basically the fundamental thing is: if I give my data to someone, I trust them with the data. So I should be remembering their certificate. If someone else comes in with a different certificate, signed by a different authority, I still don't trust them. And if we did it that way, then that would solve a lot of the problems- it would solve the problems of rogue CAs, to some extent, it wouldn't help you with the initial bootstrapping but the initial bootstrapping would use the initial model, and then for continued interaction with the site you would use the ssh model which would allow you continued strength beyond what we have now. So the model we have now can be continued to be re-used, for only the initial acceptance. So why don't we do this?"

Dan: "So, I'm a former SSH developer, and let me walk very quickly, every time there's an error in the ssh key generation, the user is asked, 'please type yes to trusting this new key', or, 'please go into your known hosts file and delete that value', and every last time they do it, because it's always the fault of a server misconfiguration. The SSH model is cool, it don't scale".

Moxie: "And I would just add, what you're talking about is called 'Trust on First Use', or 'tofu', and there's a project that I'm involved in called perspectives, that tries to leverage that to be less confusing than the pure SSH model, and I think it's a really great project and you should check it out if you're interested in alternatives to the CA system."

Prior work

The topics of trust, validation, and non-repudiation are fundamental to all work in the field of cryptography and digital security.


References

  1. Walfield, Koch. "TOFU for OpenPGP". EuroSec '16, April 18–21, 2016.
  2. "What is the secure/correct way of adding github.com to the known_hosts file?". Retrieved November 19, 2020.
  3. Daniel Gultsch (November 20, 2016). "Blind Trust Before Verification". Retrieved January 19, 2017.
  4. "Safety number updates".
  5. "Perspectives: Improving SSH-style Host Authentication with Multi-Path Probing".