August 5, 2009

Direct Anonymous Attestation (DAA)

Posted in Uncategorized tagged at 16:38 by Thomas Groß

Direct Anonymous Attestation (DAA) allows a user to convince a verifier that she uses a platform that has embedded a certified hardware module. The protocol protects the user’s privacy: if she talks to the same verifier twice, the verifier is not able to tell whether or not he communicates with the same user as before or with a different one.

This scenario arose in the context of the Trusted Computing Group (TCG). TCG is an industry standardization body that aims to develop and promote an open industry standard for trusted computing hardware and software building blocks to enable more secure data storage, online business practices, and online commerce transactions while protecting privacy and individual rights.

We have worked with TCG and various privacy groups on the requirements of such a scheme and have developed an efficient protocol, called the direct anonymous attestation protocol. The scenario is reminiscent of group signature schemes. In fact, our protocol is based on a state-of-the-art group signature scheme. However, a number of research questions still had to be solved before the protocol could be applied in practice. Direct anonymous attestation relies on the Decisional Diffie-Hellman assumption for the user’s privacy and on the Strong RSA assumption for security. The protocol has been standardized in version 1.2 of the TCG’s TPM specification. Chips implementing the protocol are currently being built, and the infrastructure around the protocol is being defined. A paper [1] describing the protocol appeared at ACM CCS 04, and a paper [2] describing how to use the protocol in the most privacy-friendly way was presented at ESORICS 2004.

Identity Mixer formed the basis of the DAA protocol.


The Trusted Computing Group (TCG) was facing the following problem. Consider a (trusted) hardware module (TPM) that is integrated into a platform such as a laptop or a mobile phone. Assume that the user of such a platform communicates with a verifier who wants to be assured that the user indeed uses a platform containing such a TPM, i.e., the verifier wants the TPM to authenticate itself. However, the user wants her privacy protected and therefore requires that the verifier only learns that she uses a TPM but not which particular one — otherwise all her transactions would become linkable to each other.

In principle, this privacy problem could be solved using any standard public key authentication scheme: One would generate a single secret/public key pair, and then embed the same secret key into each TPM. The verifier and the TPM would then run the authentication protocol. Because all TPMs use the same key, they are indistinguishable. However, this approach would never work in practice: as soon as one hardware module (TPM) gets compromised and the secret key extracted and published, verifiers can no longer distinguish between real TPMs and fake ones. Therefore, detection of rogue TPMs needs to be a further requirement.
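The shared-key idea and its failure mode can be sketched as follows. This is a toy model, not TCG's actual design: the HMAC-based challenge-response and all names are illustrative assumptions standing in for a public key authentication scheme.

```python
# Toy sketch (assumed names, not the TCG design): one global secret
# embedded in every TPM, used to answer a verifier's challenge.
import hmac, hashlib, os

GLOBAL_SECRET = os.urandom(32)  # the same secret embedded in each TPM

def tpm_authenticate(challenge: bytes, secret: bytes = GLOBAL_SECRET) -> bytes:
    # Every TPM answers the verifier's challenge under the shared key.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(GLOBAL_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

c = os.urandom(16)
# Two different TPMs produce identical responses: perfect unlinkability ...
assert tpm_authenticate(c) == tpm_authenticate(c)
# ... but once the key leaks, a fake "TPM" is indistinguishable from a real one.
leaked_secret = GLOBAL_SECRET
assert verifier_check(c, tpm_authenticate(c, leaked_secret))  # forgery accepted
```

The final assertion is exactly the problem: after a single compromise, verifiers accept responses from anyone holding the leaked key, which is why rogue detection becomes a requirement.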

The solution first developed by TCG uses a trusted third party, the so-called privacy certification authority (Privacy CA), and works as follows. Each TPM generates an RSA key pair called an Endorsement Key (EK). The Privacy CA is assumed to know the public parts of the Endorsement Keys of all (valid) TPMs. Now, whenever a TPM needs to authenticate itself to a verifier, it generates a second RSA key pair, called an Attestation Identity Key (AIK), sends the AIK public key to the Privacy CA, and authenticates this public key w.r.t. its EK. If the Privacy CA finds this EK in its list, it issues a certificate on the TPM’s AIK. The TPM can then provide this certificate to the verifier and authenticate itself w.r.t. the AIK. In this solution, there are two possibilities to detect a rogue TPM: 1) If the EK secret key was extracted from a TPM, distributed, and then detected and announced as a rogue secret key, the Privacy CA can compute the corresponding public key and remove it from its list of valid Endorsement Keys. 2) If the Privacy CA gets many requests that are authorized using the same Endorsement Key, it might want to reject these requests. The exact threshold on requests that are allowed before a TPM is tagged rogue depends of course on the actual environment and applications, and will in practice probably be determined by some risk-management policy.
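The Privacy-CA flow can be modeled in a few lines. This is a hedged toy model: certificate "signing" is stood in for by an HMAC under a CA secret, and the request threshold of 100 is an invented example of a risk-management policy, not a value from the specification.

```python
# Toy model of the Privacy-CA flow. Real EKs/AIKs are RSA key pairs and
# certificates are RSA signatures; here both are modeled with HMAC tags.
import hmac, hashlib, os

REQUEST_THRESHOLD = 100  # assumed risk-management policy, not from the spec

class PrivacyCA:
    def __init__(self, valid_ek_pubs):
        self.valid_eks = set(valid_ek_pubs)  # known-good Endorsement Keys
        self.rogue = set()
        self.requests = {}                   # EK -> certification request count
        self.key = os.urandom(32)            # CA signing secret (toy)

    def certify_aik(self, ek_pub, aik_pub):
        # Possibility 1: a published rogue EK is removed from the valid list.
        if ek_pub not in self.valid_eks or ek_pub in self.rogue:
            return None
        # Possibility 2: too many requests under one EK tags it as rogue.
        self.requests[ek_pub] = self.requests.get(ek_pub, 0) + 1
        if self.requests[ek_pub] > REQUEST_THRESHOLD:
            self.rogue.add(ek_pub)
            return None
        return hmac.new(self.key, aik_pub, hashlib.sha256).digest()

    def verify_cert(self, aik_pub, cert):
        expected = hmac.new(self.key, aik_pub, hashlib.sha256).digest()
        return hmac.compare_digest(cert, expected)

ca = PrivacyCA({b"ek-alice"})
cert = ca.certify_aik(b"ek-alice", b"aik-1")
assert cert is not None and ca.verify_cert(b"aik-1", cert)
assert ca.certify_aik(b"ek-unknown", b"aik-2") is None  # unlisted EK rejected
```

Note that `certify_aik` necessarily sees which EK requested which AIK, so the CA holds the linkage between a TPM's identity and its pseudonymous keys, which is precisely the trust assumption criticized below.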

This solution has the obvious drawback that the Privacy CA needs to be involved in every transaction and must thus be highly available on the one hand, while still being as secure as an ordinary certification authority that normally operates off-line, on the other hand. Moreover, if the Privacy CA and the verifier collude, or if the Privacy CA’s transaction records are revealed to the verifier by some other means, the verifier will still be able to uniquely identify a TPM. Because of this, the Privacy CA solution was heavily criticized by most privacy groups and data protection commissioners. While the latter problem could be solved by using blind signatures for the Privacy CA’s certification, the first problem, the Privacy CA’s availability, would persist in such a solution.

Jointly with Intel and HP, we have developed an efficient protocol, called direct anonymous attestation, that provides a solution to this problem [1]. We have talked to a number of privacy groups and have worked with TCG to adopt it [3]. Direct anonymous attestation is based on the Ateniese et al. group signature scheme and on the Camenisch-Lysyanskaya anonymous credential system. In fact, our scheme can be seen as a group signature scheme without the capability to open signatures (i.e., without anonymity revocation) but with a mechanism to detect rogue members (TPMs in our case). More precisely, we also employ a suitable signature scheme to issue certificates on a membership public key generated by a TPM. Then, to authenticate as a group member, i.e., as a valid TPM, a TPM proves that it possesses a certificate on a public key for which it also knows the secret key. To allow a verifier to detect rogue TPMs, the TPM is further required to reveal, and prove correct, a value Nv = zeta^f, where f is its secret key and zeta is a generator of an algebraic group in which computing discrete logarithms is infeasible. However, the schemes our protocol draws from suffer from two problems that we had to solve for direct anonymous attestation to be truly practical. The first is that, to guarantee privacy for the users, the issuer of certificates would need to prove that its RSA modulus is a product of two *safe* primes; while it is known that such a statement can be proved, no truly practical way of doing so is known. The second is that the RSA modulus n would need to be chosen rather large for the discrete logarithm problem modulo n to be hard for the issuer as well. We solved the first problem with a novel way for the issuer to prove that all values produced by it lie in the correct subgroup (of the RSA group), and the second by using an additional algebraic group modulo a prime.
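The rogue-tagging check Nv = zeta^f can be illustrated directly. The sketch below uses deliberately tiny parameters (zeta = 4 generates the subgroup of order 11 in Z_23*); real groups are of course far larger, and the zero-knowledge proof that the same f underlies the TPM's certified key is omitted.

```python
# Toy-sized sketch of rogue tagging via N_v = zeta^f.
# zeta = 4 generates the order-11 subgroup of Z_23^*; real parameters
# are far larger, and the accompanying ZK proof is omitted.
p, zeta = 23, 4

rogue_secret_keys = {5, 7}  # extracted secret keys f announced as rogue
rogue_tags = {pow(zeta, f, p) for f in rogue_secret_keys}

def tpm_tag(f):
    # The TPM reveals N_v = zeta^f (and proves, in zero knowledge, that
    # this f matches its certified membership secret).
    return pow(zeta, f, p)

def verifier_accepts(N_v):
    # The verifier merely compares the revealed tag against the rogue list.
    return N_v not in rogue_tags

assert verifier_accepts(tpm_tag(3))      # honest TPM passes
assert not verifier_accepts(tpm_tag(5))  # a rogue key is caught
```

Because zeta is a generator of a group where discrete logarithms are hard, the tag reveals nothing about f beyond membership or non-membership in the rogue list.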

At IBM Zurich Research Lab, we have also proposed a way to use the protocol in the most privacy-friendly manner [2]. That is, if direct anonymous attestation is used such that detection of rogue TPMs and granting/requesting access to a resource are done in the same transaction, a service provider can profile a TPM’s user. As such profiling is privacy invasive, we have proposed a way to use direct anonymous attestation such that the detection of rogue TPMs and the actual granting/requesting of access are performed in two different, unlinkable transactions, providing the highest possible degree of privacy to users. While TCG does not standardize the way its technology is used, we have worked (and keep working) with TCG to put forth best practices for how to use its technology and, in particular, direct anonymous attestation.
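The transaction flow of that separation might be sketched as follows. This is a conceptual sketch only, not the construction of [2]: the rogue check and the access grant happen in two separate transactions bridged by a one-show token, and genuine unlinkability would additionally require a blind issuing protocol (elided here), since in this naive version the token value itself could link the two transactions.

```python
# Conceptual flow sketch (NOT the construction of [2]): separate the rogue
# check from the access grant, bridged by a one-show token. A real scheme
# needs blind issuing so the token value cannot link the transactions.
import hmac, hashlib, os

TOKEN_KEY = os.urandom(32)  # held by the rogue-checking service (assumed)
issued = set()              # MACs of tokens handed out and not yet spent

def rogue_check_transaction(N_v, rogue_tags):
    # Transaction 1: DAA verification plus rogue check; on success the
    # service hands out a token carrying no TPM identity.
    if N_v in rogue_tags:
        return None
    token = os.urandom(16)
    issued.add(hmac.new(TOKEN_KEY, token, hashlib.sha256).digest())
    return token

def access_transaction(token):
    # Transaction 2, later and over a fresh channel: redeem the token.
    mac = hmac.new(TOKEN_KEY, token, hashlib.sha256).digest()
    if mac in issued:
        issued.remove(mac)  # one-show: a token cannot be reused
        return True
    return False
```

The design point is that the service granting access in transaction 2 sees only a valid token, never the tag Nv, so it cannot profile the user across sessions.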
