[tahoe-dev] cleversafe says: 3 Reasons Why Encryption isOverrated
jresch at cleversafe.com
Tue Aug 4 20:24:30 UTC 2009
Brian Warner wrote:
> Zooko Wilcox-O'Hearn wrote:
> > Cleversafe has posted a series of blog entries entitled "3 Reasons
> > Why Encryption is Overrated".
> The AONT is a neat bit of crypto, but it seems to me that it's merely
> moving the problem elsewhere, not avoiding it completely. From what I
> can see of the Cleversafe architecture (i.e. the picture on
> http://dev.cleversafe.org/weblog/?p=111), they've got a system in which
> someone who can retrieve "k" shares can reconstruct the plaintext, so
> all of the policy around who-gets-to-read-what must be enforced by the
> servers. They express this policy when they decide whether or not to
> honor a request for one of those shares. The security of your file
> depends upon enough (N-k+1) of them deciding to say "no" to the "wrong"
> people; the availability of your file depends upon at least k of them
> saying "yes" to the "right" people. The means by which those servers
> make that decision is not described in the picture, but is a vital part
> of the access-control scheme (and probably depends upon the same sort of
> public-key encryption that the blog entry disparages).
The exact authentication mechanism is somewhat flexible: we use the Generic Security Services API (GSS-API) for authenticating to dispersed storage devices. GSS-API makes the technique pluggable, so new authentication protocols can be added easily, and several can even be supported simultaneously, without impacting our protocol or architecture. We have implemented two such GSS-API mechanisms in addition to Kerberos, which was already supported. Both mechanisms work much like authentication in SSH: an ephemeral Diffie-Hellman key exchange establishes a session key, and then either a username/password or a PKI mechanism is used to authenticate. Typically username/passwords are used only for human users, and the password is passed through to a manager machine to be verified. The manager machine is also responsible for publishing ACLs for "Vaults", which are logical groupings of data to which accounts may be given permissions. Machine-to-machine authentication nearly always uses PKI, since machines are capable of remembering keys while humans are not.
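To make the handshake above concrete, here is a minimal toy sketch of the SSH-style flow: an ephemeral Diffie-Hellman exchange establishes a session key, and a password proof bound to that session then authenticates the account. The group parameters, names, and proof construction are all illustrative assumptions, not Cleversafe's actual GSS-API mechanism.

```python
# Toy ephemeral-DH-then-password handshake. NOT a real protocol:
# the prime is far too small for production and the "proof" is a
# simplified stand-in for a real password-authentication step.
import hashlib
import secrets

P = 2**127 - 1   # toy prime; real systems use standardized DH groups
G = 3

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Client and server each generate an ephemeral keypair...
c_priv, c_pub = dh_keypair()
s_priv, s_pub = dh_keypair()

# ...exchange public values, and derive the same session key.
c_session = hashlib.sha256(str(pow(s_pub, c_priv, P)).encode()).digest()
s_session = hashlib.sha256(str(pow(c_pub, s_priv, P)).encode()).digest()
assert c_session == s_session

# The password never travels in the clear: the proof is bound
# to this particular session key.
def password_proof(session_key, username, password):
    return hashlib.sha256(session_key + username + b":" + password).hexdigest()

proof = password_proof(c_session, b"alice", b"s3cret")
```

In the design described above, the server side would forward such a proof to the manager machine for verification rather than checking it locally.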
Using PKI for authentication does not carry the same risks as using it to protect the confidentiality of data. Suppose some cryptographer announces a breakthrough in factoring integers tomorrow. If RSA keys are used only for authentication, their use can be suspended almost immediately by adding all accounts to the CRL. If they had been used to encrypt data at rest, things would be much more dire: all data would need to be decrypted and re-encrypted with some new mechanism as fast as possible. In a system like Tahoe, which potentially uses untrusted servers, the operators of those servers might use this opportunity to look at the data. In fact, they could run off with it and never return, or refuse to honor requests to re-encrypt the data and delete the old, now-vulnerable copies. It is incorrect to characterize my words as disparaging towards public-key cryptography. The goal of the post was to expose risks associated with traditional encryption systems; vulnerability to advances in mathematics and to quantum computers are two such risks one should be aware of when relying on public-key cryptography for long-term data security.
> I'd want to know more about how this undocumented access-control scheme
> works before I was willing to believe that the Cleversafe approach
> results in "fewer headaches". It is clear that the access-control policy
> can be changed without re-encrypting the original data, but depending
> upon how access is granted, it may be an even bigger hassle to change
> the access keys. I suspect there is some notion of a "user", represented
> by a private key, and that each server has a typical (user * file) ACL
> matrix, and that some superuser retains the ability to manipulate that
> matrix. The object-capability community has shown
> (http://waterken.sourceforge.net/aclsdont/) that this approach breaks
> down when trying to make sharing decisions among multiple users, and
> quickly becomes vulnerable to things like the Confused Deputy attack.
You are mostly right, but access to dispersed data is not granted in terms of files. Instead, data is logically partitioned into different vaults, each of which may represent a block device, hierarchically organized filesystem-like data, or a flat namespace of unorganized data. ACLs in our system are entirely vault-centric: each vault has zero or more users with permissions to read, write, delete, etc. data in that vault. One or more vault administrators can manipulate which permissions users have on different vaults. I am not sure how this approach is vulnerable to a confused deputy attack; the servers storing slices enforce vault ACLs on every message received from clients. Suppose there is a vault that two users have access to; could you describe how a confused deputy attack would take place in that scenario?
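The vault-centric model above can be sketched in a few lines. The names and data layout here are illustrative assumptions, not Cleversafe's actual API; the point is only that permissions attach to vaults, not to individual files, and every client message is checked against the vault's ACL.

```python
# Toy vault-centric ACL check: permissions are per (vault, user),
# never per file. Vault and user names are hypothetical.
vault_acls = {
    "finance-vault": {"alice": {"read", "write"}, "bob": {"read"}},
    "backup-vault":  {"bob": {"read", "write", "delete"}},
}

def authorize(user, vault, operation):
    """A slice server would run this check on every client message."""
    return operation in vault_acls.get(vault, {}).get(user, set())

assert authorize("alice", "finance-vault", "write")
assert not authorize("bob", "finance-vault", "write")   # bob is read-only
assert not authorize("alice", "backup-vault", "read")   # no entry at all
```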
> Another possibility is that there is some master node which makes the
> per-user access-control decisions, and all servers honor its requests.
> In this case the security of the files depends upon both the servers
> being able to distinguish the master's requests from an attacker's, and
> upon the master being able to distinguish one user's requests from
> another's. The issues with ACLs still apply, and both availability and
> security depend upon the master (which is now a single point of failure).
This master-based description is more accurate. In a dispersed storage network there is a manager machine which publishes configuration information, which we call a Registry, for the dispersed storage network. The storage nodes can distinguish valid configuration information because it is published via LDAP over SSL. You say that the security of the files depends on the master distinguishing requests from an attacker's, but that is not the case for our system. The information published by the manager is, for all intents and purposes, public, and it would not matter if an attacker gained access to it. The availability of the manager machine is required only to make configuration changes, since the other nodes cache the published information. While a compromised manager could give out invalid ACLs, such a compromise is more easily recovered from than a situation in which data encryption keys are compromised. We also have a design in place for eliminating the harm from this type of attack, but unfortunately I cannot reveal it at this time.
> In Tahoe, we made a deep design decision very early: the security of the
> system should not depend upon the behavior of the servers. We made that
> choice so that users could take advantage of storage space on a wider
> variety of servers, including ones that you don't trust to not peek
> inside your shares.
I think the goals and intended use of Tahoe differ slightly from those of Cleversafe. Whereas Tahoe is geared toward a service that allows anyone to store data, Cleversafe builds storage systems that are typically owned and operated by one organization. I think these differences in intended purpose drove our design decisions in different directions, with neither being necessarily better than the other, but each being best for its intended purpose. For example, the user of a public service typically wants full control over the confidentiality of the data they store; in an organization, it is typically the organization that wants control over the stored data, so a system where an end-user can leave the organization and leave the data in an irretrievable state would be undesirable.
> Tahoe shares (and the ciphertext of every file) are
> considered to be public knowledge. This fact improves system-wide
> reliability by enabling the use of additional tools without affecting
> any security properties:
> * servers can independently copy shares to new servers when they know
> they're going to leave the grid
That is useful. How does Tahoe prevent a departing server from spoofing the shares it really owns and creating many corrupt shares across the grid?
> * pairs of servers can be "buddies" for a single share (and challenge
> each other with integrity checks at random intervals)
Are these challenges like "What is the hash of bytes i to j of share x"? What happens if a server simply takes forever to respond to such a request, or ignores it? If one buddy reports another for failing the test, what prevents a server from lying about the failure? And how can you tell which one is lying if one buddy simultaneously corrupts its own data and reports the other for not having the right responses to the challenges?
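The range-hash challenge speculated about above is easy to sketch. This is a guess at the mechanism, not Tahoe's documented protocol: a buddy holding the same share can verify a response over a randomly chosen byte range without transferring the share itself.

```python
# Toy buddy integrity challenge: "hash bytes i..j of share x".
# The range is random so responses cannot be precomputed.
import hashlib
import secrets

def respond(share_bytes, i, j):
    return hashlib.sha256(share_bytes[i:j]).hexdigest()

share = secrets.token_bytes(4096)        # both buddies hold a copy

i = secrets.randbelow(2048)              # challenger picks a range
j = i + 1 + secrets.randbelow(2048)
expected = respond(share, i, j)          # challenger computes locally

answer = respond(share, i, j)            # honest buddy's reply
assert answer == expected

corrupt = bytearray(share)
corrupt[i] ^= 0xFF                       # flip a byte inside the range
assert respond(bytes(corrupt), i, j) != expected
```

Note this sketch only shows detection of corruption; it does not answer the harder questions raised above about timeouts or a buddy that lies about the other's responses.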
> * arbitrary parties can perform repair functions and generate new
> shares without seeing the plaintext
We have discovered that this (rebuilding data without seeing the plaintext) is possible even when only the AONT + dispersal is used. Again, I am unable to describe how this works at this time.
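For readers unfamiliar with the all-or-nothing transform itself (not the undisclosed rebuild technique above), here is a toy version of Rivest's package transform. It uses SHA-256 as its only primitive for brevity, where a real deployment would use AES-256; all parameters are illustrative.

```python
# Toy package-transform AONT: without *every* output block,
# the inner key -- and hence every message block -- is unrecoverable.
import hashlib
import secrets

BLOCK = 32  # SHA-256 output size

def prf(key, i):
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def aont_encode(message):
    blocks = [message[i:i + BLOCK] for i in range(0, len(message), BLOCK)]
    blocks[-1] = blocks[-1].ljust(BLOCK, b"\x00")       # toy padding
    k = secrets.token_bytes(BLOCK)                      # random inner key
    cipher = [xor(m, prf(k, i)) for i, m in enumerate(blocks)]
    # Final block hides the key behind a digest of every ciphertext block:
    mask = k
    for i, c in enumerate(cipher):
        mask = xor(mask, prf(c, i))
    return cipher + [mask]

def aont_decode(package):
    *cipher, mask = package
    k = mask
    for i, c in enumerate(cipher):
        k = xor(k, prf(c, i))
    return b"".join(xor(c, prf(k, i)) for i, c in enumerate(cipher))

msg = b"all-or-nothing: missing any block makes the rest useless".ljust(64, b".")
pkg = aont_encode(msg)
assert aont_decode(pkg) == msg
```

Dispersing the resulting blocks with a k-of-n code then means fewer than k slices reveal nothing, without a separately managed encryption key.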
> * "helpers" can safely perform the erasure coding/decoding for you, to
> offload bandwidth and CPU effort to a more suitable machine
> * third-party "relay servers" can transport and hold shares (e.g. for
> the benefit of servers stuck behind NAT boxes)
> * users can upload shares to anyone that seems useful: the only
> consequence is the bandwidth cost
The other cost is that, because reliability and availability are questionable, one needs to be extra cautious and make n much greater than k in the k-of-n threshold. I read in the documentation that 3-of-10 is a typical configuration for Tahoe. This creates a large storage overhead, roughly equivalent to making 3 copies. In Cleversafe's paradigm, since the servers are all trusted to an extent to be highly available and reliable, we can have k much closer to n; for example, 10-of-16 is a common configuration for us. This lets us get by with half the storage and bandwidth requirements.
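The arithmetic behind that claim is simply the expansion factor of a k-of-n erasure code, n/k:

```python
# Storage expansion of a k-of-n erasure code: n/k times the original.
def expansion(k, n):
    return n / k

tahoe = expansion(3, 10)        # Tahoe's typical 3-of-10
cleversafe = expansion(10, 16)  # the 10-of-16 described above

assert abs(tahoe - 10 / 3) < 1e-9      # ~3.33x, roughly 3 full copies
assert cleversafe == 1.6               # 60% overhead
assert tahoe / cleversafe > 2          # "half the storage and bandwidth"
```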
> The actual security properties we get in Tahoe are:
> * a small number of misbehaving servers can do absolutely nothing to
> hurt you
> * a larger number of misbehaving servers can cause lazy readers (those
> who do not do a thorough check) to get old versions of mutable files,
> known as a rollback attack
> * a very large number of misbehaving servers can cause unavailability
> of files, and rollback attacks even against thorough readers
> where "very large" means N-k or more, and "small" means less than about
> 2*k (this depends upon the reader: they make a tradeoff between
> bandwidth expended and vulnerability to rollback). Also, a reader who is
> somehow able to remember the latest sequence number will never be
> vulnerable to the rollback attack. And of course the rollback attack is
> not even applicable to immutable files, which have only a single version.
Interesting: for us, "very large" as you defined it is usually less than "small" as you defined it. For our 10-of-16 system, N-k = 6, while 2*k = 12. Perhaps this difference in typical k and N values influenced your evaluation of our confidentiality properties. Because k is close to N in a 10-of-16 configuration, a supermajority (62.5%) of the servers must be compromised, or must incorrectly give out slices, before confidentiality is lost, as opposed to a small subset in the 3-of-10 case.
> Note that confidentiality and integrity (the lack of undetected
> bitflips) are guaranteed even if every single server is colluding
> against you, assuming the crypto primitives remain unbroken. We decided
> that this was the most important property to achieve. Anything less
> means you're vulnerable to server behavior, either that of a single
> server or of a colluding group. The Sybil attack demonstrates that there
> is no way to prove that your servers are independent, so it is hard to
> have confidence in any property that depends upon non-collusion.
> (Also, if all of your servers are managed by the same company, then by
> definition they're colluding. We wanted to give allmydata.com users the
> ability to not share their file's plaintext with allmydata.com)
> By treating the ciphertext of any given file to be public knowledge, we
> concentrate all of the security of the system into the encryption key.
> This drastically simplifies the access-control policy, which can be
> stated in one sentence: you can read a file's contents if and only if
> you know or can obtain its readcap. There is no superuser who gets to
> make access-control decisions, no ACL matrices to be updated or
> consulted, no deputies to be confused, and sharing files is as easy as
> sharing the readcap.
Once a readcap is out there, however, it is impossible to control it, or to know how it might be re-shared once you give it to someone else. Of course the same is true for any particular plaintext that is shared, but by making the security problem a small one (as you describe below), you also make it possible for information to leak much more quickly to a larger number of people.
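The capability model being discussed can be sketched in miniature. This is a toy, not Tahoe's real cap format or crypto: it uses a SHA-256 keystream instead of AES, and a storage index derived by hashing the cap, just to show that possession of the readcap is the sole access-control check.

```python
# Toy readcap store: ciphertext is public; the cap is the only secret.
import hashlib
import secrets

def keystream(key, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

storage = {}  # the "grid": contents treated as public knowledge

def upload(plaintext):
    readcap = secrets.token_bytes(32)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(readcap, len(plaintext))))
    index = hashlib.sha256(readcap).hexdigest()   # derived storage index
    storage[index] = ct
    return readcap

def download(readcap):
    ct = storage[hashlib.sha256(readcap).hexdigest()]
    return bytes(c ^ k for c, k in zip(ct, keystream(readcap, len(ct))))

cap = upload(b"secret report")
assert download(cap) == b"secret report"
# Anyone the cap leaks to can call download(cap) just as easily --
# which is exactly the uncontrolled-re-sharing point made above.
```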
> Finally, in all of this, it's important to be clear about the
> differences between mutable and immutable files, and the differences
> between symmetric and asymmetric encryption. Much of the Cleversafe blog
> posts talk about "encryption" being bad and the AONT being the good
> alternative, but of course the AONT that they depend upon is based on
> symmetric AES-256. I think they're really trying to say that asymmetric
> encryption (RSA/ECDSA) is threatened by quantum computing, and that
> per-file keys are either weak or hard to manage. Any notion of mutable
> versus immutable files must be expressed in the (undocumented) server
> access control mechanisms.
We have a vault type in which updates are not supported. Again, this property is at the vault level, not per file, but using it seems like it would yield properties similar to those of immutable files in Tahoe.
> In Tahoe, our immutable files use AES and SHA256d, and if you have a
> secure place to store the filecaps, you don't need anything else. (this
> layer of Tahoe can be construed as a secure DHT with filecaps as keys
> and file plaintext as values). As the saying goes, cryptography is a
> means of turning a large security problem into a small security problem:
> you still need to keep the filecaps safe, but they're a lot smaller than
> the original files.
How are filecaps usually kept safe by most users of Tahoe? And by "safe" do you mean both reliably stored and confidential, or just one of the two?
> If you want to keep track of fewer things, you can store those filecaps
> in mutable directories, which are based upon mutable files that use AES,
> SHA256d, and RSA2048. The use of an asymmetric algorithm like RSA makes
> them vulnerable to more sorts of attacks, but their mutability makes
> them far more flexible, allowing you to keep track of an arbitrary
> number of changing files with a single 256ish-bit cryptovalue.
Thanks for your insightful post; I learned a lot from it. I hope that this response explains enough about our approach to answer the questions you had.