[tahoe-dev] Removing the dependency of immutable read caps on UEB computation

Shawn Willden shawn at willden.org
Sun Oct 4 15:45:28 UTC 2009

On Saturday 03 October 2009 01:26:16 am Brian Warner wrote:
> Incidentally, we removed 1 and 2 forever ago, to squash the
> partial-information-guessing-attack.

Makes sense.  The diagrams in the docs should be updated.

> > To address these issues, I propose splitting the UEB into two parts
> Interesting. As you point out, I'm not sure I like the introduction of
> an extra layer of caps (and an asymmetric key) into the immutable file
> scheme. It raises the question: who should hold onto these caps? Where
> should they put them? I suppose the original uploader of the file is the
> special party who then has the ability to re-encode it, but they'll have
> to store it somewhere, and it feels wasteful to put an extra layer of
> caps in the dirnodes (along with the writecap, readcap, and
> traversalcap) just to track an object that so few people will actually
> be able to use.

Since this is for immutable files, there is currently no writecap or 
traversalcap, just a readcap and perhaps a verifycap.  This scheme would 
require either adding a share-update cap or providing a master cap (from 
which both the share-update cap and the readcap could be computed).
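The derivation could look something like this -- a minimal sketch, assuming a hash-based derivation where the cap names, tag strings, and layout are all my invention, not Tahoe's actual cap format:

```python
# Hypothetical sketch: deriving subordinate caps from a single master cap
# via domain-separated one-way hashing.  The master-cap holder can compute
# both derived caps, but neither derived cap reveals the master or its
# sibling.  Tag strings and cap names are illustrative only.
import hashlib

def derive(master: bytes, tag: bytes) -> bytes:
    # One-way: knowing the output does not reveal the master cap.
    return hashlib.sha256(tag + b":" + master).digest()

master_cap = hashlib.sha256(b"example master secret").digest()
share_update_cap = derive(master_cap, b"share-update")
read_cap = derive(master_cap, b"read")

assert share_update_cap != read_cap
```

The point of the one-way derivation is that handing someone the readcap grants no re-encoding authority, while the original uploader needs to store only the single master cap.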

> Adding an asymmetric key might also introduce some new attack vectors.
> If I give you a readcap and claim that it points to a certain contract,
> and you sign that readcap to sign the contract, can I pull any tricks by
> also holding on to this newly-introduced signing key? I guess if the
> readcap covers UEB1, then I can't forge a document or cause you to sign
> something else, but I can produce shares that will look completely valid
> during fetch and decode but then fail the ciphertext check. That means I
> can make it awfully hard to actually download the document (since
> without an effective share hash, you can't know which were the bad
> shares, so all you can do is blindly try other ones).

This would allow the original uploader to do a sort of DoS attack on the file, 
but not to modify the contents.  If the original shares (the ones I used when 
I decided to sign the cap) are still in the grid, I could still retrieve the 
original version, but it might be more difficult.  If the original shares had 
expired and been removed from the storage servers, the original uploader 
could ensure that all extant shares are garbage.
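A toy model of that DoS, with all names hypothetical: suppose the readcap pins the ciphertext hash (the UEB1 part), while per-share hashes are vouched for by the uploader's key (the UEB2 part). The uploader can then publish garbage whose share hashes verify, but the garbage still trips the ciphertext check at the end of download:

```python
# Toy model (invented names): share-level integrity passes, because the
# uploader really did hash-and-sign the garbage shares, but the final
# ciphertext check against the readcap's pinned hash fails, so the
# download is rejected rather than silently corrupted.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

ciphertext = b"the real encrypted file contents"
readcap_ciphertext_hash = h(ciphertext)       # fixed in the readcap (UEB1)

# Malicious "re-encode": uploader publishes garbage with fresh share hashes.
garbage = b"garbage produced by the original uploader"
published_share_hash = h(garbage)             # vouched by uploader's key (UEB2)

# Per-share integrity check passes (the hash genuinely matches) ...
assert h(garbage) == published_share_hash
# ... but the ciphertext check fails, so contents can't be forged.
assert h(garbage) != readcap_ciphertext_hash
```

This matches the claim above: denial of service is possible, forgery is not.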

> And if you're changing 'k', then you'll certainly need to replace
> all the existing shares. So the goal appears to be to do all the work of
> uploading a new copy of the file, but allow the old caps to start
> referencing the new version.

Yes.  Obviously, you could also change 'N' without changing 'k' -- something 
that might be possible with a sort of extended share hash tree anyway, but is 
not currently possible.  And even with an extended share hash tree, you 
couldn't extend 'N' beyond whatever extra shares were computed during initial 
upload.  Breaking the link between encoding choices and the cap would allow 
arbitrary re-encoding -- and perhaps even a completely different encoding 
scheme.
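To illustrate why growing 'N' requires re-encoding, here is a minimal k-of-N code built from polynomial evaluation over a prime field (illustrative only; Tahoe actually uses zfec's Reed-Solomon encoding). Anyone holding the data, or any k shares, can mint shares at brand-new evaluation points, i.e. grow N after the fact; a conventional upload fixes N at encode time:

```python
# Sketch: k-of-N erasure coding as polynomial evaluation mod a prime.
# The data defines a degree-(k-1) polynomial; each share is (x, poly(x)),
# and any k shares determine the polynomial, hence the data.
P = 2**31 - 1  # prime modulus

def interp_at(shares, x):
    # Lagrange interpolation: evaluate the unique degree-(k-1) polynomial
    # through the given (xi, yi) points at x, mod P.
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

k = 2
data = [1234, 5678]                      # systematic shares at x = 0, 1
shares = [(0, data[0]), (1, data[1])]

# Initial upload with N = 3: one extra share at x = 2.
shares.append((2, interp_at(shares[:k], 2)))

# Later re-encode: mint a brand-new share at x = 9, growing N, without
# any change to the data (or to a cap decoupled from the encoding).
new_share = (9, interp_at(shares[:k], 9))

# Any k shares, including the new one, still recover the data.
recovered = [interp_at([shares[2], new_share], x) for x in range(k)]
assert recovered == data
```

The share hash tree is what currently pins down which (x, poly(x)) pairs exist; decoupling the cap from it is what would make the x = 9 share admissible.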

> Deriving the filecap without performing FEC doesn't feel like a huge win
> to me.. it's just a performance difference in testing for convergence,
> right?

No, it's more than that.  It allows you to produce and store caps for files 
that haven't been uploaded to the grid yet.  You can make a "this is where 
the file will be if it ever gets added" cap.  Also, the cap could be computed 
from just the right hashes, without the actual file contents -- a huge 
performance win for convergence testing, since the file itself never has to 
be delivered to the Tahoe node doing the test.
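A hedged sketch of that cap-before-upload idea (the cap format, tag, and function names here are invented, not Tahoe's real cap layout): if the readcap is derived only from the convergence key and not from the FEC-produced UEB, then any party holding the content hash can compute the cap without ever touching the file bytes or running the encoder:

```python
# Invented cap derivation: readcap from content hash + convergence secret,
# with no FEC step.  A node testing for convergence needs only the hash.
import hashlib

def content_hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def readcap_from_hash(chash: bytes, convergence_secret: bytes) -> str:
    key = hashlib.sha256(convergence_secret + chash).digest()
    return "URI:EXAMPLE:" + key.hex()[:32]   # invented cap format

data = b"some file contents"
# Computed from the hash alone, away from the file...
cap_from_hash = readcap_from_hash(content_hash(data), b"secret")
# ...matches what the holder of the actual file would compute.
cap_from_file = readcap_from_hash(hashlib.sha256(data).digest(), b"secret")
assert cap_from_hash == cap_from_file
```

That equality is the whole win: convergence can be tested by shipping a small hash rather than the file.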

> I *am* intrigued by the idea of immutable files being just locked-down
> variants of mutable files.

I think there's a lot of value in that.

