[tahoe-dev] How Tahoe-LAFS fails to scale up and how to fix it (Re: Starvation amidst plenty)

Zooko O'Whielacronx zooko at zooko.com
Sat Oct 9 14:56:59 UTC 2010

On Thu, Oct 7, 2010 at 6:57 AM, Greg Troxel <gdt at ir.bbn.com> wrote:
>>             # TODO: this really shouldn't change anything. When we implement
>>             # a "minimal-bandwidth" repairer, change this test to assert:
>>             #self.failUnlessEqual(new_shares, initial_shares)
>>             # all shares should be in the same place as before
>>             self.failUnlessEqual(set(initial_shares.keys()), set(new_shares.keys()))
>>             # but they should all be at a newer seqnum. The IV will be
>>             # different, so the roothash will be too.
>> Okay, so it sounds like someone (Greg) ought to comment-in that
>> self.failUnlessEqual() and change the following lines so that the
>> test fails if the seqnum is different. :-)
> I will try to take a look, but I am already not following with my
> limited available brain cycles.

It is a unit test which creates a mutable file, deletes a share of it,
runs repair, and then checks the state of the shares after the repair
has finished.

Currently this test allows the shares to have new sequence numbers. If
someone commented-in the "self.failUnlessEqual(new_shares,
initial_shares)", then the test would require the shares to be
identical to the original shares including the original sequence
numbers. It would be better if the mutable repair code could pass this
stricter test. (This is exactly what you've been asking for in your
bug reports on this issue.)
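
To make the distinction concrete, here is a toy sketch (not the real
Tahoe-LAFS test; share layout, the repair() stand-in, and all names here
are made up for illustration) of the loose check the test does today
versus the strict check that commenting-in the assertion would impose:

```python
# Hypothetical model: shares as a dict of share number -> (seqnum, data).
# repair() mimics the current repairer, which regenerates shares at a
# newer seqnum rather than reproducing them byte-for-byte.

def repair(shares, total=10):
    """Toy repairer: fill in all share numbers, bumping the seqnum."""
    max_seq = max(seq for seq, _ in shares.values())
    return {shnum: (max_seq + 1, "data") for shnum in range(total)}

def check_repair(initial_shares, new_shares, strict=False):
    # Loose check (what the test asserts today): the same set of
    # share numbers exists after repair.
    assert set(initial_shares) == set(new_shares)
    if strict:
        # Strict check (the commented-out assertion): shares must be
        # identical to the originals, sequence numbers included.
        assert initial_shares == new_shares

initial = {i: (1, "data") for i in range(10)}
damaged = dict(initial)
del damaged[3]                    # delete one share
repaired = repair(damaged)        # run repair
check_repair(initial, repaired)   # loose check passes
# check_repair(initial, repaired, strict=True) would raise AssertionError,
# because every repaired share carries a new seqnum.
```

A minimal-bandwidth repairer would only regenerate the missing share at
the original seqnum, so the strict check would pass as well.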

Therefore, you should comment-in that line so that the test becomes
stricter. The test will then go red, since the current code can't pass
it, and you can figure out how to fix the repairer so that it goes
green again. ;-)

Then we can close your bug report. :-)
