[tahoe-dev] Tahoe performance
warner-tahoe at allmydata.com
Tue Feb 10 02:58:16 UTC 2009
On Sat, 07 Feb 2009 17:13:18 +0100
Francois Deppierraz <francois at ctrlaltdel.ch> wrote:
> During my tests on the allmydata.com grid I experienced a maximum
> throughput of 1 Mbps from Switzerland even though my uplink is 5 times
> faster than that.
Yeah, this is disappointing. Our automated speed tests show an upload
speed between 0.8MBps and 1.4MBps, using 100MBps local bandwidth, which is
much slower than we'd like. The upload speed used to be closer to 2.0MBps
(and the download speed was nearly 5MBps) a year ago, before we reduced the
maximum segment size to 128KiB (to improve alacrity). This hurts throughput
because we aren't yet using a windowing protocol to upload shares: we burn
a round-trip time for each segment, so smaller segments mean more round
trips and more time spent stalled waiting for acks.
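To make the cost concrete, here's a back-of-the-envelope model (my own
sketch, not Tahoe code; the link speed and RTT figures are assumptions) of a
stop-and-wait upload that stalls for one round trip per segment:

```python
# Rough model: without windowing, each segment pays its transmission
# time plus one full round trip before the next segment can start.
# Numbers below are illustrative, not measurements.

def effective_throughput(file_bytes, segment_bytes, bandwidth_Bps, rtt_s):
    """Bytes/sec achieved when every segment stalls for one RTT."""
    segments = -(-file_bytes // segment_bytes)  # ceiling division
    transfer_time = file_bytes / bandwidth_Bps
    stall_time = segments * rtt_s
    return file_bytes / (transfer_time + stall_time)

MiB = 1024 * 1024
# 100 MiB file, ~12.5 MB/s link, 150 ms transatlantic RTT (assumed)
small = effective_throughput(100 * MiB, 128 * 1024, 12.5e6, 0.15)
large = effective_throughput(100 * MiB, 1 * MiB, 12.5e6, 0.15)
print(f"128 KiB segments: {small / 1e6:.2f} MB/s")
print(f"  1 MiB segments: {large / 1e6:.2f} MB/s")
```

With those assumed numbers the 128 KiB case lands right around the 0.8MBps
we measure, while 1 MiB segments would recover most of the link; a windowing
protocol gets the same win without giving up alacrity.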
The plan has been to make the upload and download processes smarter, and use
a windowing protocol that will be less sensitive to RTT. I believe that
should improve the speeds beyond their values from a year ago. But we haven't
had the time to do that yet.
It would probably be a good idea for us to run those speed tests with an
elevated MAX_SEGMENT_SIZE (which should be as easy as adding a line to
tahoe.cfg), to see what happens.
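If we do run that experiment, the override would presumably look something
like the following in tahoe.cfg. The option name here is my guess, purely
hypothetical; check the client source for the real knob before relying on it:

```ini
[client]
# hypothetical option name -- verify against the source; the default
# segment size has been 128 KiB since the alacrity change
max_segment_size = 1048576
```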
Another factor affecting the allmydata.com grid is the behavior of the
helper. When a client machine has finished sending ciphertext to the helper,
the helper basically runs flat out (CPU bound) pushing the shares out to the
servers. So having more/faster helpers would improve throughput, at
least in the aggregate. We're planning to improve the way that helpers
get configured and allocated (#284; an alternative is to announce helpers
through the introducer), which should help with the load-balancing and
would make it easier for us to enable more of them on the prodnet.