disk crashes provoked by tahoe-lafs?
justin.h.stottlemyer at gmail.com
Tue Apr 15 02:50:23 UTC 2014
How drives are treated affects their behavior and failure modes. I have over 50k spindles and make an effort to keep them within their operating envelope. It's easy to make drives die early by running them beyond spec (data that's not necessarily generally available). It also depends on drive type as well as age; drives do 'age out'.
In short, this is very possible.
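For a sense of scale, the churn being discussed (a weekly deep-check with --add-lease rewriting every share header, per Daira's note below) can be roughed out. The share count here is hypothetical, just for illustration:

```python
# Back-of-envelope sketch (hypothetical numbers): estimate the small
# metadata writes a weekly 'deep-check --repair --add-lease' generates
# on one storage server.
def lease_writes_per_year(shares_per_server, runs_per_year=52):
    # Each run touches the header of every share file once.
    return shares_per_server * runs_per_year

# e.g. 100,000 shares on a server -> millions of small header writes
# per year, each of which also dirties filesystem metadata
# (atime/mtime) on the share file.
print(lease_writes_per_year(100_000))
```

Even at modest share counts, that's a steady trickle of head-seeking small writes, which is exactly the kind of workload that surfaces latent bad blocks.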
Typos by iPhone
On Apr 14, 2014, at 6:58 PM, Daira Hopwood <daira at jacaranda.org> wrote:
> On 15/04/14 01:34, Greg Troxel wrote:
>> I run tahoe servers on 4 systems in a private grid. The grid is not
>> used much, but I run deep-check --repair --add-lease every week or so,
>> when I remember. The nodes all have lease expiration turned on, but are
>> quite unfull. All are running NetBSD, some -5, some -6. I do not have
>> these filesystems mounted noatime.
>> 3 of these nodes are physical machines, with 3 kinds of disks. All have
>> turned up bad blocks. The 4th is a Xen domU; the dom0 is RAID-1. One
>> of the dom0's disks also had some bad blocks. This is a notable
>> cluster; disk failures are otherwise relatively rare.
>> So I really wonder if the lease checking code, or something, is churning
>> the disk.
>> Has anyone else seen this?
> 'tahoe deep-check --repair --add-lease' will write to the header of
> every share.
> Daira Hopwood ⚥