devchat notes from 21-Feb-2017

Brian Warner warner at lothar.com
Tue Feb 21 20:24:05 UTC 2017


Tahoe-LAFS devchat 21-Feb-2017

Attendees: warner, meejah, exarkun, dawuud, liz, cypher, daira

* magic-wormhole state machines, using Automat
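  * a minimal sketch of the Automat style (hypothetical states and
    inputs, not the actual magic-wormhole machine):

      from automat import MethodicalMachine

      class Connection(object):
          _machine = MethodicalMachine()

          @_machine.state(initial=True)
          def disconnected(self):
              "no connection yet"

          @_machine.state()
          def connected(self):
              "connection established"

          @_machine.input()
          def connection_made(self):
              "the transport reported a connection"

          @_machine.input()
          def connection_lost(self):
              "the transport went away"

          @_machine.output()
          def _notify_up(self):
              print("connected")

          @_machine.output()
          def _notify_down(self):
              print("disconnected")

          disconnected.upon(connection_made, enter=connected,
                            outputs=[_notify_up])
          connected.upon(connection_lost, enter=disconnected,
                         outputs=[_notify_down])

    an input arriving in a state with no matching transition raises
    an error, which is the point: invalid event orderings fail loudly
    instead of silently corrupting state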
* IFF (liz, cypher, warner): liz will be having a user-engagement
  discussion tomorrow; we should get together later in the week to
  talk about it
  * 7-min presentation as part of UX session
  * may also present at a tool session
* #1382 servers-of-happiness: there's a PR (#402) ready to go; it
  passes all tests
* other PRs that should be ready:
  * #375 (status): minor coverage problems
  * #379 (no-Referrer header) [landed]
  * #380: just documentation
  * close #365 (obsoleted by #402)
  * close #131 (obsoleted by #380)
  * land #399 (JSON welcome page)
  * land #400 [landed]
  * daira will look at #401 (rearranging inotify tests)
  * close #396 (list-aliases --readonly-uri) or #400 (one seems
    obsolete) (meejah closed #396)
  * clean up #226 (whitespace, argument names), then land
* fixing Twisted deprecations (twisted.web.client.getPage, mostly in
  tests)
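  * one common replacement pattern for simple GETs, sketched here
    assuming the test only needs the response body:

      from twisted.internet import reactor
      from twisted.web.client import Agent, readBody

      def get_body(url):
          # like the old getPage(): returns a Deferred that fires
          # with the response body as bytes; note that Agent wants
          # a bytes URL
          agent = Agent(reactor)
          d = agent.request(b"GET", url)
          d.addCallback(readBody)
          return d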
* I2P vs foolscap
  * warner and exarkun should dive into it
* sshfs vs tahoe
  * the bug reported on IRC: a zero-length file
  * debug process: first make sure tahoe works, then use an SFTP
    client, and only then use sshfs (with debug options)
  * sshfs tends to ignore close() errors
  * when tahoe hangs, it tends not to trigger visible errors in the
    client
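  * a sketch of the "use an SFTP client" step with paramiko
    (hypothetical host/port/credentials); the point is that a direct
    SFTP client surfaces errors that sshfs swallows:

      import paramiko

      # connect to tahoe's SFTP frontend
      transport = paramiko.Transport(("localhost", 8022))
      transport.connect(username="alice", password="secret")
      sftp = paramiko.SFTPClient.from_transport(transport)

      f = sftp.open("testfile", "wb")
      f.write(b"hello\n")
      f.close()  # unlike sshfs, a failure here raises an exception

      # check for the zero-length-file symptom
      print(sftp.stat("testfile").st_size)
      transport.close()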
* removing _auto_deps.py
  * for now, "tahoe --version" should just show the tahoe version,
    nothing else
  * "tahoe --version-and-path": do the full auto_deps double-checks,
    and show all dependency versions too
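  * a rough sketch of that split (hypothetical function names and
    dependency list, not the actual CLI code):

      import pkg_resources

      def show_version():
          # "tahoe --version": just tahoe's own version
          print(pkg_resources.get_distribution("tahoe-lafs").version)

      def show_version_and_path():
          # "tahoe --version-and-path": the full _auto_deps-style
          # double-check, plus version and path for each dependency
          for name in ["tahoe-lafs", "Twisted", "foolscap", "zfec"]:
              dist = pkg_resources.get_distribution(name)
              print("%s: %s (%s)" % (dist.project_name, dist.version,
                                     dist.location))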
* rainhill
  * next step is probably to refactor tahoe's existing
    uploader/downloader into Encoders that accept/produce streams
  * want to maintain the don't-write-shares-to-disk property: so output
    is a stream, not a filehandle or bytes
  * also need to update the diagrams, according to our Summit notes
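  * a sketch of what such an Encoder interface might look like
    (illustrative names only, not from the rainhill design docs):

      from zope.interface import Interface

      class IEncoder(Interface):
          def feed(data):
              """Accept the next chunk of input from the upload
              stream."""

          def close():
              """Signal end of input; flush any buffered segments."""

          def subscribe(share_number, consumer):
              """Attach a consumer that receives share bytes as they
              are produced: shares are streamed out, never written
              to disk or accumulated as one big bytes object."""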
* Accounting
  * ideally want a backend-appropriate way to store the leases
  * local disk for shares plus local disk for sqlite is consistent
  * S3 for shares but local disk for sqlite is not: the DB can be
    lost while the shares survive
  * when local copy of sqlite db is lost:
    * could do an immediate full enumeration of shares
    * or only check lazily: if/when someone asks for a share, check
      S3 for it; if it exists but is not present in the DB, update
      the DB and add a starter lease
    * or something in between
    * maybe monthly crawl
  * some backends might provide fast enumeration of shares ("ls", get
    filenames and sizes and timestamps)
    * so the crawler might be fast/cheap
  * can do both gc and discovery of lost shares with a single crawler,
    roughly once a month
    * if it finds a backend share without a DB entry, it adds a starter
      lease
    * if there is a DB entry, but it has no leases, we delete the
      backend share
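  * a sketch of that combined crawler (hypothetical lease-DB schema
    and backend API; "backend" is assumed to offer enumerate_shares()
    and delete_share()):

      import time

      STARTER_LEASE = 31 * 24 * 3600  # roughly one month

      def monthly_crawl(db, backend):
          now = time.time()
          # discovery: a backend share with no DB entry gets recorded
          # and given a starter lease
          for (si, size) in backend.enumerate_shares():
              row = db.execute("SELECT 1 FROM shares WHERE si=?",
                               (si,)).fetchone()
              if row is None:
                  db.execute("INSERT INTO shares (si, size)"
                             " VALUES (?,?)", (si, size))
                  db.execute("INSERT INTO leases (si, expires)"
                             " VALUES (?,?)",
                             (si, now + STARTER_LEASE))
          # gc: a DB entry whose leases have all expired loses its
          # backend share
          for (si,) in db.execute("SELECT si FROM shares").fetchall():
              live = db.execute("SELECT COUNT(*) FROM leases"
                                " WHERE si=? AND expires>?",
                                (si, now)).fetchone()[0]
              if live == 0:
                  backend.delete_share(si)
                  db.execute("DELETE FROM shares WHERE si=?", (si,))
                  db.execute("DELETE FROM leases WHERE si=?", (si,))
          db.commit()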


