devchat notes 21-Mar-2017

Brian Warner warner at
Tue Mar 21 20:12:03 UTC 2017

Tahoe-LAFS devchat 21-Mar-2017

Attendees: warner, meejah, exarkun, daira, liz, simpson

* pull request review
 * 402: servers of happiness
   * summary: needs work, may have some regressions
  * spec:
   * make new classes new-style: turn "class Foo:" into "class Foo(object):"
  * "task" abstraction (get_tasks) - why is it called that?
  * new version makes 2N queries before considering placement (old
    version was less chatty)
   * should look at performance: uploader will block until all 2N
     servers have responded, but at least they all run in parallel
   * both old and new uploader do sequential will-you-accept queries
     after initial do-you-have probe
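The parallel 2N-probe behavior described above can be sketched with asyncio (illustrative only: the real uploader uses Twisted/foolscap, and `probe_server`/`probe_all` are invented names):

```python
import asyncio

async def probe_server(server):
    # Hypothetical do-you-have-shares probe; stands in for a foolscap
    # round-trip in the real uploader.
    await asyncio.sleep(0)
    return (server, set())  # (server, shares it already holds)

async def probe_all(servers):
    # Fan out one probe per server and wait for *all* of them, which is
    # why a single unresponsive server can stall placement.
    results = await asyncio.gather(*(probe_server(s) for s in servers))
    return dict(results)

# e.g. 2N = 20 probes for N = 10
existing = asyncio.run(probe_all(["server%d" % i for i in range(20)]))
```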
  * maybe change ServerTracker to have a more read-only "do you have
    shares?" method
   * .query() is being abused to probe for existing shares without
     actually allocating new ones
   * use .ask_about_existing_shares() instead, just like the readonly
  * more clever/complicated approach: send initial placement queries
    assuming that there are no pre-existing shares
   * then if all N shares are placed without problems, and no
     pre-existing shares are found, done
   * but if any pre-existing shares are found, throw out placement,
     query 2N servers, run full happiness algorithm
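A minimal sketch of that optimistic two-phase approach (the `probe` and `full_algorithm` callables are hypothetical, not the PR's actual API):

```python
def place_shares(shares, servers, probe, full_algorithm):
    # Phase 1: optimistic round-robin placement, assuming no server
    # already holds anything.
    placement = {sh: servers[i % len(servers)] for i, sh in enumerate(shares)}
    # Probe for pre-existing shares; if any turn up, throw out the cheap
    # placement and fall back to the full happiness algorithm over 2N servers.
    preexisting = {s: probe(s) for s in servers}
    if any(preexisting.values()):
        return full_algorithm(shares, servers, preexisting)
    return placement
```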
  * what happens right now: if we can't contact at least N servers, we
    hang? what if we *can* contact N, but some additional server out of
    the 2N hangs?
  * _handle_existing_response is still updating self.preexisting_shares
    and self.homeless_shares: obsolete?
  * investigate trackerid/serverid/server/tracker stuff in
    _get_next_allocation: can we remove some of this indirection?
  * in _request_another_allocation, would be nice if
    servers_of_happiness() test used merged, instead of
   * or print get_allocations(), instead of using merge_servers
   * or change pretty_print_shnum_to_servers() to take two separate
     dicts, and avoid merge_servers()
   * and change self.log() to take structured messages instead of
     pre-rendered ones
  * looks like it doesn't loop (back to recomputing the placement) when
    some servers reject the upload request
   * only one pass through the tracker list, and nothing calls
     mark_full_peer() after the initial capacity check
   * so upload will still succeed if a server says no, but we'll have
     between H and N shares placed
   * ideally we'd find a new home for the extra shares, so we could
     still get all N placed
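The re-homing pass the notes ask for could look something like this (a sketch with a hypothetical `accepts` predicate, not the PR's code):

```python
def place_with_rehoming(placement, servers, accepts):
    placed, homeless = {}, []
    # First pass: try the planned server for each share.
    for share, server in placement.items():
        if accepts(server, share):
            placed[share] = server
        else:
            homeless.append(share)
    # Second pass (the missing loop): offer rejected shares to servers
    # that were not part of the original placement.
    spares = [s for s in servers if s not in placement.values()]
    for share in list(homeless):
        for spare in spares:
            if accepts(spare, share):
                placed[share] = spare
                spares.remove(spare)
                homeless.remove(share)
                break
    return placed, homeless
```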
  * should make PeerSelector match terminology of the algorithm (e.g.
    full_peer -> readonly_peer)
  * find out what the split is between util/ vs
  * consider moving integration/ into a
    regular unittest
    * do we want tahoe to depend on hypothesis? putting it under the
      [test] extra seems fine, just like "mock"
  * look for what behavior changes are regressions
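For reference, the happiness count itself can be viewed as a maximum bipartite matching between shares and the servers willing to hold them; a minimal sketch of that computation (not the project's implementation):

```python
def happiness(share_to_servers):
    # share_to_servers maps each share number to the set of servers that
    # can hold it; happiness is the size of a maximum matching.
    match = {}  # server -> share currently matched to it

    def try_place(share, seen):
        # Standard augmenting-path step: claim a free server, or evict a
        # matched share if that share can be re-placed elsewhere.
        for server in share_to_servers.get(share, ()):
            if server in seen:
                continue
            seen.add(server)
            if server not in match or try_place(match[server], seen):
                match[server] = share
                return True
        return False

    return sum(1 for share in share_to_servers if try_place(share, set()))
```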

* wheels (for pycryptopp)
 * JP has a pycryptopp PR37 to make manylinux1 wheels
 * should land, then look at configuring travis to upload them
 * use flappclient
 * warner will merge PR
 * warner will point JP at the buildbot/tahoe code that does the upload,
   he will port to travis/pycryptopp

* warner: land pycryptopp PR38 (version fixes)
 * once landed, we should just move to pure Versioneer
 * src-pycryptopp/extraversion.h is modified, needs thought

* tahoe in a browser
 * could we do some sort of emscripten thing to get tahoe running in a
   browser?
 * gary bernhardt's pycon2013 "life and death of javascript"
 * foolscap is still a blocker: need to replace with HTTP or websockets
 * also frontend API is a question: we aren't saving files to local
   disk, but could provide a Blob or a MediaStream or something

* remote control API
 * frontend java/javascript applet (android?) could drive it
 * maybe limit it to a single dircap, or add-only cap
 * real tahoe client node runs on a real computer, that you trust
 * frontend app runs on phone
 * sounds like WAPI2 (websocket-based)
  * except maybe provide revocable tokens to the frontend that
    encapsulate (storagecap + writecap), like how the current FTP
    frontend works
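One way to picture the revocable-token idea (a toy in-memory sketch; `TokenGate` and its methods are invented for illustration, and a real node would persist and scope these):

```python
import secrets

class TokenGate:
    def __init__(self):
        self._tokens = {}  # token -> cap it encapsulates

    def issue(self, cap):
        # Hand the frontend an opaque token instead of the raw cap.
        token = secrets.token_urlsafe(16)
        self._tokens[token] = cap
        return token

    def resolve(self, token):
        # The node maps a presented token back to the real cap, or None.
        return self._tokens.get(token)

    def revoke(self, token):
        # Revocation just forgets the mapping; the cap itself is unchanged.
        self._tokens.pop(token, None)
```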

* auto_deps:
 * daira will file bug to stop checking pkg_resources versions and paths
 * warner will find the (debian?) bug describing the failure mode that
   prompted him to want to remove autodeps entirely
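For context, this is the style of runtime check auto_deps performs via pkg_resources, which is what the bug would remove (illustrative helper, not Tahoe's actual code):

```python
import pkg_resources

def check_dependency(name, minimum):
    # Look up the installed distribution's metadata on disk; this is the
    # fragile part, since metadata can disagree with what's importable.
    try:
        dist = pkg_resources.get_distribution(name)
    except pkg_resources.DistributionNotFound:
        return (name, None, False)
    ok = (pkg_resources.parse_version(dist.version)
          >= pkg_resources.parse_version(minimum))
    return (name, dist.version, ok)
```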
