Patrick R McDonald marlowe at
Tue Dec 20 01:30:16 UTC 2016

Tahoe-LAFS Weekly News, issue number 71, December 20, 2016

Welcome to the Tahoe-LAFS Weekly News (TWN).  Tahoe-LAFS_ is a secure,
distributed storage system. `View TWN on the web`_ *or* `subscribe to
TWN`_. If you would like to view the "new and improved" TWN, complete
with pictures, please take a `look`_.

.. _Tahoe-LAFS:
.. _View TWN on the web:
.. _subscribe to TWN:
.. _look:

ANNOUNCING Tahoe, the Least-Authority File Store, v1.12.0

"On behalf of the entire team, I'm pleased to announce the 1.12.0 release
of Tahoe-LAFS.

Tahoe-LAFS is a reliable encrypted decentralized storage system, with
"provider-independent security", meaning that not even the operators of
your storage servers can read or alter your data without your consent.
See the project documentation for a one-page explanation of its unique
security and fault-tolerance properties.

With Tahoe-LAFS, you distribute your data across multiple servers. Even
if some of the servers fail or are taken over by an attacker, the entire
file store continues to function correctly, preserving your privacy and
security. You can easily share specific files and directories with other
people.
The 1.12.0 code is available from the usual places:

* pip install tahoe-lafs
* Git:

  * tag: "tahoe-lafs-1.12.0"
  * commit SHA1: 0cea91d73706e20dddad13233123375ceeaa7f0a

* release files (and SHA256 hashes):

  * tahoe-lafs-1.12.0.tar.bz2
  * tahoe-lafs-1.12.0.tar.gz
  * tahoe_lafs-1.12.0-py2-none-any.whl
  * detached GPG signatures (.asc) are present for each file

All tarballs, and the Git release tag, are signed by the Tahoe-LAFS
Release Signing Key (fingerprint E34E 62D0 6D0E 69CF CA41 79FF BDE0 D31D
6866 6A7A), available for download from
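Since each release file ships with a published SHA256 hash (alongside the detached GPG signature), the digest check can be scripted. A minimal sketch — the filename and `published_digest` below are placeholders, not the real release values:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Stream a file in chunks and return its hex SHA256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the digest published next to the release files, e.g.:
# assert sha256_of("tahoe-lafs-1.12.0.tar.bz2") == published_digest
```

Streaming in chunks keeps memory use flat even for large tarballs; `gpg --verify` on the `.asc` files remains the authoritative check.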

Full installation instructions are available at:

1.12.0 improves Tor/I2P support, enables multiple introducers (or no
introducers), allows static server definitions, and adds "Magic
Folders", an experimental two-way directory-synchronization tool. It
removes some little-used features like the "key-generator" node and the
old v1 introducer protocol (v2 has been available since 1.10). Many
smaller fixes and changes were made: see the NEWS file for details:

Many thanks to Least Authority Enterprises for sponsoring developer time
and contributing the new Magic Folders feature.

This is the sixteenth release of Tahoe-LAFS to be created solely as a
labor of love by volunteers. Thank you very much to the team of "hackers
in the public interest" who make Tahoe-LAFS possible. Contributors are
always welcome to join us.

Brian Warner
on behalf of the Tahoe-LAFS team

December 17, 2016
San Francisco, California, USA"

Mailing List


Tuesday Dec 13, 2016

Tahoe-LAFS devchat 13-Dec-2016

* attendees: corbin, meejah, warner, exarkun, cypher

Release stuff

I tagged beta1 last night. The plan is to tag the final 1.12.0 release
next weekend.

Docker: We automatically generate a Docker image with each commit, which
makes it easier for folks (in Docker-capable environments) to run Tahoe
without compiling anything. However, the current image tries to keep all
its NODEDIR state inside the container, which is not good Docker
practice (containers are ephemeral, so it's easy to lose your rootcaps
or stored shares). Exarkun will file some PRs to improve this, by
keeping the state on the host filesystem (mounted by, but not living
inside, the container).

He'll also take a look at our DockerHub presence and make sure we're
providing something useful.

This might be aided by landing the PR for #2848, which adds arguments to
"tahoe create-client" that set shares.needed/happy/total in the
generated tahoe.cfg (as opposed to editing tahoe.cfg after node
creation). It's kind of last-minute, but the PR is pretty small, so I
think we can safely land it.
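For reference, the encoding parameters in question live in the [client] section of tahoe.cfg; a sketch of what the generated settings would look like (the values shown are the shipped defaults, used here only for illustration):

```ini
# Sketch of a generated tahoe.cfg excerpt (values are the defaults)
[client]
shares.needed = 3
shares.happy = 7
shares.total = 10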

OS-X: our buildbot creates OS-X .dmg packages with each commit, which
put a binary in /usr/bin/tahoe (though you may need to be an admin to
run it). The package includes a .app application (with icon and stuff),
but it doesn't actually do anything. So these "packages" aren't exactly
finished.
We're going to leave this builder in place for now and let it create a
1.12 package, but then we'll dismantle it after 1.12 is done and replace
it with cypher's "frozen binaries" tooling. He's got a buildbot
which generates single-file
executables for both OS-X and Windows, which sounds like the preferred
way to distribute Tahoe until we get a full real GUI application (which
he is also working on). After 1.12 is done, we'll work to merge this
buildbot in with our main one (#2729), possibly taking over the worker
machines too (having the Tahoe org host them, instead of using Cypher's
personal machines, and/or using our Appveyor config to build some). Then
we'll distribute these executables on the main web site next to the
source tarballs. We might also manually generate executables for 1.12
and copy them into place.

Windows: We've got no packaging changes for Windows: I think we're only
offering "pip install" and some extra instructions. Post-1.12 we'll add
frozen binaries.

We need to remember to send the final release announcement to tor-dev,
or maybe tor-talk, to let the Tor community know of our new integration
features, and solicit feedback. We know of some Tor and I2P "community
grids", and we need to make sure their maintainers know about the
release, but they probably do already.

We noticed that GitHub automatically generates source-tree tarballs (via
"git archive"), and on other projects this sometimes causes confusion.
We declared the signed sdist tarballs/wheels on PyPI to be the
"official" release artifact, rather than GitHub's auto-tarballs. But our
release checklist will include copying the official tarballs to GitHub's
"releases" page, so anyone who sees the auto-tarball will also see the
(signed) real tarballs, to reduce confusion.


We talked about more "productized" deployments, catching Cypher and
Corbin up on discussions we had at the Summit in November. Cypher is
working on a deployable GUI as a Least Authority project, and Corbin is
building a commercial service around Tahoe, so both are really
interested in where we go with Accounting and node provisioning.

Some use cases we discussed:

* "enterprise" deployment: an admin decides on the storage servers
  (local or cloud), pays for them, installs the server app. Later the
  admin approves each new client and authorizes them to use the existing
  servers. This wants enough Accounting for the admin to find abuse (or
  cost-overruns) and enforce rough usage policies, but client machines
  should not be directly paying for storage, and users should be unaware
  of where the storage is held.

* "friendnet": group of friends share locally-hosted space with each
  other. No central admin, no payment, but enough Accounting for each
  server node to know who is using space and how much, and to be able to
  push back (notify or suspend service) if someone uses too much. Wants
  more "social" tools.

* paid grid: individual client pays someone else for storage, either in
  dollars or a cryptocurrency. Storage might be hosted directly by
  provider, or backed by a commodity/cloud service, but client only
  interacts with the provider (both for config and payment).

Cypher's prototype uses a Magic-Wormhole-based provisioning flow:
clients launch the app, which asks them to get a wormhole code from
their admin. The payload delivered via this code provides the
introducer.furl and encoding settings. In the future, this could also
transfer pubkeys or certificates that authorize the new client to
consume space on the storage servers (which might be locally-hosted
machines, or cloud storage, but are under the control of the same
admin).
Corbin's work is likely to depend on a better Helper, to reduce cost and
improve performance. We currently only have a Helper for immutable
uploads, and it's been neglected for several years. In 2017 we hope to
give some love to the Helper, adding the immutable-download side, and
then their mutable counterparts.

One interesting question is how storage authority should be handled: in
one approach, all storage authority comes from the client, which
delegates some small portion (restricted to a specific file, for a
limited time) to the Helper. In another approach, the Helper can pose as
anyone it likes, but notifies storage servers of the account label that
should be applied to any shares being uploaded.

At the Summit we discussed a "full agoric" approach, in which clients
learn about servers from some sort of "yellowpages" directory, decide
individually which ones they like, establish accounts with them, deposit
some BTC to get started, and then upload shares. I still think that's a
cool thing, but most of the use cases we looked at today wouldn't use it
(they'd want a more curated selection of storage servers, and in many of
them the payment comes from a central admin rather than individual
users).


The Tahoe-LAFS Weekly News is published once a week by The Tahoe-LAFS Software
Foundation, President and Treasurer: Peter Secor |peter|. Scribes: Patrick
"marlowe" McDonald |marlowe|, Zooko Wilcox-O'Hearn |zooko|, Editor Emeritus:

Send your news stories to `marlowe at`_ - submission deadline:
Monday night.

.. _`marlowe at`: mailto:marlowe at
.. |peter| image:: psecor.jpg
   :height: 35
   :alt: peter
.. |marlowe| image:: marlowe-x75-bw.jpg
   :height: 35
   :alt: marlowe
.. |zooko| image:: zooko.png
   :height: 35
   :alt: zooko
