devchat notes for 13-Dec-2016
zaki at manian.org
Tue Dec 13 20:40:15 UTC 2016
# Improving the usability of the Docker containers
There are basically two ways of configuring Docker containers:
1. Environment Variables
2. Configuration files
I believe Tahoe is configured entirely through configuration files. Typically you
override those configuration files with mounts from the host machine. It would be
nice to have the paths of the config files that should be mounted from outside
the image listed in the documentation.
Depending on requirements, you probably also want a read/write mount from the
host for wherever persistent storage lives, rather than keeping it inside the image.
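Wiring that up might look roughly like the following (a sketch only: the image name "tahoelafs/base", the /root/.tahoe node directory, and the port are illustrative assumptions, not the project's published image layout):

```yaml
# docker-compose.yml sketch: mount Tahoe's node directory from the host
# so tahoe.cfg (and any rootcaps or stored shares) survive container
# restarts. Image name and paths are assumptions for illustration.
services:
  tahoe:
    image: tahoelafs/base
    volumes:
      # read/write mount: node config plus persistent state live on the host
      - ./tahoe-node:/root/.tahoe
    ports:
      - "3456:3456"   # Tahoe's default web/API gateway port
```

With this shape, editing ./tahoe-node/tahoe.cfg on the host reconfigures the node, and destroying the container loses nothing.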
I have something in progress that feels like a potential Tahoe use case.
Here is the shape of it:
You have a deployment of many on-premise server applications. Each isolated
instance may need to create a shared scratchpad with many other servers.
They use a cloud service to obtain the scratchpad. The scratchpad should
only be accessible to the participating servers. The servers can choose which
scratchpad to write to based on what other servers need.
I'm looking at a couple of different options for this. Galois has a thing,
tozny.com. There are messaging-oriented crypto systems that might also work.
But I do think Tahoe fits.
On Tue, Dec 13, 2016 at 12:16 PM Brian Warner <warner at lothar.com> wrote:
Tahoe-LAFS devchat 13-Dec-2016
* attendees: corbin, meejah, warner, exarkun, cypher
I tagged beta1 last night. The plan is to tag the final 1.12.0 release soon.
Docker: We automatically generate a Docker image with each commit, which
makes it easier for folks (in Docker-capable environments) to run Tahoe
without compiling anything. However the current image tries to keep all
its NODEDIR state inside the container, which is not good Docker
practice (containers are ephemeral, so it's easy to lose your rootcaps
or stored shares). Exarkun will file some PRs to improve this, by
keeping the state on the host filesystem (mounted by, but not living
inside, the container).
He'll also take a look at our DockerHub presence
(https://hub.docker.com/r/tahoelafs/) and make sure we're providing
up-to-date images there.
This might be aided by landing the PR for #2848, which adds arguments to
"tahoe create-client" that set shares.needed/shares.happy/shares.total in the
generated tahoe.cfg (as opposed to editing tahoe.cfg after node
creation). It's kind of last-minute, but the PR is pretty small, so I
think we can safely land it.
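For comparison, the post-creation editing path looks roughly like this (a sketch: the [client] section and shares.* option names follow tahoe.cfg's INI-style format, but the helper function and file path are illustrative, not part of Tahoe's API):

```python
# Sketch: set the erasure-coding parameters in an existing tahoe.cfg,
# the manual alternative to the new create-client arguments.
# tahoe.cfg is an INI-style file with a [client] section.
import configparser

def set_encoding(cfg_path, needed=3, happy=7, total=10):
    """Write shares.needed/happy/total into tahoe.cfg's [client] section."""
    cfg = configparser.ConfigParser()
    cfg.read(cfg_path)
    if not cfg.has_section("client"):
        cfg.add_section("client")
    cfg.set("client", "shares.needed", str(needed))
    cfg.set("client", "shares.happy", str(happy))
    cfg.set("client", "shares.total", str(total))
    with open(cfg_path, "w") as f:
        cfg.write(f)

# usage (path is illustrative):
# set_encoding("tahoe-node/tahoe.cfg", needed=2, happy=3, total=4)
```

The appeal of #2848 is that this step disappears: the values land in the generated tahoe.cfg at node-creation time.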
OS-X: our buildbot creates OS-X .dmg packages with each commit. These
put a binary in /usr/bin/tahoe (but maybe you need to be an admin to run
it?). The package includes a .app application (with icon and stuff), but
it doesn't actually do anything. So these "packages" aren't exactly finished.
We're going to leave this builder in place for now and let it create a
1.12 package, but then we'll dismantle it after 1.12 is done and replace
it with cypher's "frozen binaries" tooling. He's got a buildbot
(https://buildbot.gridsync.io/waterfall) which generates single-file
executables for both OS-X and Windows, which sounds like the preferred
way to distribute Tahoe until we get a full real GUI application (which
he is also working on). After 1.12 is done, we'll work to merge this
buildbot in with our main one (#2729), possibly taking over the worker
machines too (having the Tahoe org host them, instead of using Cypher's
personal machines, and/or using our Appveyor config to build some). Then
we'll distribute these executables on the main web site next to the
source tarballs. We might also manually generate executables for 1.12
and copy them into place.
Windows: We've got no packaging changes for Windows: I think we're only
offering "pip install" and some extra instructions. Post-1.12 we'll add
the frozen-binary executables described above.
We need to remember to send the final release announcement to tor-dev,
or maybe tor-talk, to let the Tor community know of our new integration
features, and solicit feedback. We know of some Tor and I2P "community
grids", and we need to make sure their maintainers know about the
release, but they probably do already.
We noticed that GitHub automatically generates source-tree tarballs (via
"git archive"), and on other projects this sometimes causes confusion.
We declared the signed sdist tarballs/wheels on PyPI to be the
"official" release artifact, rather than GitHub's auto-tarballs. But our
release checklist will include copying the official tarballs to GitHub's
"releases" page, so anyone who sees the auto-tarball will also see the
(signed) real tarballs, to reduce confusion.
We talked about more "productized" deployments, catching Cypher and
Corbin up on discussions we had at the Summit in November. Cypher is
working on a deployable GUI as a Least Authority project, and Corbin is
building a commercial service around Tahoe, so both are really
interested in where we go with Accounting and node provisioning.
Some use cases we discussed:
* "enterprise" deployment: an admin decides on the storage servers
(local or cloud), pays for them, installs the server app. Later the
admin approves each new client and authorizes them to use the existing
servers. This wants enough Accounting for the admin to find abuse (or
cost-overruns) and enforce rough usage policies, but client machines
should not be directly paying for storage, and users should be unaware
of where the storage is held.
* "friendnet": group of friends share locally-hosted space with each
other. No central admin, no payment, but enough Accounting for each
server node to know who is using space and how much, and to be able to
push back (notify or suspend service) if someone uses too much.
* paid grid: individual client pays someone else for storage, either in
dollars or a cryptocurrency. Storage might be hosted directly by
provider, or backed by a commodity/cloud service, but client only
interacts with the provider (both for config and payment).
Cypher's prototype uses a Magic-Wormhole-based provisioning flow:
clients launch the app, which asks them to get a wormhole code from
their admin. The payload delivered via this code provides the
introducer.furl and encoding settings. In the future, this could also
transfer pubkeys or certificates that authorize the new client to
consume space on the storage servers (which might be locally-hosted
machines, or cloud storage, but are under the control of the same admin).
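A payload like that might be shaped as follows (a sketch: the JSON field names and validation are assumptions for illustration, not Cypher's actual wire format):

```python
# Sketch of a provisioning payload delivered over a wormhole code.
# Field names are assumptions, not a documented format.
import json

REQUIRED_FIELDS = (
    "introducer.furl",   # how the new client finds the grid
    "shares.needed",     # encoding parameters for tahoe.cfg
    "shares.happy",
    "shares.total",
)

def parse_provisioning_payload(raw):
    """Validate and unpack a JSON provisioning payload."""
    payload = json.loads(raw)
    for key in REQUIRED_FIELDS:
        if key not in payload:
            raise ValueError("missing provisioning field: %s" % key)
    return payload

# example payload an admin's tool might send (furl is illustrative):
example = json.dumps({
    "introducer.furl": "pb://example@tcp:host:1234/swissnum",
    "shares.needed": 3,
    "shares.happy": 7,
    "shares.total": 10,
})
settings = parse_provisioning_payload(example)
```

The client app would then write these settings into its generated tahoe.cfg; a future version of the payload could carry storage-authorization certificates alongside them.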
Corbin's work is likely to depend on a better Helper, to reduce cost and
improve performance. We currently only have a Helper for immutable
uploads, and it's been neglected for several years. In 2017 we hope to
give some love to the Helper, adding the immutable-download side, and
then their mutable counterparts.
One interesting question is how storage authority should be handled: in
one approach, all storage authority comes from the client, which
delegates some small portion (restricted to a specific file, for a
limited time) to the helper. In another approach, the Helper can pose as
anyone they like, but notifies storage servers of the account label that
should be applied to any shares being uploaded.
At the Summit we discussed a "full agoric" approach, in which clients
learn about servers from some sort of "yellowpages" directory, decide
individually which ones they like, establish accounts with them, deposit
some BTC to get started, and then upload shares. I still think that's a
cool thing, but most of the use cases we looked at today wouldn't use it
(they'd want a more curated selection of storage servers, and in many of
them the payment comes from a central admin rather than from individual users).