[tahoe-dev] One Grid to Rule Them All

Avi Freedman freedman at freedman.net
Tue Jul 2 12:14:25 UTC 2013


> > Dear Comrade Nathan,
> >
> > My first thought was that one could publish hashes of readcaps into DNS,
> > where the DNS response would be the introducer for the cluster.  But...
> > With the introducer furl I think they could upload as well as retrieve.
>
> Oh, I like this idea!
> 
> I believe it's true that introducers have only a single furl, and that
> it grants access both to register a node and to retrieve the list of
> nodes.
> 
> Having two separate furls for registration versus lookup seems like
> low-hanging fruit to me, although I wonder how that interacts with the
> accounting feature set.  This wouldn't prevent uploads, because the storage
> servers make that decision (and currently they always grant new uploads,
> AFAIK).

We are just starting to work with LAFS (with some support from the LA team)
and are not yet comfortable proposing architectural changes, though we'd
potentially be interested in helping implement anything that went towards
accounting and/or more segmentation and permissions.  
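For concreteness, the publish-readcap-hashes-into-DNS idea from upthread could be sketched roughly as below. Everything here is hypothetical (the zone suffix, the furl strings, the label length), and a plain dict stands in for the real TXT lookup that a deployment would do with an actual resolver:

```python
import hashlib
from typing import Optional

# Hypothetical zone suffix; a real deployment would pick its own.
ZONE = "caps.example.net"

def cap_label(readcap: str) -> str:
    """Derive a fixed-length DNS label by hashing the readcap, so the
    cap itself never appears in the published zone."""
    digest = hashlib.sha256(readcap.encode("ascii")).hexdigest()
    return digest[:32]  # truncated to stay well under the 63-octet label limit

def dns_name(readcap: str) -> str:
    return f"{cap_label(readcap)}.{ZONE}"

# Stand-in for the zone itself; in practice register() would add a TXT
# record and lookup() would be a DNS query.
zone_db = {}

def register(readcap: str, introducer_furl: str) -> None:
    zone_db[dns_name(readcap)] = introducer_furl

def lookup(readcap: str) -> Optional[str]:
    return zone_db.get(dns_name(readcap))
```

Note that only someone who already holds the readcap can compute the hash and find the record, which keeps the zone contents from leaking caps to crawlers.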

> So a complementary feature to that strategy is to make a
> "multi-introducer-aware" web interface, and perhaps to have "global caps"
> which include an introducer furl in the filesystem cap.  If there's also a
> "registration separate from lookup" two-furl feature, then this new "global
> cap" scheme would only rely on the lookup furl (regardless of whether a cap
> is read or read/write).

Sounds good.  I think LAFS could really use a lookup-only vs. full-use
mechanism.

> As Leif mentioned, that's the goal of lafs-rpg (which is basically just an
> nginx configuration template).

lafs-rpg looks great for that; we'll experiment with it and recommend it.
The overview doc does a great job of laying out the privacy concerns with
such gateways.  We may take a stab at an intro aimed at a more general user
population and submit it for feedback.

> Not at all.  The beauty of caps is that it's convenient to implement other
> access control mechanisms on top of them.  For example, a "Web Drive"
> product might be a thin layer between users and LAFS storage which maps
> their user credentials to LAFS caps internally to express particular access
> controls.

That's the beauty.  The danger is that many users will, I think, be confused
about how to store, protect, and back up caps.

Has there been work (maybe we missed it) to think about cap encryption and/or
cap storage on users' behalf?
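To make the question concrete: one minimal shape for cap storage on a user's behalf is to seal each cap under a key derived from the user's passphrase before it ever reaches the service. The sketch below is illustrative only, not a vetted design and not anything LAFS ships; it uses a hash-counter keystream plus an HMAC tag purely to show the flow:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Hash-counter keystream; a toy cipher for illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal_cap(cap: str, passphrase: str) -> bytes:
    """Encrypt-and-MAC a cap under a passphrase-derived key."""
    salt = os.urandom(16)
    km = hashlib.scrypt(passphrase.encode(), salt=salt, n=2**14, r=8, p=1, dklen=64)
    enc_key, mac_key = km[:32], km[32:]
    nonce = os.urandom(16)
    pt = cap.encode()
    ct = bytes(a ^ b for a, b in zip(pt, _keystream(enc_key, nonce, len(pt))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return salt + nonce + tag + ct  # the service stores only this blob

def open_cap(blob: bytes, passphrase: str) -> str:
    """Verify the tag, then decrypt; raises on a wrong passphrase."""
    salt, nonce, tag, ct = blob[:16], blob[16:32], blob[32:64], blob[64:]
    km = hashlib.scrypt(passphrase.encode(), salt=salt, n=2**14, r=8, p=1, dklen=64)
    enc_key, mac_key = km[:32], km[32:]
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("wrong passphrase or corrupted blob")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct)))).decode()
```

The point of the shape (not the toy cipher) is that the service holding the blobs never learns the caps, so it can provide the backup without becoming a confidentiality risk.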

> > Complexity could be added with having a DNS db of cap <-> cluster
> > public-facing web server options.  If there was interest we could
> > build and run something like that, at least to the level of millions
> > of caps.  Doing so for billions+ would need some of the economic
> > incentives to which you were referring.
>
> One issue I see with this DNS service is that it's centrally administered
> (or else the utility is lessened).  I like how it could be very usable
> though, if done right.

Since DNS lends itself to both replication and decentralization, I think
there are a lot of schemes that could be implemented, and a reasonable first
stab may not be far off.  It could potentially be kicked off with a few
independent but cooperating entities behind it (like LA and Havneco).
There are lots of open questions, though.  One easy approach would be to
have each participant take registrations via an API and publish static
versions of the zone tables every N minutes, which the other participants
would pick up and serve.  But that would only work for truly public content,
or under the assumption that all participants are equally trustworthy from
the perspective of all of their clients.
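As a rough illustration of the snapshot idea, each participant's registrations could be rendered into a static TXT-record zone fragment for peers to fetch and serve verbatim (the origin, labels, and furl values below are all made up):

```python
from datetime import datetime, timezone

def zone_fragment(origin: str, records: dict, ttl: int = 300) -> str:
    """Render a participant's registrations (label -> introducer furl)
    as a static zone-file fragment of TXT records, suitable for
    periodic publication and mirroring by cooperating peers."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M")
    lines = [f"; snapshot of {origin} at {stamp}Z"]
    for label in sorted(records):  # deterministic order eases diffing
        lines.append(f'{label}.{origin}. {ttl} IN TXT "{records[label]}"')
    return "\n".join(lines) + "\n"
```

A peer could then regenerate and diff these fragments every N minutes, which is about the simplest possible replication protocol; anything beyond fully public content would need more than this.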

Also...

We'll be at Defcon in early August and would be interested in hosting a
discussion around this, accounting, and some of the scalability/operability
discussions around LAFS.

Rio hotel rooms are pretty big, or maybe we could get some scheduled space
there.

> > > Regards,
> > > Comrade Nathan
> > > Grid Universalist

Avi
