TC+C report 19-Sep-2014

Brian Warner warner at lothar.com
Fri Sep 19 19:03:15 UTC 2014


LAFS Tesla Coils & Corpses, 2014-09-19
======================================

in attendance: Zooko (scribe), Brian, Nathan

Brian has been hacking on git-lockup and versioneer. There's a robot,
and when you tell that robot to go, then it opens up its chest cavity
and pulls out a smaller robot. And when you tell *that* robot to go,
then it pulls out an even smaller robot…

 https://github.com/warner/git-lockup
 https://github.com/warner/python-versioneer

The goal is to have an easy way to configure your git repo so that
whenever you do a "git pull", it verifies an end-to-end digital
signature on the new patches, and it refuses to apply any patches that
don't come with such a signature.

So the git-lockup setup.py needs to assemble a tool called "git-lockup"
from these pieces.

The idea is that this is something you could install into your "bin/"
directory, install via pip, or get as a Debian package.

It has two main functions. One of them is to set up a publisher's tree.
So you run "git-lockup publish", and it adds the medium-sized robot into
your source tree, marks it for checkin and reminds you that you need to
check it in. It generates a key pair …

Zooko interrupted to say that "git lockup" was a terrible name because
"lockup" means deny access — make the thing unavailable. We brainstormed
for a bit and the leading candidate at the end was "git-fanclub".

So, it generates a keypair…

There are four times that git-fanclub runs (a sketch of the signing
and verification core follows this list):

* You can run "git-fanclub setup-publish" to set up a publishing tree,

* you can set up your post-commit hook to automatically run it when
  you make a commit, so that it will attach a digital signature to
  that commit,

* and as a member of the fanclub, in a newly checked-out tree, you run
  it in order to configure that tree to check signatures,

* and then fourth, whenever you do a pull, it checks the signatures
  before applying the patches.
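
To make those four steps concrete, here is a minimal sketch of the
signing and verification core, using Brian's python-ed25519 library.
The message format, key storage, and how signatures travel alongside
the commits are assumptions for illustration, not git-lockup's actual
formats:

    # Sketch only: sign the current branch head (post-commit hook
    # side), and check such a signature before accepting a pull
    # (fanclub side).
    import subprocess
    import ed25519  # https://github.com/warner/python-ed25519

    def head_of(branch):
        return subprocess.check_output(["git", "rev-parse", branch]).strip()

    def sign_head(signing_key, branch="master"):
        # publisher side: the signature covers "branch=revision"
        return signing_key.sign(branch.encode() + b"=" + head_of(branch),
                                encoding="base64")

    def check_head(verifying_key, sig, branch="master"):
        # subscriber side: refuse the new revision unless the
        # signature matches
        try:
            verifying_key.verify(sig,
                                 branch.encode() + b"=" + head_of(branch),
                                 encoding="base64")
            return True
        except ed25519.BadSignatureError:
            return False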

Nathan is working on two projects that are similar to things Brian is
doing or has done. One of them is git-remote-lafs, so that you can do
things like "git push" to a Tahoe-LAFS dircap. The other is called
"connect to certificate" (c2c): a design and specification document,
plus some tools, that augment domain names by packing a hash of the
server's certificate into the name itself, and requiring the server to
present that certificate on any TLS connection to that name.

 https://github.com/nejucomo/c2c

It is related to an earlier nascent project that Zooko, Brian, and
Nathan talked about a few years ago called "PMAGH" (Project "Save The
Internet" -- no no no -- Project "Merely A Good Hack").

(brian takes over notetaking)

The basic idea is to provide a new DNS-name format, like
"10.1.2.3.ip4.ozxn27aat77tlyl2lu2muv4eye.a.c2c", which contains both the
connection hint (10.1.2.3.ip4), the hash of the expected SSL cert/pubkey
(ozxn27aat77tlyl2lu2muv4eye), and some versioning information (a). Then
implement a python function, with the same API as the normal stdlib
make-an-SSL-connection function, that takes these names instead, and
only produces a connection object if the server's SSL cert matches the
expected hash.
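
As a rough illustration, here is a sketch of such a function. The
exact label layout and the hash construction (truncated base32 SHA-256
of the server's DER certificate) are assumptions; the real c2c spec
may differ:

    # Sketch of a c2c-aware connect(): validate the server by the hash
    # embedded in the name, not by the CA roots.
    import base64, hashlib, socket, ssl

    def connect_c2c(name, port=443):
        labels = name.split(".")
        assert labels[-1] == "c2c" and labels[-2] == "a"  # version "a"
        expected = labels[-3]              # hash of the expected cert
        assert labels[-4] == "ip4"         # connection-hint type
        ip = ".".join(labels[:-4])         # e.g. "10.1.2.3"
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False         # the hash replaces CA checks
        ctx.verify_mode = ssl.CERT_NONE
        sock = ctx.wrap_socket(socket.create_connection((ip, port)))
        der = sock.getpeercert(binary_form=True)
        got = base64.b32encode(hashlib.sha256(der).digest()).lower()
        if not got.decode().startswith(expected):  # name holds a prefix
            sock.close()
            raise ssl.SSLError("c2c: server cert does not match name")
        return sock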

This removes the certificate authorities from the security reliance set
(the provider of your c2c name is the only one who can control who you
talk to, not the CA roots). It also removes DNS servers from the
availability reliance set (DNS failures won't stop you from making a
connection), although it also hard-codes a single IP address into the
name.

On top of this, you could then build curl/wget/netcat workalikes that
know about the new format, a SOCKS5 proxy which can provide access, and
browser modifications to enable the new kind of name. These all benefit
from the tighter security properties of the end-to-end name.

One problem: new browser features (like Service Workers, some webcrypto
stuff) are restricted to HTTPS-sourced domains (or localhost). If the
browser is using c2c names through a SOCKS proxy and is thus unaware of
the improved security properties, an (http+.c2c) URL will not get to use
these new features. Using (https+.c2c+SOCKS) would result in the
browser's (CA-based) SSL encryption happening on top of the
(hash-of-cert-based) SSL provided by the c2c SOCKS proxy.

One fix we've discussed before would be to use normal DNS names with a
special "dispatcher" 2LD and an embedded hash, like
ozxn27aat77tlyl2lu2muv4eye.c2c.net . This would use normal DNS for
routing. If the server gets a normal (CA-based) SSL certificate, then
unaware clients at least get CA-level validation, and c2c-aware clients
get the improved hash-based validation.

However this requires hard-to-obtain delegate-to-subdomain certs. These
ought to be trivial: getting a cert for foo.com should obviously give
the recipient the authority to sign their own certs for
subdomain.foo.com . However X509 wasn't really designed with this in
mind (http://tools.ietf.org/html/rfc5280#section-4.2.1.10 defines the
"Name Constraints", but adoption is low), and CAs would much rather sell
you individual certs for every subdomain. So we call these "unicorn
certs": things that should exist but don't.

Using a wildcard cert wouldn't help: you'd have to share the private key
with every subdomain that registered, so it wouldn't stay private for
long. Maybe CloudFlare's new SSL-proxying scheme could help: clients
would connect to the end server, they'd be told about a wildcard cert,
but behind the scenes the end server would need the cooperation of the
dispatcher (which holds the private key) to perform the SSL negotiation.

Other issues: would it be safe to use a new special TLD (like .c2c)? In
particular, do we need to worry about ICANN ever allocating it for
normal use? We use .onion addresses without concern, and I think it's
unlikely that ICANN would allocate it (enough people know about .onion
by now), but a new one that we make up might not remain so lucky. There
are some policies and rules about DNS names that exclude certain ones (I
think x-* is forbidden, I know xn--* is used for internationalized names,
and I think the big registrars have single-letter 2LDs reserved except
for some grandfathered cases like x.org).

It's hard to figure out how exactly to override OpenSSL's
cert-validation code. Brian did this for Foolscap in
https://github.com/warner/foolscap/blob/master/foolscap/crypto.py , but
the hook function gets called one cert at a time (whereas we might want
to pass judgment on the entire chain at once). It's also easy to
accidentally allow completely invalid signatures.
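
For reference, the shape of that pyOpenSSL hook (EXPECTED_DIGEST is a
placeholder for a pinned value; this is a sketch of the API, not
Foolscap's actual code):

    # set_verify() invokes the callback once per cert in the chain,
    # leaf at depth 0. Returning True unconditionally here is the
    # classic mistake that accepts invalid signatures.
    from OpenSSL import SSL

    EXPECTED_DIGEST = b"..."  # placeholder: pinned sha256 digest

    def verify_cb(conn, cert, errnum, depth, ok):
        # `ok` is OpenSSL's verdict (signature, expiry, ...) for this
        # one cert only; we never see the whole chain at once.
        if depth == 0:
            return bool(ok) and cert.digest("sha256") == EXPECTED_DIGEST
        return bool(ok)

    ctx = SSL.Context(SSL.TLSv1_2_METHOD)
    ctx.set_verify(SSL.VERIFY_PEER, verify_cb)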

Should the verifier check x509 usage flags? If we don't (i.e. if the
hash that goes into the c2c name is a hash of the pubkey, not a hash of
the entire cert), would we become vulnerable to mixed-use attacks (like
where someone gets a cert for code-signing, and then sneakily uses it
for TLS connections)? We might want to hash the whole cert, or maybe
even the whole cert chain.
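
The two choices look like this when sketched with the `cryptography`
package; hash-of-pubkey covers only the SubjectPublicKeyInfo, so the
usage flags live outside the hash:

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization

    def whole_cert_hash(der_bytes):
        # fingerprint of the full cert: covers the usage flags too
        cert = x509.load_der_x509_certificate(der_bytes)
        return cert.fingerprint(hashes.SHA256())

    def pubkey_only_hash(der_bytes):
        # hash of just the public key: ignores everything else
        cert = x509.load_der_x509_certificate(der_bytes)
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(spki).digest()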

Nathan would love to be able to use this scheme for uncooperative /
unaware TLS servers, like if you could point your browser at
https://google.com and then ask it for a c2c name for the site. It would
fetch the TLS cert and DNS information and give you a (nailed-down) c2c
name that would be guaranteed to get you back to the same server/cert
pair, despite subsequent DNS changes or CA forgeries. However this
depends a lot upon how that site manages their certs. Worst case, they
get a new cert+pubkey+CA for each connection. In slightly better cases,
they use the same CA but get new certs/pubkeys, or use the same CA and
pubkey but new (renewed) certs. Sites have various ways to manage server
farms and key rotation, and ideally the c2c scheme would make it
possible to recognize when two different certs are "close enough" to
represent the same site.
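
A hypothetical nail-down helper might look like this (nail_down is a
made-up name, using the same assumed name layout as the connect_c2c
sketch above; a site that presents a fresh cert per connection would
defeat it):

    import base64, hashlib, socket, ssl

    def nail_down(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        h = base64.b32encode(hashlib.sha256(der).digest())[:26]
        ip = socket.gethostbyname(host)  # freeze today's DNS answer
        return "%s.ip4.%s.a.c2c" % (ip, h.decode().lower())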

This issue has been approached by other tools in the past: Tyler Close's
"Petname Toolber" (an extension for Firefox) used (I think)
hash-of-pubkey to identify a site, or maybe hash-of-CA-pubkey plus
subject-name. The goal was that the petname wouldn't become invalid when
the site renewed their cert.

cheers,
 -Brian


