On distrusting trust.
Dec. 12th, 2003 11:55 am
Enigmail keeps nagging me to set it up properly. (It came with my copy of Thunderbird.) But I can't remember my GPG pass phrase. I should be able to, as it's based on [garble] of [garble], which I can't imagine forgetting. It's just I need to [garble]. I could just create a new one, but giving it verifiably to the few people who care would be even more of a nuisance.
I don't remember it because I don't use it. Cryptographic protection of privacy sounds like a good idea and something I should do. But I don't get the public-key infrastructure model at all. Keep in mind that I am intelligent and technically literate and have had ten years' geek culture exposure to the concept. How would one communicate it to someone without that at all?
(Excuse me while I pontificate on a subject I admit I don't understand properly.)
The user is overwhelmingly the most insecure part of any network. The company I work for has all sorts of security policies, but the users are scientists and swap passwords the way they swap information. A security system that leaves any user writing passwords on Post-It notes is fundamentally broken. Most credit card fraud is by people and companies you gave the number to, not someone eavesdropping on the transaction. When your restricted LJ post gets out, you know damn well it was cut-and-paste fairies.
Trust is not transitive - even if you don't confuse the technical and conventional meanings of the word. Just because someone signs someone else's key, why on Earth should I put the same trust in that as I do in my personal verification? I suspect a variation of geek social fallacy #4. And even with one's closest friends, trusting them in one respect in no way implies trusting them in another.
I find someone's writing style a surer verification of identity - their writing is their public self on the Net. If the cryptographic signature was right but the writing style was wrong, I would first assume their computer had been cracked rather than that they had suddenly acquired a jarringly foreign turn of phrase.
The Internet threat model - completely secure computers at either end, possibly-compromised wires in the middle - is completely arse-backwards. Pretty much no-one is eavesdropping (modulo insecure WiFi), but if you put the average Windows computer out on the wild Net, you may as well grease up, bend over and put up a neon sign flashing COME AND GET IT.
(This is why all the silly crap your web browser does when it comes to a 'secure' page that hasn't bothered paying protection money to Verisign seems to make no goddamn sense - it's because it actually doesn't.)
There are many people reading who know this stuff better than I ever will. I ask you to take the time to shred the above.
(no subject)
Date: 2003-12-12 04:03 am (UTC)

The links seem to indicate that these matters have been discussed at length. How would someone who knows it thoroughly explain it simply to someone who knows nothing at all about it?
(no subject)
Date: 2003-12-12 05:17 am (UTC)

I can't find a thing to disagree with in what you write. SSL is for the most part a complete waste of time - the attacks it protects against are mostly not the ones it is practical to mount. The "Verisign Tax" you pay to stop your website's users getting nasty warning messages contributes almost nothing to security. The information you sign when you sign a key is not the information you need in order to decide which key to use for encryption and authentication. And given the choice between dancing pigs and security, users will choose the dancing pigs every time.
For the record, I have probably sent or received at most a dozen encrypted emails in my life.
Apart from the parlous state of Windows security and its disastrous consequences for security everywhere, a lot of what you write about actually comes down to the way that PKI has been thoroughly misconsidered over the years. Here's some of what I've written about it, with links to people I agree with:
http://www.livejournal.com/users/ciphergoth/110893.html
(no subject)
Date: 2003-12-12 05:37 am (UTC)

http://www.mail-archive.com/cryptography%40metzdowd.com/msg01276.html
Subsequent discussion is good; I also agree with Rescorla's point that if we know how to protect against attacks A but not B, and B is a greater danger, we might as well protect against A anyway.
(no subject)
Date: 2003-12-12 05:45 am (UTC)

I'd love to know who Patient Zero of the PKI "trust shall be transitive" meme was.
(no subject)
Date: 2003-12-12 05:09 am (UTC)

I still hear fairly regular reports of packet sniffers being installed on compromised systems, though I've always assumed that they're after passwords, not email. Perhaps they look for credit card numbers too now.
Governments do eavesdrop on email - obviously how much they are likely to look at yours, and how much this matters to you, depends on which government you're talking about, what you're up to and how much you value your privacy. (But we already know that (i) people planning kilodeath terrorist attacks don't bother encrypting their email and (ii) governments know how to compromise the end systems.)
I bet writing styles can be relatively easily faked.
(no subject)
Date: 2003-12-12 05:40 am (UTC)

Three classes of writers:

1. Bad writers, whose particular problems can be imitated mechanically (characteristic misspellings, grammatical errors, etc.).
2. Middling writers. These all write like each other and are hard to distinguish.
3. Good writers with their own style. These are the people you identify immediately without having to read the From: line.
Class 1 are easy to fake on the surface, though getting any underlying style right would be a bit more effort. Class 2 aren't very distinguishable from the writing itself. Class 3 are quite difficult to imitate convincingly (rather than, say, to parody).
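For what it's worth, the crude end of that comparison can be mechanised. A toy sketch in Python (character-trigram frequencies plus cosine similarity); the file names and the 0.8 threshold are purely illustrative, not anything anyone here actually runs:

```python
from collections import Counter
from math import sqrt

def trigrams(text):
    """Character trigram counts - a cheap but serviceable style fingerprint."""
    text = " ".join(text.lower().split())
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def similarity(a, b):
    """Cosine similarity of two trigram profiles: 0 = nothing shared, 1 = identical."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical usage: known_posts.txt is text you're sure they wrote,
# new_mail.txt is the message whose authorship you doubt.
known = trigrams(open("known_posts.txt").read())
suspect = trigrams(open("new_mail.txt").read())
if similarity(known, suspect) < 0.8:  # threshold plucked out of the air
    print("Doesn't read like them - maybe their machine is cracked, maybe it's just a bad day.")
```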
(no subject)
Date: 2003-12-12 02:26 pm (UTC)

The other factor I find is that it's quite difficult for me to write in a manner that doesn't immediately sound like me - I think exactly how I write.
(no subject)
Date: 2003-12-12 06:13 am (UTC)

Personally I stopped signing all my emails with Evolution/GPG because it used the neat and tidy MIME method, which apparently pissed off a small minority of the people I correspond with. Most particularly, Outlook can't cope with it, and neither can Mutt I think - they render the body text as a text attachment of a blank email, or something equally helpful.
(no subject)
Date: 2003-12-12 06:33 am (UTC)

"Is the depth to which you trust configurable?"
I maintain that trusting anything beyond what I've verified to my own satisfaction is a ridiculous idea. Some of the verification I've seen crypto zealots claim is needed goes beyond what I know about people I've gone out with for significant lengths of time ...
(no subject)
Date: 2003-12-12 06:16 am (UTC)

But I use https all the time.
I use it when I trust the other end at least somewhat and I don't want anyone sniffing the password.
It doesn't make sense to secure a system if you're going to have people sniffing passwords and then using that to root the server or fuck with the service.
(no subject)
Date: 2003-12-12 06:29 am (UTC)

Heh,
As far as Verisign goes, it's pretty useless. The only real use I can see for it is something similar to a badge.
I would argue that trust is transitive to some small extent - not in the sense of being able to trust a signed site as much as you trust Verisign or whoever signed the key.
But with something like Verisign, where they're being paid money to supposedly prove that a site really is what it says it is, you can put a certain small amount of trust in the fact that if they fuck up too often, no-one will trust them any more.
Think of it as branding more than trust: if a brand of food causes a wave of food poisoning, you know to avoid that brand and give other brands a go instead.
An unsigned certificate is equivalent to a brand you don't know anything about.
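(And if you do already know the "brand" - say you checked a site's self-signed certificate by hand once - you can pin it yourself instead of renting trust from Verisign. A rough sketch; the host name and the pinned value are made up:)

```python
import hashlib
import ssl

def cert_fingerprint(host, port=443):
    """Fetch the server's certificate (no CA involved) and return its SHA-256 fingerprint."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

# Hypothetical pinned value, written down the first time the certificate was checked by hand.
PINNED = "0000000000000000000000000000000000000000000000000000000000000000"

if cert_fingerprint("shop.example.org") != PINNED:
    raise SystemExit("Certificate changed - could be a routine re-key, could be an impostor.")
```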
(no subject)
Date: 2003-12-12 06:36 am (UTC)

Actually, it's an incoherent ramble in point form ;-)
(no subject)
Date: 2003-12-12 06:32 am (UTC)

It can be, and that's how it works in PGP. If I see that one person I absolutely trust (both not to lie, and not to screw up a keysigning) has signed someone's key, then I'm fine; and if I see that five hundred people, about a hundred of whom I think have the clue not to screw up, have signed someone's key, then I'll probably trust it. That's why PGP has a bunch of different trust levels in the first place; section 3 of this paper (http://www.cs.ucl.ac.uk/staff/F.AbdulRahman/docs/pgptrust.html) talks about trust levels. The whole paper is a useful read, really.
In short, the answer to your question is "You shouldn't put the same level of trust in that as your personal verification; put slightly less, but slightly more than if it had nothing at all. Here, have a tool to do exactly that."
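(The tool's arithmetic is roughly this - a toy sketch loosely following GnuPG's usual defaults of one fully-trusted signature or three marginally-trusted ones, with made-up names and no certification chains:)

```python
FULL, MARGINAL = "full", "marginal"

# How much I trust each *person* to verify and sign keys carefully (assigned by hand).
owner_trust = {
    "alice": FULL,
    "bob": MARGINAL,
    "carol": MARGINAL,
    "dave": MARGINAL,
}

# Who has signed whose key (all names hypothetical).
signatures = {
    "eve": {"alice"},                      # one signature from a fully-trusted signer
    "mallory": {"bob", "carol", "dave"},   # three marginally-trusted signers
    "trent": {"bob"},                      # one marginal signature - not enough
}

def key_is_valid(key, completes_needed=1, marginals_needed=3):
    """A key is 'valid' if enough trusted people vouch for it. Note this says nothing
    about how much I then trust the key's owner to vouch for anyone else - validity of
    a key and trust in its owner are kept separate, which is why trust doesn't chain."""
    signers = signatures.get(key, set())
    full = sum(owner_trust.get(s) == FULL for s in signers)
    marginal = sum(owner_trust.get(s) == MARGINAL for s in signers)
    return full >= completes_needed or marginal >= marginals_needed

for key in signatures:
    print(key, "valid" if key_is_valid(key) else "not valid")
```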
(no subject)
Date: 2003-12-12 01:50 pm (UTC)

http://www.cryptnet.net/fdp/crypto/gpg-party.html
Fuck, I haven't required that level of ID verification from people I've shacked up with. These geeks really think this will build a 'web of trust' involving someone other than ~~fanatic drones for the Cause~~ those with an obsession with "Key signing parties also serve as great opportunities to discuss the political and social issues surrounding strong cryptography, individual liberties, individual sovereignty, and even implementing encryption technologies or perhaps future work on free encryption software." It really is Geek Social Fallacy #4 as a Taylorised procedure.

I think the essential problem I have is that it tries to make social trust into the binary absolute of cryptographic trust, and so looks like it was created by people with no damn clue what social interaction is. Social interaction is all about the grey areas.
(no subject)
Date: 2003-12-12 06:34 am (UTC)

Disagree -- I want the possibility of allowing access to any port of my home computer to any traffic I so choose. In other words, I want the possibility of a free flow of untampered information from any port of any computer in the world to any port of my machine. I dislike in the extreme ISP-level firewalling which would block me from sending or receiving. The correct (indeed only sensible) place for computer security is on the machine you wish made secure.
I can see why companies implement their own firewalls -- for their protection. But it sucks. As a user I want to be able to make my own choice about what is and isn't open.
The "secure the ends and let the middle be free" approach is certainly the best.
(no subject)
Date: 2003-12-12 06:44 am (UTC)

If I get my credit card details nicked online, I fully expect it to be because an end system (mine or theirs) is compromised: although there's a vast number of intervening systems, they are largely competently administered and secured.
(no subject)
Date: 2003-12-12 06:54 am (UTC)

The reason the internet protocols concentrate on securing the end-to-end transaction is that this is what they were designed to do (they are internet protocols: their job was never to ensure that the end systems were secure, only that the communication between them was).
(no subject)
Date: 2003-12-12 08:32 am (UTC)

From: http://www.ietf.org/rfc/rfc1108.txt
This option is used by end systems and intermediate systems of an internet to:
a. Transmit from source to destination in a network standard representation the common security labels required by computer security models,
b. Validate the datagram as appropriate for transmission from the source and delivery to the destination,
c. Ensure that the route taken by the datagram is protected to the level required by all protection authorities indicated on the datagram. In order to provide this facility in a general Internet environment, interior and exterior gateway protocol must be augmented to include security label information in support of routing control.
---
I'm not at all sure how often it's used or how well supported it is.
(no subject)
Date: 2003-12-12 07:05 am (UTC)

(BTW, kuvert and q-agent make doing nearly everything automagic nearly as easy as Just Writing An Email.)
(no subject)
Date: 2003-12-12 11:39 am (UTC)

(OK, signing is still solving the inverse problem; you ideally want to be able to prove you didn't send a message, but establishing a pattern of signing is better than nothing.)
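(Checking such a signature is mechanical enough to script; a minimal sketch that just shells out to gpg, with hypothetical file names:)

```python
import subprocess

def verify_detached(message_path, signature_path):
    """Ask gpg to check a detached signature; True means a good signature from a known key."""
    result = subprocess.run(
        ["gpg", "--verify", signature_path, message_path],
        capture_output=True, text=True,
    )
    return result.returncode == 0

# Hypothetical files: the mail body and its detached .asc signature.
if verify_detached("message.txt", "message.txt.asc"):
    print("Good signature - the key holder (or at least their machine) sent this.")
else:
    print("No good signature - which, from someone who always signs, tells you something too.")
```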