If you guys are serious about being the latest cast and crew to pursue this, here's a resource you might want to be aware of: http://www.imc.org/ietf-openpgp/ You also might want to look at this message from their mailing list archive: http://www.imc.org/ietf-openpgp/mail-archive/msg02132.html They have a pgp-directory mailing list, and all of this has been discussed before, so perhaps it can save you some research.
Keyservers are officially out of the scope of the OpenPGP working group's charter, last I heard. Jon Callas (author/editor of RFC 2440) suggested that if there was interest, we start a working group on keyserver behaviors.

There are many issues too detailed to go into here that should be formally addressed with regard to keyservers. If the keyserver infrastructure does ramp up, I think that an RFC will be in order. Anyone with the IETF here? Who would I talk to about forming such a WG? Is a keyserver standard within the scope of the IETF?

On Tue, 27 Jun 2000, Shawn McMahon wrote:
If you guys are serious about being the latest cast and crew to pursue this, here's a resource you might want to be aware of:
http://www.imc.org/ietf-openpgp/
You also might want to look at this message from their mailing list archive:
http://www.imc.org/ietf-openpgp/mail-archive/msg02132.html
They have a pgp-directory mailing list, and all of this has been discussed before, so perhaps it can save you some research.
--
L. Sassaman
System Administrator | Technology Consultant
icq: 10735603 | pgp: finger://ns.quickie.net/rabbi
"Everything looks bad if you remember it." --Homer Simpson
L. Sassaman: Tuesday, June 27, 2000 12:43 PM
There are many issues too detailed to go into here that should be formally addressed in regards to keyservers. If the keyserver infrastructure does ramp up, I think that an RFC will be in order. Anyone with the IETF here? Who would I talk to about forming such a WG? Is a keyserver standard within the scope of the IETF?
I get a real good chuckle out of this thread. <g>

1) Randy hisself is a prominent member of the IETF.
2) Having co-chaired a WG, I suspect that Randy may even know how it's done.
3) I'd bet a small amount of change that Randy has already started the wheels in motion, even before he sent the first message.
4) I suspect that this thread exists to measure the level of interest among the major players.

Now for something on-topic: Yes, Internet PKI, in its present state, sucks. Yes, there is a need, but the architecture definitely needs a look-see. Personally, I think it grossly inadequate, and there ain't no way that it can be made as reliable as DNS in its present form. It's basically a poor man's TLS with about half the forethought. Personally, I've been working with X.509 certs as an improvement over basic PGP, but again, the PKI sucks there as well.

But, as a previous poster already brought to the surface, the users must have an interest in this service or NONE of the ISPs will be interested in deployment. The reason that existing PKI sucks is mainly a lack of serious user interest. There are NO production-level PKI servers out there today. None of them will commit to an SLA, and there are too few customers that will pay the required bucks to support a decent SLA for a PKI infrastructure. Build it and they will NOT come, yet.

As usual, this is only an opinion.

---
R O E L A N D M . J . M E Y E R
CEO, Morgan Hill Software Company, Inc.
An eCommerce and eBusiness practice providing products and services for the Internet.
Tel: (925)373-3954 Fax: (925)373-9781
On Tue, 27 Jun 2000, Roeland Meyer (E-mail) wrote:
I get a real good chuckle out of this thread. <g>

1) Randy hisself is a prominent member of the IETF.
2) Having co-chaired a WG, I suspect that Randy may even know how it's done.
3) I'd bet a small amount of change that Randy has already started the wheels in motion, even before he sent the first message.
4) I suspect that this thread exists to measure the level of interest among the major players.
Well, if this is truly the case, that is wonderful. I'd like to hear Randy's thoughts on a Keyserver WG, however.
Now for something on-topic: Yes, Internet PKI, in its present state, sucks. Yes, there is a need, but the architecture definitely needs a look-see. Personally, I think it grossly inadequate, and there ain't no way that it can be made as reliable as DNS in its present form. It's basically a poor man's TLS with about half the forethought. Personally, I've been working with X.509 certs as an improvement over basic PGP, but again, the PKI sucks there as well.
Could you elaborate on these statements?
But, as a previous poster already brought to the surface, the users must have an interest in this service or NONE of the ISPs will be interested in deployment. The reason that existing PKI sucks is mainly a lack of serious user interest. There are NO production-level PKI servers out there today. None of them will commit to an SLA, and there are too few customers that will pay the required bucks to support a decent SLA for a PKI infrastructure. Build it and they will NOT come, yet.
Well, my opinion may be clouded, since I am on the Keyserver team at NAI... but our PGP Keyserver is used by numerous companies in production-level situations to manage large PGP-based PKIs. The problem as I see it is not the software quality (Highware and Marc Horowitz's folks have also done an excellent job on their servers) but the hardware and network resources allocated to the public keyserver network.
L. Sassaman: Tuesday, June 27, 2000 1:52 PM
On Tue, 27 Jun 2000, Roeland Meyer (E-mail) wrote:
But, as a previous poster already brought to the surface, the users must have an interest in this service or NONE of the ISPs will be interested in deployment. The reason that existing PKI sucks is mainly a lack of serious user interest. There are NO production-level PKI servers out there today. None of them will commit to an SLA, and there are too few customers that will pay the required bucks to support a decent SLA for a PKI infrastructure. Build it and they will NOT come, yet.
The problem as I see it is not the software quality (Highware and Marc Horowitz's folks have also done an excellent job on their servers) but the hardware and network resources allocated to the public keyserver network.
I think that was my point. Resources get determined by whatever it takes to meet a reasonable SLA.
On Tue, 27 Jun 2000, Randy Bush wrote:
Well, if this is truly the case, that is wonderful.
ymmv
I'd like to hear Randy's thoughts on a Keyserver WG, however.
the first question would be whether a protocol is being proposed for adoption or an operational best current practice being documented?
There have been several "next generation" protocols proposed for keyserver synchronization. All of these assume that the current "best practice" will not scale, due to the limited bandwidth and disk storage capacities of the volunteer keyserver hosts. If we can assume that sufficient bandwidth and drive space/server power is available to us (which it looks like you believe, Randy), then I think that we should simply go ahead and document the current practice and formalize it.
There have been several "next generation" protocols proposed for keyserver synchronization. All of these assume that the current "best practice" will not scale, due to the limited bandwidth and disk storage capacities of the volunteer keyserver hosts. If we can assume that sufficient bandwidth and drive space/server power is available to us (which it looks like you believe, Randy), then I think that we should simply go ahead and document the current practice and formalize it.
see rfc 2223 randy
On Tue, 27 Jun 2000, Randy Bush wrote:
There have been several "next generation" protocols proposed for keyserver synchronization. All of these assume that the current "best practice" will not scale, due to the limited bandwidth and disk storage capacities of the volunteer keyserver hosts. If we can assume that sufficient bandwidth and drive space/server power is available to us (which it looks like you believe, Randy), then I think that we should simply go ahead and document the current practice and formalize it.
see rfc 2223
I have a copy printed. I was under the impression, however, that it was generally considered proper to have a working group's influence on an RFC during the writing process. Would it be proper for someone (such as myself) to simply write an RFC documenting the best current practice? Should this come prior to the formation of a working group (if one indeed occurs)? Pardon my ignorance... writing RFCs is not part of my experience.

--Len.
Would it be proper for someone (such as myself) to simply write an RFC documenting the best current practice? Should this come prior to the formation of a working group (if one indeed occurs?)
i think we are using the wrong mailing list for this discussion.

but a wg is spun up

o if there is work to be done that needs development and consensus. as you sound like you plan to document an existing protocol, this is not the case

o if you do think there is work to be done that needs development and consensus, then we usually see if there is sufficient momentum by having a bof

o these days, unless a topic is obviously hot, to get a bof slot needs a bit of homework, namely an active discussion usually happening on a work-item-specific mailing list and an internet-draft either published (as -00 or whatever) or at least well along in process

i suspect that this discussion is best held elsewhere. i would have said the pgp-keyserver-folk@flame.org list, except it seems to be oriented toward one semi-commercial product. but i am most likely misconstruing it.

randy
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Tue, 27 Jun 2000, Randy Bush wrote:
Would it be proper for someone (such as myself) to simply write an RFC documenting the best current practice? Should this come prior to the formation of a working group (if one indeed occurs?)
i think we are using the wrong mailing list for this discussion.
but a wg is spun up

o if there is work to be done that needs development and consensus. as you sound like you plan to document an existing protocol, this is not the case

o if you do think there is work to be done that needs development and consensus, then we usually see if there is sufficient momentum by having a bof

o these days, unless a topic is obviously hot, to get a bof slot needs a bit of homework, namely an active discussion usually happening on a work-item-specific mailing list and an internet-draft either published (as -00 or whatever) or at least well along in process
i suspect that this discussion is best held elsewhere. i would have said the pgp-keyserver-folk@flame.org list, except it seems to be oriented toward one semi-commercial product. but i am most likely misconstruing it.
Yep. The pgp-keyserver-folk list is independent of any commercial product, and covers all three existing keyservers. Let's discuss this there.

(For the keyserver-folk people now entering this conversation, we're discussing the need for an RFC or other suitable documentation if we are to get any involvement from major network service providers -- that's the conversation in a nutshell.)

--Len.
A couple of people have emailed me and asked exactly what the resource requirements for a keyserver are. I asked Randy Harmon, the administrator for certserver.pgp.com, to see if he could answer that question for me. His response is below.

Thanks,

--
L. Sassaman
System Administrator | Technology Consultant
icq: 10735603 | pgp: finger://ns.quickie.net/rabbi
"Common sense is wrong." --Practical C Programming

Many parts of the answer to that question:

- The replication infrastructure affects the bandwidth used. Take a 10-node keyserver network. If one is the master for accepting keys, and 9 accept replications from it (without cross-replicating), then the bandwidth used is far different than if each replicates all changes it receives to each of the other 9. In the first case, each one would replicate its news to the master, which would replicate back to all 9 (or 8, arguably).

- Keyservers do get out of sync with each other (replications not arriving, replicas unavailable) and should be periodically cross-synced. Databases can also become corrupted and should be periodically rebuilt (combined with cross-syncing, ideally). These rebuilds help make up for the possibility of failure in the more efficient replication mechanisms, and should probably use a "pull" approach rather than a "push" approach. So the master server could pull keysets from its replicas, rebuild and merge its own database with the keysets from others, then send a notification that a database update is available. This approach would use, in the 10-server example, 1.2 KB per key x 2 transmissions x 9 servers.

- The disk hardware should be capable of withstanding the search volume (depends on the number/types of searches performed) and of rebuilding the databases without major impact on search performance. I'd suggest 8-15 GB of FREE space, beyond ~6 GB for the main database. Single drives are OK for low search volume and low search performance. A larger database means more disk space, approximately linearly (modulo differences in indexing techniques, as the number of keys continues to grow very large).

- Day-to-day bandwidth is a function of the number of replications triggered by a key add/update, the number of keys added/updated (roughly 4/3 x 1.2 KB per key added/updated), and the number/types of searches performed. We currently receive about 20,000 searches per day (5 keys returned per search, on average), about 1500 adds, and about 250 updates. For today's volume with the more efficient replication approach discussed above, the master server would consume:

  Searches:     100,000 x 1.2 KB x 4/3 radix-64 overhead = 160 MB
  Adds/Mods:    1750 x 1.2 KB x 4/3 overhead             = 2.8 MB
  Replications: 2.8 MB x 9                               = 26 MB

If we assume 10 times the volume as today (1.1 million keys on the server), and 10 servers to balance the search load, then the bandwidth for each server would be roughly:

  Searches:     160 MB
  Adds/Mods:    28 MB
  Replications: 28 MB outgoing
                280 MB incoming
                =============
                496 MB/day
                15 GB/month

Multiply by 10 for 100,000,000 keys.

Randy

--------
Randy Harmon <rjh@pgp.com>
Administrator, certserver.pgp.com
Engineer, PGP Keyserver
PGP KeyID: 0x5cb7b7f2a0aa5c1e
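As a sanity check on Randy Harmon's arithmetic above, the figures can be reproduced in a few lines. This is a hypothetical reconstruction; the variable names and the 30-day month are my own assumptions, and the 4/3 factor is the radix-64 (ASCII armor) overhead he cites.

```python
KEY_KB = 1.2       # average armored key size, KB
ARMOR = 4.0 / 3.0  # radix-64 transfer overhead

# Today's volume on a single master server:
searches_mb = 100_000 * KEY_KB * ARMOR / 1000  # 20k searches x 5 keys each
adds_mb     = 1_750 * KEY_KB * ARMOR / 1000    # ~1500 adds + ~250 updates
repl_mb     = adds_mb * 9                      # pushed to 9 replicas

# Ten times the volume, spread over 10 servers. The search load is
# balanced, so the per-server search figure stays flat; adds and
# incoming replications scale with total volume.
adds10_mb    = 10 * adds_mb                    # 28 MB
incoming_mb  = 10 * adds10_mb                  # replications received
per_day_mb   = searches_mb + adds10_mb + adds10_mb + incoming_mb
per_month_gb = per_day_mb * 30 / 1000

print(round(searches_mb), round(adds10_mb), round(per_day_mb), round(per_month_gb))
```

This reproduces the 160 MB of search traffic, 496 MB/day total, and roughly 15 GB/month per server quoted in the message.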
On Wed, 28 Jun 2000, L. Sassaman wrote:
A couple of people have emailed me and asked exactly what the resource requirements for a keyserver are. I asked Randy Harmon, the administrator for certserver.pgp.com, to see if he could answer that question for me. His response is below.
<snip>
If we assume 10 times the volume as today (1.1 million keys on the server), and 10 servers to balance the search load, then the bandwidth for each server would be roughly:
Searches:     160 MB
Adds/Mods:    28 MB
Replications: 28 MB outgoing
              280 MB incoming
              =============
              496 MB/day
              15 GB/month
Multiply by 10 for 100,000,000 keys.
Randy
OK. So if we take 15 GB/mo, it's approximately 45 Kb/s of bandwidth. Even at 3rd-tier pricing, I don't know anyone who uses PGP who wouldn't be willing to donate $45/mo to the cause. If someone wants to donate the hardware, we'll gladly host a keyserver.

---
John Fraizer
EnterZone, Inc
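The conversion checks out: 15 GB/month works out to a sustained rate in the mid-40s of kilobits per second (a quick check, assuming a 30-day month):

```python
# 15 GB/month expressed as a sustained bit rate (30-day month assumed).
gb_per_month = 15
bits_per_sec = gb_per_month * 1e9 * 8 / (30 * 24 * 3600)
print(round(bits_per_sec / 1000))  # kbit/s, ~46
```

which is close to the ~45 Kb/s quoted above.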
my opinion is in some big providers near big peering and big pipes. kinda like we do with ftpN.freebsd.org, cvsupN.freebsd.org, etc. randy
Randy, I suspect that, with the relevant approvals, placing one of these key servers at the LINX might be possible. I'll try and get this added to the agenda for the next meeting. Neil.
my opinion is in some big providers near big peering and big pipes. kinda like we do with ftpN.freebsd.org, cvsupN.freebsd.org, etc.
randy
I suspect that, with the relevant approvals, placing one of these key servers at the LINX might be possible. I'll try and get this added to the agenda for the next meeting.
thanks neil. the problem i see before scaling this up is how the access/naming/referral works. a user wants to just say "look this up" and not to have to think about how to choose from some magic list of 42 possible servers. it is not clear to me how this happens. randy
On Thu, Jun 29, 2000, Randy Bush wrote:
I suspect that, with the relevant approvals, placing one of these key servers at the LINX might be possible. I'll try and get this added to the agenda for the next meeting.
thanks neil. the problem i see before scaling this up is how the access/naming/referral works. a user wants to just say "look this up" and not to have to think about how to choose from some magic list of 42 possible servers.
it is not clear to me how this happens.
It's the same problem facing any distributed service. People have been throwing around various solutions, but I personally don't think any solution which doesn't let the client choose is ever going to work.

Adrian

--
Adrian Chadd <adrian@creative.net.au>
"Build a man a fire, and he's warm for the rest of the evening. Set a man on fire and he's warm for the rest of his life."
thanks neil. the problem i see before scaling this up is how the access/naming/referral works. a user wants to just say "look this up" and not to have to think about how to choose from some magic list of 42 possible servers.

It's the same problem facing any distributed service. People have been throwing around various solutions, but I personally don't think any solution which doesn't let the client choose is ever going to work.
get outrageous and consider the v4 anycast hack. for an example see <draft-ohta-root-servers-02.txt>. randy
Going down one level of abstraction, has anyone on this list checked out http://www.openca.org and http://www.openssl.org ? Most modern mailers support X.509 certs for encryption. PGP is considered, by many, to be the older technology. Building PKI around X.509 is much easier and meets actual existing standards.
On Thu, 29 Jun 2000, Roeland M.J. Meyer wrote:
Going down one level of abstraction, has anyone on this list checked out http://www.openca.org http://www.openssl.org
Most modern mailers support X.509 certs for encryption. PGP is considered, by many, to be the older technology. Building PKI around X.509 is much easier and meets actual existing standards.
Snort. Actually, that's an untrue statement on multiple points. X.509 is a much older and cruftier standard. PGP is recognised by most to be the superior method for handling email and file encryption and signing. X.509 is designed to satisfy situations where there is a complex hierarchy in an X.500 setting. I have yet to find anything "easy" about X.509. OpenPGP (which is the term for the draft standard on which PGP, GnuPG, and other products like SafeMail are based -- see RFC 2440) is much simpler for the end user to adopt.

Note, also, that it is extremely easy to bind an X.509 certificate to an OpenPGP key, for instances where X.509 is necessary. You can also have multiple X.509 certificates bound to one OpenPGP key, all sharing the same key material. Much more convenient.

If you want X.509, OpenSSL is excellent, though. I am the Project Lead for FreeCert (freecert.org) and we are using the OpenSSL toolkit in our development. OpenCA is cute, but I wouldn't design a CA based on Perl code.

--
L. Sassaman
System Administrator | Technology Consultant
icq: 10735603 | pgp: finger://ns.quickie.net/rabbi
"Common sense is wrong." --Practical C Programming
Randy,

You have hit the nail on the proverbial head. While I have limited experience in PGP infrastructure, I have spent a great deal of time with X.500 & X.509 infrastructure (sympathy appreciated). One of the toughest service-side nuts to crack is the resource location issue: where is my X server? Where is the X server for user Y? We all know this. The intensity of papers & standards activity in this space bears this out.

Each time someone invents an X service, they feel compelled to invent a namespace, a hierarchical delegation system (perhaps), and some kind of root system that knows everything about everyone. Then the challenge becomes to bootstrap the world into using this system. Anyone remember the projects to bootstrap X.500 directory service on the Internet? I do; I used to work near a country X.500 root server. Anyone use X.500 today for directory service? No. Anyone use LDAP? Sure. Can anyone find LDAP servers for user Y in domain Z? Ummm...

The Internet has been uniquely successful in introducing a namespace, a hierarchical delegation system, and a root system. We use this system to locate many services. One common one is email service. We use it ubiquitously. No one argues about the "EMail Service Resource Location Protocol". We use the DNS. End of discussion. Other examples exist, such as http. Each has a slightly different way to interface to the DNS, but at least they are defined.

The key service folk (PGP and anyone IETF-izing the X.509 world, and the IPSEC folk for that matter) would be doing a Huge Service to Humanity if they simply *defined* the manner in which key servers will find each other using the DNS. Then, and only then, you can say with confidence that IF this domain has a key server THEN it is registered this way in the DNS. A new property can be asserted with confidence, and I can definitively attempt to locate a given service server for a given domain, and expect it to know something about user@domain.

It appears the recent SRV records help tremendously in this matter. This appears (strongly) to be a nail-like problem, and we have a hammer in hand. I know this is naive, but we have working, scaling, *ubiquitous* examples that we all use. So, in this case, what is the issue with swinging?

My suggestion is that the IETF & IESG should insist that server-to-server location be defined in the protocol spec, and not depend on a new registration & delegation system unless significant technical merit warrants something completely different. The developer community should Just Do It That Way. The PGP community could show leadership in this area. Sometimes, the better idea locally is not the better idea globally.

Regards,

Eric Carroll

On Thu, 29 Jun 2000, Randy Bush wrote:
I suspect that, with the relevant approvals, placing one of these key servers at the LINX might be possible. I'll try and get this added to the agenda for the next meeting.
thanks neil. the problem i see before scaling this up is how the access/naming/referral works. a user wants to just say "look this up" and not to have to think about how to choose from some magic list of 42 possible servers. it is not clear to me how this happens. randy
While I have limited experience in PGP infrastructure, I have spent a great deal of time with X.500 & X509 infrastructure (sympathy appreciated).
i watched that and see the parallel.
The key service folk (PGP and anyone IETF-izing the X509 world, and the IPSEC folk for that matter) would be doing a Huge Service to Humanity if they simply *defined* the manner in which key servers will find each other using the DNS.
i am not convinced. the email address space you describe maps well to the dns as it too is hierarchic (in fact is the identical hierarchy:-). the pgp key space is not obviously hierarchic, but rather a non-directed and cyclic graph. so using the dns, e.g. srv rrs, to find a keyserver is not a mapping so obvious that i can see it.

unless you are suggesting that looking for the public key for randy@psg.com should follow the dns hierarchy for psg.com. this forces all key ids to be domain name based, which is not a restriction in pgp. it also does not work in obvious ways for reverse lookup, though i can envision a hack similar to in-addr.arpa (yuck).

randy
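For concreteness, the mapping under debate can be sketched in a few lines. This is purely hypothetical: the `_hkp._tcp` service label is an invented placeholder, not an existing convention, and the sketch only derives the DNS owner name a client would query for SRV records. Note that a key ID that is not an email address has no domain to hang the lookup on, which is exactly the objection raised above.

```python
def keyserver_srv_name(key_id: str) -> str:
    """Map an email-style key ID onto a hypothetical SRV owner name.

    A client would look up SRV records at this name to find the
    keyserver responsible for the domain. Non-email key IDs (which
    PGP permits) have no domain part, so the mapping fails.
    """
    _local, at, domain = key_id.partition("@")
    if not at or not domain:
        raise ValueError(f"key ID {key_id!r} is not domain-based; no DNS mapping")
    return f"_hkp._tcp.{domain}"

print(keyserver_srv_name("randy@psg.com"))  # _hkp._tcp.psg.com
```

An actual deployment would then resolve that name for SRV records and connect to the advertised host and port; the sketch stops at name derivation to stay self-contained.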
On Fri, 30 Jun 2000, Eric M. Carroll wrote:
The Internet has been uniquely successful in introducing a namespace, a hierarchical delegation system, and a root system. We use this system to locate many services. One common one is email service. We use it ubiquitously. No one argues about the "EMail Service Resource Location Protocol". We use the DNS. End of discussion. Other examples exist, such as http. Each has a slightly different way to interface to the DNS, but at least they are defined.
Just to restate here:

Currently, *all* servers serve *all* keys. Unlike an X.500 directory, it is very difficult to segment PGP keys into directories. How would one do this? Using DNS? Which domain would one choose to use for cataloging the keys? (Ex.: My key has multiple email addresses, including quickie.net and pgp.com. Which domain would it be under?)

The theory is that one keyserver (provided it has 100% uptime and 100% reliable synchronization with the rest of the servers) is sufficient for a person using the PGP keyserver network. Each server is assumed to hold the entire world's keys. Multiple servers only exist for redundancy and performance benefits.

Is this the best method? Probably not. There have been numerous proposals for segmenting the public key collection, but none have been favored. Given sufficient drive space, this doesn't seem to be a big problem, however.

Since the keyserver network could be viewed as simply one server, since each is a mirror of the rest, the only thing we need to focus on if we are to use the current model is how to send the user requesting a key to the closest, fastest keyserver. Directory structures don't play into this.

--Len.
We are currently running a globally load balanced network with dedicated servers available in 15 (and rising) locations in the US and Europe. We would be happy to run a number of keyservers on our network.

We are using the Foundry ServerIron's global server load balancing, which uses a TCP syn/ack based round-trip-time metric to direct a client to the "closest" site. Does the key service answer on a specific TCP port?

If this sounds feasible, please point us at info on how to set up a keyserver.

Thanks,

Peter Francis Cerrato
Sr. Network Engineer
SoftAware Networks
On Fri, 30 Jun 2000, Peter Francis wrote:
We are currently running a globally load balanced network with dedicated servers available in 15 (and rising) locations in the US and Europe. We would be happy to run a number of keyservers on our network.
Wonderful!
We are using the Foundry ServerIron's global server load balancing which uses a TCP syn/ack based round trip time metric to direct a client to the "closest" site.
Does the key-service answer on a specific TCP port?
Yes. HKP servers (which use a specialized HTTP connection) generally listen on TCP port 11371. You can look at http://web.mit.edu/marc/www/pks/ for Marc Horowitz's original pksd, at http://www.highware.com/main-oks.html for Highware's OpenKeyServer, or you can go to http://web.mit.edu/network/pgp.html to get NAI's Certserver. (The version there is 2.5.1. There is an upgrade version, 2.5.2, that you will need to patch to: http://www.tis.com/support/hotfix.html.)

NAI's Certificate Server only runs on Solaris and NT, but provides an LDAP and LDAPS interface (389 and 689, respectively, by default). LDAP is a nicer interface for searching keyservers.
If this sounds feasible please point us at info on how to set up a key-server.
It's a generally straightforward procedure. Once you have them up and running, I am sure the folks on the flame.org list will be happy to answer any questions about replication you might have.
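Since HKP is plain HTTP on port 11371, a key lookup is just a GET against the pksd `/pks/lookup` path. A minimal sketch of building such a request URL (the host name is only an example; `op=get` fetches keys, `op=index` lists matches):

```python
from urllib.parse import urlencode

HKP_PORT = 11371  # conventional HKP port, per the message above

def hkp_lookup_url(host: str, search: str, op: str = "get") -> str:
    """Build an HKP lookup URL for a keyserver search."""
    query = urlencode({"op": op, "search": search})
    return f"http://{host}:{HKP_PORT}/pks/lookup?{query}"

print(hkp_lookup_url("certserver.pgp.com", "rabbi@quickie.net"))
```

Issuing that GET returns the matching keys as an ASCII-armored block in the response body, which is why the 4/3 radix-64 overhead shows up in the bandwidth figures earlier in the thread.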
Same here - always read this list and rarely post. This has been one of the more informative threads. On Wed, 28 Jun 2000, Rick Irving wrote:
Randy Bush wrote:
i think we are using the wrong mailing list for this discussion.
Yeah! Don't JAM up NANOG with internet operational issues.
;)
_____
Douglas Denault
doug@safeport.com
Voice: 301-469-8766 Fax: 301-469-0601
"Roeland Meyer (E-mail)" wrote:
L. Sassaman: Tuesday, June 27, 2000 12:43 PM
There are many issues too detailed to go into here that should be formally addressed in regards to keyservers. If the keyserver infrastructure does ramp up, I think that an RFC will be in order. Anyone with the IETF here? Who would I talk to about forming such a WG? Is a keyserver standard within the scope of the IETF?
I get a real good chuckle out of this thread. <g>

1) Randy hisself is a prominent member of the IETF.
2) Having co-chaired a WG, I suspect that Randy may even know how it's done.
3) I'd bet a small amount of change that Randy has already started the wheels in motion, even before he sent the first message.
4) I suspect that this thread exists to measure the level of interest among the major players.
Now for something on-topic: Yes, Internet PKI, in its present state, sucks. Yes, there is a need, but the architecture definitely needs a look-see. Personally, I think it grossly inadequate, and there ain't no way that it can be made as reliable as DNS in its present form. It's basically a poor man's TLS with about half the forethought. Personally, I've been working with X.509 certs as an improvement over basic PGP, but again, the PKI sucks there as well.
But, as a previous poster already brought to the surface, the users must have an interest in this service or NONE of the ISPs will be interested in deployment. The reason that existing PKI sucks is mainly a lack of serious user interest. There are NO production-level PKI servers out there today. None of them will commit to an SLA, and there are too few customers that will pay the required bucks to support a decent SLA for a PKI infrastructure. Build it and they will NOT come, yet.
Yes, but they're likely to come... soon... (perhaps not immediately with the Net-wide request, but in quite various quite relevant contexts).
As usual, this is only an opinion
Mine :) mh
-- Michael Hallgren, http://m.hallgren.free.fr
participants (13)

- Adrian Chadd
- doug@safeport.com
- Eric M. Carroll
- John Fraizer
- L. Sassaman
- Michael Hallgren
- Neil J. McRae
- Peter Francis
- Randy Bush
- Rick Irving
- Roeland M.J. Meyer
- Roeland Meyer (E-mail)
- Shawn McMahon