ARIN Policy on IP-based Web Hosting
Gentle readers who might happen to be using unique IP addresses for your Web hosting customers, or for other virtualized services such as FTP, POP3/IMAP, SSL, etc., you need to be aware of ARIN's recent policy change. Basically, they won't give you addresses anymore. They're accepting comments. A lively discussion has begun, as usual.

Kevin
Date: Tue, 29 Aug 2000 14:46:37 -0400 (EDT)
From: Member Services <memsvcs@arin.net>
To: arin-announce@arin.net, ppml@arin.net
Subject: ARIN Web Hosting Policy
Message-ID: <Pine.SOL.4.05.10008291437410.19186-100000@ops.arin.net>
Sender: owner-arin-announce@arin.net
ARIN's new web hosting policy has recently been under discussion on the ARIN IP allocations policy mailing list. See http://www.arin.net/members/mailing.htm.
The policy is described at
http://www.arin.net/announcements/policy_changes.html
Some individuals have expressed their disagreement with this new policy. Should the ARIN web hosting policy be changed?
ARIN would like your feedback on this issue. Please post your comments and suggestions to the public policy mailing list (ppml@arin.net). Your feedback will be included in the discussions at the upcoming public policy meeting.
Information about the meeting can be found at http://www.arin.net/announcements/memmeet.html
Where did you see stuff about FTP/POP3, etc.? I saw lots of stuff about HTTP, which is to be expected, but I didn't see any references to other services.
It's not uncommon to provide those services as part of the Web hosting bundle. We provide a virtual FTP site to every customer who has our virtual Web hosting service. Many of them rely on it; anonymous FTP sites are still quite popular and important to many users. We also provide virtual POP3, virtual IMAP, etc. I know that other providers even go so far as to virtualize Telnet, finger, and probably others. Once you need IPs to virtualize one useful service, you may as well virtualize the rest. There are lots of issues the policy doesn't even begin to consider.

Kevin
On Tue, 29 Aug 2000 sigma@pair.com wrote:
ARIN's site says:

"Where security is a concern, name-based hosting is capable of supporting the transmission of sensitive materials with some servers. ... When an ISP submits a request for IP address space, ARIN will not accept IP-based webhosting as justification for an allocation, unless an exception is warranted. Along with the request, organizations must provide appropriate details demonstrating their virtual webhosting customer base. Exceptions may be made for ISPs that provide justification for requiring static addresses. ARIN will determine, on a case-by-case basis, whether an exception is appropriate."

Unless something's changed recently, SSL still requires IP-based virtual hosting. Here's a clipping from the c2.net Stronghold FAQ:

"Should I use name-based or IP-based virtual hosts? Name-based virtual hosts do not work with SSL because certificates are sent before server names are established. Secure virtual hosts must be either IP-based or port-based. IP-based virtual hosts are more convenient, as users would have to remember the port numbers for port-based virtual hosts."

ARIN's new policy looks kind of vague to me. I can read it and conclude that if I were starting a web hosting company today and wanted to use "I'm hosting a few thousand web sites, but only have a few dozen actual servers/routers/etc." as justification, I'm not going to qualify for an allocation. But if I already have a big chunk of space allocated by ARIN, the next time I apply for more space, will they look at my IP usage and say, "We think you should reuse all those /24's you burned up on web hosting and then come back to us for more space"? That would really suck.

----------------------------------------------------------------------
Jon Lewis *jlewis@lewis.org* | I route, therefore you are
System Administrator, Atlantic Net
http://www.lewis.org/~jlewis/pgp for PGP public key
On Tue, Aug 29, 2000 at 06:43:30PM -0400, jlewis@lewis.org wrote:
Unless something's changed recently, SSL still requires IP based virtual hosting. Here's a clipping from the c2.net Stronghold FAQ:
Should I use name-based or IP-based virtual hosts?
Name-based virtual hosts do not work with SSL because certificates are sent before server names are established. Secure virtual hosts must be either IP-based or port-based. IP-based virtual hosts are more convenient, as users would have to remember the port numbers for port-based virtual hosts.
Nothing has changed. There is still a chicken-and-egg relationship in trying to do name-based virtual hosts with SSL. You have to know which certificate to present based on the name... and... you don't know the name until the certificate exchange is complete.

Speaking as an application provider who _has_ to have independent sites running SSL per customer, I still need a 1:1 relationship between IPs and hosts.

ARIN needs to take a hit off the clue-pipe before coming down with such a far-right policy.

--
Bill Fumerola - Network Architect, BOFH / Chimes, Inc.
billf@chimesnet.com / billf@FreeBSD.org
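The ordering Bill describes is visible in the bytes on the wire. Below is a minimal Python sketch (hostnames are invented): with plain HTTP, the first thing the server reads names the site, so one IP can serve many names; with the SSL of the day, the certificate must be presented before any of those bytes arrive, so the destination IP is all the server has to choose by.

```python
# With plain HTTP, the very first bytes the server reads carry the
# site name in the Host: header, so the server can dispatch by name.
def pick_vhost(raw_request: bytes, default: str = "default.site") -> str:
    """Return the vhost named by the Host: header of a raw HTTP/1.x request."""
    for line in raw_request.split(b"\r\n")[1:]:   # skip the request line
        if not line:                              # blank line ends the headers
            break
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"host":
            return value.strip().decode("ascii")
    return default

request = (b"GET /index.html HTTP/1.1\r\n"
           b"Host: www.naughtybits.dom\r\n"
           b"\r\n")
print(pick_vhost(request))   # -> www.naughtybits.dom

# With HTTPS the server never gets this far before it must commit: the
# SSL handshake, certificate included, completes before any application
# bytes arrive, so the certificate has to be chosen from the destination
# IP address alone. Hence one IP per SSL site.
```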
It's a far-*left* policy - "We're ARIN, and we know how best everyone's resources should be allocated." A far-right policy would be "Here are these IPs you've requested; use them as you will, but don't come whining back to us for more because you underestimated your initial request." That would be far preferable.

The SSL issue is a real one, and I don't know how to get around it. One would assume that it would qualify as an 'exception'; however, how are they going to verify what you're using the addresses for? Are they going to nmap your networks to see if you're really running SSL on the IPs you've requested?

--
Roland Dobbins <rdobbins@netmore.net> // 818.535.5024 voice
Roland Dobbins wrote:
The SSL issue is a real one, and I don't know how to get around it. One would assume that this would qualify as an 'exception'; however, how are they going to verify what you're using them for? Are they going to nmap your networks to see if you're really running SSL on the IPs you've requested?
It's irrelevant. I wouldn't mind using name-based hosting, but I have seen some issues with Apache where it doesn't always serve up the correct site. I haven't found a good way to do it at all with IIS, since you must identify an IIS-hosted site by IP address. (Please, someone let me know if there's a way to do name-based hosting with IIS - I'd like to do it!)

--
North Shore Technologies, Cleveland, OH http://NorthShoreTechnologies.net
Steve Sobol, BOFH - President, Chief Website Architect and Janitor
Linux Instructor, PC/LAN Program, Natl. Institute of Technology, Akron, OH
sjsobol@NorthShoreTechnologies.net - 888.480.4NET - 216.619.2NET
On Thu, 31 Aug 2000, Steve Sobol wrote:
It's irrelevant. I wouldn't mind using name-based hosting, but I have seen some issues with Apache where it doesn't always serve up the correct site.
Never seen that happen, with a bunch of servers each running up to 4000 NameVirtualHosts under Apache 1.3 on Linux.
I haven't found a good way to do it at all with IIS, since you must identify an IIS-hosted site by IP address. (Please, someone let me know if there's a way to do name-based hosting with IIS - I'd like to do it!)
The only kind of hosting we do here is Host: header based virtual hosting, and by doing that we manage to cram several thousand web sites into a handful of IP addresses. I was under the impression that conserving IPv4 address space was a Good Thing (tm). RIPE have had this same ruling for quite some time, unless I'm mistaken, and contrary to popular belief they do still make allocations to people who ask for them!

I've no idea personally what's required to get IIS to do Host: header based vhosting, but I can probably get one of our NT bods here to come up with some info if prodded (Steve, drop a mail my way if you still need to know...)

We also host SSL, and have to use one IP per site for that. A perfectly reasonable exception.

As for monitoring traffic used by web sites, we don't bother using IP-based methods there either - Apache generates these cunning things called log files which have details of traffic going via the HTTP daemon. Cunning, huh? ;)

It's been a long time since I've seen so many people get so upset over what strikes me as an obvious step, one which others took a long time ago and which makes room for worthy exceptions. I'm producing a run of "We Fear Change" T-shirts - if anyone wants one, let me know off-list.

--
Patrick Evans - Sysadmin, bran addict and couch potato
pre at pre dot org / www.pre.org/pre
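Patrick's point about log files works because the HTTP daemon already knows which virtual host served each request. A hedged sketch of per-site traffic accounting, assuming Apache's combined log format prefixed with the vhost name (the `%v` LogFormat directive); the sample lines are invented:

```python
from collections import defaultdict

def bytes_per_vhost(lines):
    """Sum response-body bytes (field 11) per virtual host (field 1)."""
    totals = defaultdict(int)
    for line in lines:
        fields = line.split()
        if len(fields) < 11:
            continue                    # not a well-formed log line; skip it
        vhost, nbytes = fields[0], fields[10]
        if nbytes.isdigit():            # "%b" logs "-" when no body was sent
            totals[vhost] += int(nbytes)
    return dict(totals)

log = [
    'www.foo.bar 10.1.2.3 - - [30/Aug/2000:12:00:00 +0000] "GET / HTTP/1.0" 200 5120',
    'www.foo.bar 10.1.2.4 - - [30/Aug/2000:12:00:05 +0000] "GET /a HTTP/1.0" 304 -',
    'www.baz.qux 10.1.2.5 - - [30/Aug/2000:12:00:09 +0000] "GET / HTTP/1.0" 200 700',
]
print(bytes_per_vhost(log))   # {'www.foo.bar': 5120, 'www.baz.qux': 700}
```

No per-IP counters needed: the vhost name in each line is what the accounting keys on.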
On Tue, 29 Aug 2000, Roland Dobbins wrote:
The SSL issue is a real one, and I don't know how to get around it.
It's not the only one. I happen to work for a hosting ISP and we have had many discussions on this one. None of my colleagues have been able to give me the solution to the following issue.

As we all know, name-based hosting puts many sites on the same IP. Using the IP of a site can be a form of security checking. Let me explain: imagine we have a site on www.foo.bar with the IP 10.0.0.1. Now we have a customer that wants to put sensitive information on www.foo.bar/customersite. This information would be far more secure if the customer would link to http://10.0.0.1/customersite instead of http://www.foo.bar/customersite. Why? It's simple; imagine some lamer writing a trojan that changes your /etc/hosts or C:\windows\hosts file... It happened to a bank in The Netherlands about 2 years ago (published in the magazine Computer Idee, for the Dutch readers).

I think name-based hosting will be unavoidable in the next few months, but every hosting company should be given enough IPs to offer IP-based hosting too, even if it's not going to be the standard package. And then we are not even talking about those nice PTR records for a host, which (I admit) are purely cosmetic, but I think having a PTR for your A record will be a way of being "cool" once name-based hosting becomes the usual way of doing business.

--
Sabri Berisha `~*-[vuurwerk internet]
bofv-ripe@whois.ripe.net / Linux / FreeBSD scriptkiddo hoping-to-be-CCNP-soon
my personal opinion yadda yadda
From the ARIN site:
"The name-based system of virtual webhosting used by many ISPs today allows multiple domains to be hosted by a single IP address. While some organizations use IP-based webhosting to, in part, justify their requests for IP space, ARIN will no longer accept IP-based hosting as justification for an allocation unless an exception is warranted. The ARIN Instructions for Using Name-based Virtual Webhosting may be a helpful tool in setting up, converting to, and using name-based hosting." Name-based virtual hosting does not work in many, MANY cases. Beyond this, if you have multiple customers on a single IP address and one of them is an idiot and spamvertizes their website, several providers have a policy of nullrouting the /32. Now, not only does it kill a single site but, potentially hundreds! --- John Fraizer EnterZone, Inc
Name-based virtual hosting does not work in many, MANY cases.
And it doesn't work for POP3 at all. If you give your customers their own pop3 server, you will need to bind to a different IP for each customer. I don't know of any way around that. Same goes for ftp as far as I know. -joe
2000-08-29-21:25:09 Joseph McDonald:
Name-based virtual hosting does not work in many, MANY cases.
And it doesn't work for POP3 at all.
It can. Just give your users POP logins of the form user@domain.name.
If you give your customers their own pop3 server, you will need to bind to a different IP for each customer. I don't know of any way around that.
Don't give them each their own pop3 server, just give them distinct accounts per virtual domain on the same pop3 server.
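The scheme Bennett describes can be sketched in a few lines. The spool layout `/var/mail/<domain>/<user>` below is an assumption for illustration, not any particular POP3 server's convention: the point is only that the domain travels inside the login, so one server on one IP can serve every virtual domain.

```python
import os.path

# One POP3 server, many virtual domains: the login "user@domain.name"
# carries the domain, so no per-customer IP address is needed.
def mailbox_for(login: str, spool: str = "/var/mail") -> str:
    """Map a virtual-domain POP3 login to a mailbox path (illustrative layout)."""
    user, sep, domain = login.partition("@")
    if not (sep and user and domain):
        raise ValueError("expected login of the form user@domain.name")
    if "/" in login or ".." in login:     # refuse path tricks in the login
        raise ValueError("suspicious characters in login")
    return os.path.join(spool, domain.lower(), user)

print(mailbox_for("alice@foo.bar"))   # /var/mail/foo.bar/alice
```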
Same goes for ftp as far as I know.
ftp can't be name-virtual-hosted. It is also such a wretched protocol that it urgently needs to be retired in all settings for all purposes.

The only real excuse I'd argue for keeping IP virtual hosts is https --- but as there's no chance of a secure replacement for HTTP that works with name virtual hosts getting deployed any time soon, and as the last legal barrier to universal deployment of https is falling in just a month, I think ARIN has picked a remarkably unfortunate time to launch this crusade.

If they'd done it a couple of years ago, maybe it would have nudged some of the folks who just never bothered to learn how to configure name virtual hosts into shifting a bit, and possibly that could have provided motivation for designing something better than the current https, e.g. a TLS negotiation within http, and maybe we could be approaching the point where such an improved client might be widely enough available. But now there's no helping it: IP virtual hosts are the primary webserving product for the next while anyway.

-Bennett
Name-based virtual hosting does not work in many, MANY cases.
And it doesn't work for POP3 at all.
It can. Just give your users POP logins of the form user@domain.name.
Just bear in mind the restriction in the POP3 RFC (1725) that limits arguments to commands to 40 characters. Also, if you have a lot of clients using legacy software, you will find that some old versions of software won't allow usernames including an '@' character, and other software enforces length limits of less than 40 characters. --- Andrew McNamara (System Architect) connect.com.au Pty Ltd Lvl 3, 213 Miller St, North Sydney, NSW 2060, Australia Phone: +61 2 9409 2117, Fax: +61 2 9409 2111
On Wed, Aug 30, 2000 at 02:26:20PM +1100, Andrew McNamara wrote:
Name-based virtual hosting does not work in many, MANY cases.
And it doesn't work for POP3 at all.
It can. Just give your users POP logins of the form user@domain.name.
Just bear in mind the restriction in the POP3 RFC (1725) that limits arguments to commands to 40 characters.
RFC1725 is obsoleted by 1939, which indeed has the same limit. 1939, however, is updated by 2449, which moves the limit to 255 characters for a POP3 command line, leaving 248 chars for the username (a little less with APOP).
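The arithmetic behind those limits is easy to encode. A small sketch, using the figures cited above (5 octets for `USER `, 2 for the terminating CRLF, 255 octets total under the RFC 2449 extension, 40 octets per argument under RFC 1939):

```python
CRLF_LEN = 2  # the command line's terminating \r\n

def username_fits(name: str, rfc2449: bool = True) -> bool:
    """Check a POP3 username against the command-length limits discussed above."""
    if rfc2449:
        # RFC 2449: 255 octets for the whole command line, CRLF included.
        # "USER " is 5 octets, so 255 - 5 - 2 = 248 octets remain for the name.
        return len("USER ") + len(name) + CRLF_LEN <= 255
    # RFC 1939: 40 octets per command argument.
    return len(name) <= 40

print(username_fits("a" * 248))                                # True  (exactly 255)
print(username_fits("a" * 249))                                # False
print(username_fits("user@long.example.domain.name", rfc2449=False))  # True (29 chars)
```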
Also, if you have a lot of clients using legacy software, you will find that some old versions of software won't allow usernames including an '@' character, and other software enforces length limits of less than 40 characters.
Anybody doing user@domain.name logins with POP3 is smart enough to support separators other than @. Even current versions of Netscape strip off everything from the @ onward, because it assumes the user mistakenly entered his/her complete e-mail address instead of just a username.

40 characters also shouldn't be a real problem even if enforced; most e-mail addresses fit under 40 chars. I have never heard of anybody running into problems because of length enforcement on POP3 usernames, and I am on several mailing lists relating to the subject (where other people use this trick too).

Greetz, Peter.

--
[ircoper] petervd@vuurwerk.nl - Peter van Dijk / Hardbeat
[student] Undernet:#groningen/wallops | IRCnet:/#alliance
[developer] _____________ [disbeliever - the world is backwards] (__VuurWerk__(--*-
On Tue, Aug 29, 2000 at 10:43:08PM -0400, Bennett Todd wrote: [snip]
Same goes for ftp as far as I know.
ftp can't be name-virtual-hosted. It is also such a wretched protocol that it urgently needs to be retired in all settings for all purposes.
Theoretically, you could do the same with ftp as with pop3 - use usernames like 'user@domain.com'. Greetz, Peter. -- [ircoper] petervd@vuurwerk.nl - Peter van Dijk / Hardbeat [student] Undernet:#groningen/wallops | IRCnet:/#alliance [developer] _____________ [disbeliever - the world is backwards] (__VuurWerk__(--*-
Theoretically, you could do the same with ftp as with pop3 - use usernames like 'user@domain.com'.
Not for anonymous FTP. The username provided by every FTP client out there is "ftp" or "anonymous", nothing more.

FTP is by no means wretched. It's still in widespread use, and with good reason.

Kevin
On Wed, 30 Aug 2000 sigma@pair.com wrote:
Theoretically, you could do the same with ftp as with pop3 - use usernames like 'user@domain.com'.
Not for anonymous FTP. The username provided by every FTP client out there is "ftp" or "anonymous", nothing more.
FTP is by no means wretched. It's still in widespread use, and with good reason.
On the contrary, it is wretched. It sends secrets in clear text. You can do nifty things like bounce port probes through some FTP daemons because of the separate data channel (a holdover from the days of NCP, before TCP). FTPD is easy to configure incorrectly, severely compromising the security of the system running it (this is more admin error, I know). I'm sure there are others who can add to the list.

With that said, I'd like to point out that discussing the merits of the File Transfer Protocol is only vaguely operational, before someone else does. I'll be more than happy to continue this conversation off-list, though.

--
Joseph W. Shaw - jshaw@insync.net
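The port-bounce problem Joe mentions falls straight out of the PORT command's syntax (RFC 959): the argument names an arbitrary IP address and port for the data connection, and nothing in the protocol ties that address to the client that sent the command. A sketch of the decoding:

```python
def parse_port_cmd(arg: str) -> tuple[str, int]:
    """Decode the 'h1,h2,h3,h4,p1,p2' argument of an FTP PORT command (RFC 959)."""
    parts = [int(p) for p in arg.split(",")]
    if len(parts) != 6 or not all(0 <= p <= 255 for p in parts):
        raise ValueError("malformed PORT argument")
    host = ".".join(str(p) for p in parts[:4])   # first four octets: the IP
    port = parts[4] * 256 + parts[5]             # last two: high/low port bytes
    return host, port

# Nothing requires this to be the client's own address -- a client can
# point the server's data connection at some third party's port:
print(parse_port_cmd("10,0,0,99,0,25"))   # ('10.0.0.99', 25)
```

Servers that blindly honor such a PORT become proxies for port probes, which is the bounce attack in a nutshell.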
2000-08-30-04:01:32 Peter van Dijk:
On Tue, Aug 29, 2000 at 10:43:08PM -0400, Bennett Todd wrote:
ftp can't be name-virtual-hosted. It is also such a wretched protocol that it urgently needs to be retired in all settings for all purposes.
Theoretically, you could do the same with ftp as with pop3 - use usernames like 'user@domain.com'.
That doesn't help the perceived need among folks who insist on supporting ftp; if they are selling a website www.naughtybits.dom then they want to have a related ftp server ftp.naughtybits.dom. As there's no way for the ftp server to tell which hostname the client used to reach the IP address, name virtual hosting doesn't make this possible.

The big difference between pop (where logging in with name@example.dom works fine for name virtual hosting) and ftp is that pop is a private service, whereas folks like to use ftp as a public service without authentication.

I'd still put little weight on folks advocating and encouraging the use of ftp, but the same point can [sadly] be made much more effectively with https. IP virtual hosts are where it's at for the time being.

-Bennett
Bennett Todd (bet@rahul.net) wrote:
And let's not forget traffic accounting as well. ;-)

--
Ron Rosson                ... and a UNIX user said ...
The InSaNe One                rm -rf *
insane@oneinsane.net      and all was /dev/null and *void()
Guns don't kill people, loony professors kill people.
On Wed, 30 Aug 2000, Ron 'The InSaNe One' Rosson wrote:
And lets not forget traffic accounting as well. ;-)
I think I remember Top Layer saying they dig deep enough into the packet payload that they can still do flow accounting on name-based transactions. However, I may be totally wrong on that. Cisco netflow is a different story. __ Joseph W. Shaw - jshaw@insync.net
On Wed, Aug 30, 2000 at 10:01:32AM +0200, Peter van Dijk wrote:
Theoretically, you could do the same with ftp as with pop3 - use usernames like 'user@domain.com'.
Practically, however, you can't expect that to work with all the many programs that automatically handle anonymous logins; it wouldn't be practical to hack all those clients to do "anonymous@foo.bar" where foo.bar is some arbitrary portion of the address typed.
Bennett;
Same goes for ftp as far as I know.
ftp can't be name-virtual-hosted. It is also such a wretched protocol that it urgently needs to be retired in all settings for all purposes.
The only real excuse I'd argue for keeping IP virtual hosts is
Excuse? Why?

I'm afraid some of you, including ARIN, are assuming that IPv4 address space will last forever if ARIN allocates the space cautiously. But IPv4 address space will be used up sooner or later, certainly before anonymous ftp becomes obsolete, and perhaps a lot sooner than most of you expect. Note that there is no requirement to preserve IPv4 address space forever; the only requirement is to preserve it until we are ready for IPv6.

However, the effort not to allocate enough IPv4 address space to satisfy ISP requirements makes name virtual hosts and NAT popular, which then lets people think IPv4 address space will last forever, which motivates ISPs to delay the deployment of IPv6. So, when we really use up the IPv4 address space, ISPs will not be ready for IPv6.

The only reasonable solution to the problem, it seems to me, is to assign a lot of IPv4 address space to good ISPs (good meaning various things, including that they are ready for IPv6) and let all the ISPs realize the space will be used up soon.

Masataka Ohta
On Tue, 29 Aug 2000, Joseph McDonald wrote:
Name-based virtual hosting does not work in many, MANY cases.
And it doesn't work for POP3 at all. If you give your customers their own pop3 server, you will need to bind to a different IP for each customer. I don't know of any way around that.
Same goes for ftp as far as I know.
It has been argued by people far smarter than I that FTP needs to disappear, and I tend to agree with them. Not that I think that file transfers should no longer take place, but FTP has certainly outlived its usefulness in today's networks and should be replaced with something a bit more robust/secure. I think SCP is certainly a start in the right direction. With a few modifications, it could certainly be an adequate drop-in replacement.

--
Joseph W. Shaw - jshaw@insync.net
The problem is that SCP is several orders of magnitude slower than FTP. I use scp, rsync (on top of ssh), NFS, and several other methods of moving files around, and ftp blows them all away.

You also need to build an ftp-like structure on top of it, i.e. letting me pick the files I want instead of having to know the filenames in advance.

Until this happens, I can see no viable alternative to FTP. I wouldn't be unhappy to see one, but we can't retire it without a good replacement that performs nearly as well.

Jason

---
Jason Slagle - CCNA - CCDA
Network Administrator - Toledo Internet Access - Toledo Ohio
raistlin@tacorp.net - jslagle@toledolink.com - WHOIS JS10172

-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GE d-- s:+ a-- C++ UL+++ P--- L+++ E- W- N+ o-- K- w--- O M- V PS+ PE+++
Y+ PGP t+ 5 X+ R tv+ b+ DI+ D G e+ h! r++ y+
------END GEEK CODE BLOCK------

On Wed, 30 Aug 2000, Joe Shaw wrote:
It has been argued by people far smarter than I that FTP needs to disappear, and I tend to agree with them. Not that I think that file transfers should no longer take place, but FTP has certainly outlived it's usefulness in today's networks and should be replaced with something a bit more robust/secure. I think SCP is certainly a start in the right direction. With a few modifications, it could certainly be an adequate drop-in replacement.
On Thu, 31 Aug 2000, Jason Slagle wrote:
The problem is that SCP is several orders of magnitude slower then FTP. I use scp, rsync (on top of ssh), nfs, and several other methods of moving files around, and ftp blows them all away.
You also need to build a ftp like structure on top of it. ie: I pick the files I want instead of having to know the filenames.
Until this happens, I can see no viable alternative to FTP.
HTTP, perchance? The only things missing are a machine-parsable file indexing method (which would be easy enough to standardize on if someone felt the need to do so; think a "text/directory" MIME type, which would benefit more than just HTTP, or use a multipart list of URLs), and server-to-server transfers coordinated from your client, which most people have disabled anyway for security reasons.

But you get the added benefit of MIME typing, human-beneficial markup, caching if you have a nearby cache, inherent firewall friendliness (no data connection foolishness), and simple negotiation of encrypted transfers (SSL). And for command-line people like myself, there's lynx, w3m, and wget.

FTP is disturbingly behind on features, some of which (decent certificate authentication, full-transaction encryption, data type labelling, and cache usefulness) are becoming more important today. Either the FTP RFC needs a near-complete overhaul, or the HTTP and other RFCs need to be updated to include the missing functionality.

--
Edward S. Marshall <emarshal@logic.net> http://www.nyx.net/~emarshal/
[ Felix qui potuit rerum cognoscere causas. ]
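The "machine-parsable file indexing method" Edward wants doesn't exist as a standard, but a sketch shows how little would be needed: serve a plain-text index, one entry per line, that a client could fetch over HTTP instead of screen-scraping FTP's LIST output. The tab-separated line format below is invented for illustration only.

```python
import os
import tempfile
from urllib.parse import quote

def text_index(directory: str) -> str:
    """Build a machine-parsable directory index: name<TAB>size<TAB>kind per line."""
    lines = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        kind = "dir" if os.path.isdir(path) else "file"
        size = os.path.getsize(path) if kind == "file" else 0
        lines.append(f"{quote(name)}\t{size}\t{kind}")   # URL-escape the name
    return "\n".join(lines) + "\n"

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "readme.txt"), "w") as f:
        f.write("hello")
    os.mkdir(os.path.join(d, "pub"))
    print(text_index(d))
    # prints two tab-separated lines: "pub 0 dir" and "readme.txt 5 file"
```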
Until this happens, I can see no viable alternative to FTP.
HTTP, perchance? The only things missing are a machine-parsable file indexing method (which would be easy enough to standardize on if someone felt the need to do so; think a "text/directory" MIME type, which would benefit more than just HTTP, or use a multipart list of URLs), and server-to-server transfers coordinated from your client, which most people have disabled anyway for security reasons.
But, you get the added benefit of MIME typing, human-beneficial markup, caching if you have a nearby cache, inherent firewall friendliness (no data connection foolishness), and simple negotiation of encrypted transfers (SSL). And for command-line people like myself, there's lynx, w3m, and wget.
http is a good idea, but...

"mime typing"? i don't want a program that's gonna tell me what i have to do with my data, or with which program i will have to open it later. my data belongs in a file, exactly as i requested it. with the appropriate line-termination, of course, which http doesn't do.

"human-beneficial markup"? you just said we need a "machine-parsable file indexing method". what do we need humans for?

caching usually gets in the way.

"no data connection foolishness" translates to no way to abort a transfer other than by dropping the connection, reconnecting, and exchanging authenticators again. highly inefficient.
FTP is disturbingly behind on features, some of which (decent certificate authentication, full-transaction encryption, data type labelling, and cache usefulness) are becoming more important today. Either the FTP RFC needs a near-complete overhaul, or the HTTP and other RFCs need to be updated to include the missing functionality.
two things would improve ftp: some sort of tls for the control channel (and maybe the data channel as well), and kernel support for the data channel. all i mean by this is the ftpd opens the file to be sent and the socket to which the data needs to be written, and passes them both to the kernel saying "splice these together, would you?" no more kernel-to-userland-and-back copies, the kernel will *know* when the receiver can take more data, and the ftpd can "abor" whenever it needs to by closing the data socket. this feature would, of course, be mutually exclusive with the optional encryption.

--
|-----< "CODE WARRIOR" >-----|
codewarrior@daemon.org * "ah! i see you have the internet that goes *ping*!"
twofsonet@graffiti.com (Andrew Brown)
andrew@crossbar.com * "information is power -- share the wealth."
On Thu, Aug 31, 2000 at 09:34:43AM -0400, Andrew Brown wrote: [snip]
two things would improve ftp: some sort of tls for the control channel (and maybe the data channel as well), and kernel support for the data channel. all i mean by this is the ftpd opens the file to be sent, the socket to which the data needs to be read, and passes them both to the kernel saying "splice these together, would you?" no more kernel- to-userland-and-back copies, the kernel will *know* when the receiver can take more data, and the ftpd can "abor" whenever it needs to by closing the data socket. this feature would, of course, be mutually exclusive with the optional encryption.
That feature has been present in Linux and FreeBSD for some time now. It's called 'sendfile'. Greetz, Peter. -- [ircoper] petervd@vuurwerk.nl - Peter van Dijk / Hardbeat [student] Undernet:#groningen/wallops | IRCnet:/#alliance [developer] _____________ [disbeliever - the world is backwards] (__VuurWerk__(--*-
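Peter's `sendfile` point can be sketched in a few lines, here using Python's `os.sendfile` wrapper for convenience (a toy, not what any real ftpd looks like; the socketpair stands in for the FTP data connection):

```python
import os
import socket
import tempfile

# A file standing in for the one the ftpd would serve.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello, zero-copy world\n" * 100)
    path = f.name

# The socketpair stands in for the FTP data connection.
out_sock, in_sock = socket.socketpair()

with open(path, "rb") as src:
    size = os.fstat(src.fileno()).st_size
    sent = 0
    while sent < size:
        # The kernel splices file -> socket; userland never touches the bytes.
        sent += os.sendfile(out_sock.fileno(), src.fileno(), sent, size - sent)
out_sock.close()  # closing the data socket is the "abor"

received = bytearray()
while True:
    chunk = in_sock.recv(4096)
    if not chunk:
        break
    received.extend(chunk)
in_sock.close()
os.unlink(path)
```

As Andrew notes, this only works on a plaintext data channel: once the stream is encrypted, the bytes must pass through userland to be transformed.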
Also sprach Andrew Brown
http is a good idea, but...
"mime typing"? i don't want a program that's gonna tell me what i have to do with my data, or with whihc program i will have to open it later.
Where on earth did you get the idea that mime typing requires all that? The mime type is just one side telling the other side what it thinks is in the file (and giving some other nice little benefits like encoding transformations and stuff). What you do with the file once you get it doesn't have anything to do with the mime type unless *you* configure your program to pay attention to the mime type and do something with it. If you want to tell your program to open a postscript file in RealMedia Player...so be it, you can do that...not to much effect I wouldn't think, but that's totally up to you to do it. If you want to set your program so every mime type just dumps out to a file, you can do that too.
my data belongs in a file, exactly as i requested it. with the appropriate line-termination, of course, which http doesn't do.
Again, conversion of line termination is controlled by the end system, not the protocol itself. FTP happens to have defined in the RFCs how it should be handled...there's no reason you can't do the same process with http-received files.
""human-beneficial markup"? you just said we need a "machine-parsable file indexing method". what do we need humans for?
Because, while I sometimes write programs/scripts that go out and parse directories and get the files I want, I *very* often manually go in and get individual files that I desire...any setup like this needs to support both.
caching usually gets in the way.
Only if done wrong (which, regrettably, is most of the time from what I've seen) -- Jeff McAdams Email: jeffm@iglou.com Head Network Administrator Voice: (502) 966-3848 IgLou Internet Services (800) 436-4456
http is a good idea, but...
"mime typing"? i don't want a program that's gonna tell me what i have to do with my data, or with whihc program i will have to open it later.
Where on earth did you get the idea that mime typing requires all that?
not that it requires it, but that it usually happens. if i get a file via http with a mime type that the browser "knows what to do with", i usually don't get a choice. likewise if i get a file via email with an octet-stream mime type, most readers won't show it to me by default.
The mime type is just one side telling the other side what it thinks is in the file (and giving some other nice little benefits like encoding transformations and stuff). What you do with the file once you get it doesn't have anything to do with the mime type unless *you* configure your program to pay attention to the mime type and do something with it. If you want to tell your program to open a postscript file in RealMedia Player...so be it, you can do that...not to much effect I wouldn't think, but that's totally up to you to do it. If you want to set your program so every mime type just dumps out to a file, you can do that too.
the mime type is made up, usually based on the file's extension, which is, of course, passed along with the contents of the file when you transfer it. it's no extra information in this context.
my data belongs in a file, exactly as i requested it. with the appropriate line-termination, of course, which http doesn't do.
Again, conversion of line termination is controlled by the end system, not the protocol itself. FTP happens to have defined in the RFCs how it should be handled...there's no reason you can't do the same process with http-received files.
when you initiate an "ascii" mode transfer, the remote ftpd translates the file from local line termination to network-oriented line termination (i.e., cr, crlf, or lf becomes crlf). your ftp client then translates back to local line termination from network line termination. this is how you can transfer files between windows machines, macs, and unix boxes, and always end up with something you can read locally. http doesn't do this.
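The translation Andrew describes is simple enough to sketch (hypothetical helper names; the real logic lives inside the ftpd and the ftp client):

```python
def to_network(data: bytes) -> bytes:
    """Local line endings (CR, LF, or CRLF) -> network CRLF, as in FTP ASCII mode."""
    return (data.replace(b"\r\n", b"\n")   # normalize CRLF first...
                .replace(b"\r", b"\n")     # ...then bare CR (old Mac style)...
                .replace(b"\n", b"\r\n"))  # ...then emit network CRLF

def from_network(data: bytes, local_eol: bytes = b"\n") -> bytes:
    """Network CRLF -> whatever the local system uses."""
    return data.replace(b"\r\n", local_eol)

# A Mac-style (CR) file arrives readable on a Unix (LF) box:
mac_file = b"line one\rline two\r"
print(from_network(to_network(mac_file)))  # b'line one\nline two\n'
```

Nothing prevents an HTTP client from doing the same; the thread's point is that FTP's RFC mandates it while HTTP's does not.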
On Thu, 31 Aug 2000, Andrew Brown wrote:
the mime type is made up, usually based on the file's extension, which is, of course, passed along with the contents of the file when you transfer it. it's no extra information in this context.
What's a file extension? -- Alex Kamantauskas alexk@tugger.net
the mime type is made up, usually based on the file's extension, which is, of course, passed along with the contents of the file when you transfer it. it's no extra information in this context.
What's a file extension?
it's the last alphanumeric token in a string delimited by dots. for example: .com means internet/commercial .net means internet/networking .org means internet/non-prof-org .gov means internet/us-fed-gov .edu means internet/four-year-degree-granting-educational-institution oh...never mind. :)
Andrew Brown wrote:
the mime type is made up, usually based on the file's extension, which is, of course, passed along with the contents of the file when you transfer it. it's no extra information in this context.
What's a file extension?
it's the last alphanumeric token in a string delimited by dots.
And it doesn't exist anywhere except Windows. -- North Shore Technologies, Cleveland, OH http://NorthShoreTechnologies.net Steve Sobol, BOFH - President, Chief Website Architect and Janitor Linux Instructor, PC/LAN Program, Natl. Institute of Technology, Akron, OH sjsobol@NorthShoreTechnologies.net - 888.480.4NET - 216.619.2NET
On Thu, Aug 31, 2000 at 12:08:06PM -0400, Steve Sobol wrote:
Andrew Brown wrote:
the mime type is made up, usually based on the file's extension, which is, of course, passed along with the contents of the file when you transfer it. it's no extra information in this context.
What's a file extension?
it's the last alphanumeric token in a string delimited by dots.
And it doesn't exist anywhere except Windows.
untrue. just think how crippled make(1) would be without extensions. -- [ Jim Mercer jim@reptiles.org +1 416 410-5633 ] [ Reptilian Research -- Longer Life through Colder Blood ] [ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. ]
Jim Mercer wrote:
it's the last alphanumeric token in a string delimited by dots.
And it doesn't exist anywhere except Windows.
untrue.
just think how crippled make(1) would be without extensions.
DOS and Windows do NOT consider the extension to be part of the filename. In DOS, make.exe is made up of the filename "make", and the extension "exe". In Unix and MacOS, "make.exe" represents a filename "make.exe". The concept of the file extension doesn't exist as it does in DOS. Now that I think about it, though, VMS has file extensions, doesn't it? (Been a while since I last used VMS.) I don't know if the extensions are treated the way DOS and Windows treat extensions, though.
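The distinction shows up in how Unix-side tooling treats the dot: the name is one opaque string, and an "extension" is just whatever a given tool chooses to read off the end (illustrated here with Python's `os.path.splitext`):

```python
import os.path

# Unix stores one opaque filename; the split below is a convention, not filesystem metadata.
print(os.path.splitext("make.exe"))        # ('make', '.exe')
print(os.path.splitext("archive.tar.gz"))  # ('archive.tar', '.gz') -- only the last dot counts
print(os.path.splitext("Makefile"))        # ('Makefile', '') -- no dot, no extension
```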
Steve Sobol wrote:
Jim Mercer wrote:
it's the last alphanumeric token in a string delimited by dots.
And it doesn't exist anywhere except Windows.
untrue.
just think how crippled make(1) would be without extensions.
DOS and Windows do NOT consider the extension to be part of the filename. In DOS, make.exe is made up of the filename "make", and the extension "exe". In Unix and MacOS, "make.exe" represents a filename "make.exe". The concept of the file extension doesn't exist as it does in DOS.
Now that I think about it, though, VMS has file extensions, doesn't it? (Been a while since I last used VMS.) I don't know if the extensions are treated the way DOS and Windows treats extensions, though.
In VMS (which is what DOS is based on AFAIK) the file name is name.ext;N (where N is the version number, which is incremented each time you save the file). Made for very easy, simple-minded configuration control, so that foo.exe;23 is the 23rd executable of the program foo.c
-- Regards Marshall Eubanks T.M. Eubanks Multicast Technologies, Inc 10301 Democracy Lane, Suite 410 Fairfax, Virginia 22030 Phone : 703-293-9624 Fax : 703-293-9609 e-mail : tme@on-the-i.com tme@multicasttech.com http://www.on-the-i.com http://www.buzzwaves.com
Hello Marshall, Um, quite the contrary: DOS was based on CP/M, which in turn was based on RT-11. There was even a BIG lawsuit against M$ about that one, which CP/M lost (IIRC). Hth, JimL On Thu, 31 Aug 2000, Thomas Marshall Eubanks wrote:
[snip]
+----------------------------------------------------------------+ | James W. Laferriere | System Techniques | Give me VMS | | Network Engineer | 25416 22nd So | Give me Linux | | babydr@baby-dragons.com | DesMoines WA 98198 | only on AXP | +----------------------------------------------------------------+
the mime type is made up, usually based on the file's extension, which is, of course, passed along with the contents of the file when you transfer it. it's no extra information in this context.
What's a file extension?
it's the last alphanumeric token in a string delimited by dots.
And it doesn't exist anywhere except Windows.
...which is, of course, why every web server under the sun uses them. :)
Also sprach Andrew Brown
Where on earth did you get the idea that mime typing requires all that?
not that it requires it, but that it usually happens. if i get a file via http with a mime type that the browser "knows what to do with", i usually don't get a choice. likewise if i get a file via email with an octet-stream mime type, most readers won't show it to me by default.
Usually, but not always, and it is a client config issue. wget, for example, dumps to a file regardless of the mime type, I believe (I'm not intimately familiar with wget, but that is the behavior I've seen from what little I've used it)
the mime type is made up, usually based on the file's extension, which is, of course, passed along with the contents of the file when you transfer it. it's no extra information in this context.
Again, usually, but not always. And seeing as how many systems *cough*Microsoft*cough* behave so poorly when the mime type and file extension disagree, it seems we have some more real work to be done in this area...I won't argue that things are perfect, but the possibilities are there and you seem to be trying to shoot them down because things usually aren't done that way currently. So much for innovation using that logic. :)
when you initiate an "ascii" mode transfer, the remote ftpd translates the file from local line termination to network oriented line termination (ie, cr, or crlf or lf becomes crlf). your ftp client then translates back to local line termination from network line termination. this is how you can transfer files between windows machines, macs, and unix boxes, and always end up with something you can read locally.
And this could *easily* be implemented in a web server and web client. Might I suggest a new mime type (are we eventually going to have to worry about mime type namespace issues?) indicating that this process is going on so that the client side can be told that this is what's going on? Perhaps text/plain-network, or something like that.
http doesn't do this.
Currently...it could easily do so. I don't think that it's necessary for the protocol to define this, when it could easily be handled by *gasp* indicating this process in the mime type, or perhaps even the transfer-encoding...just thought about that...haven't given it much thought yet. :)
Andrew Brown wrote:
the mime type is made up, usually based on the file's extension, which is, of course, passed along with the contents of the file when you transfer it. it's no extra information in this context.
Depends on the server. On systems that have OS-level file typing (like MacOS - where a filename has no relationship whatsoever to file type), the MIME type should be derived from the OS-level file-type. On systems that don't have OS-level file typing, and don't rely heavily on file extensions (like UNIX), it would make sense for the server to base a MIME type on something more than just the filename - perhaps involving the file's content - similar to what the file(1) command does. On systems where a file's name (extension, prefix, whatever) is the primary means for determining a file's type (like Windows), it may make sense for the server to base the MIME type on the name alone. Of course, there are (and probably always will be) broken servers for all systems. -- David
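Both approaches David describes can be sketched: extension lookup is a pure table lookup on the name (here via Python's `mimetypes` table), while a file(1)-style check reads magic bytes and ignores the name entirely (the `sniff` helper and its tiny magic table are illustrative, not exhaustive):

```python
import mimetypes

# Extension-based: the name alone decides.
print(mimetypes.guess_type("paper.ps"))  # ('application/postscript', None)
print(mimetypes.guess_type("logo.gif"))  # ('image/gif', None)

# Content-based, file(1)-style: the leading bytes decide.
MAGIC = [
    (b"\x1f\x8b", "application/gzip"),
    (b"%!PS", "application/postscript"),
    (b"GIF8", "image/gif"),
]

def sniff(data: bytes) -> str:
    for prefix, mime in MAGIC:
        if data.startswith(prefix):
            return mime
    return "application/octet-stream"

print(sniff(b"GIF89a..."))  # image/gif, whatever the file happens to be called
```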
On Thu, Aug 31, 2000 at 08:22:52AM -0500, Edward S. Marshall wrote:
But, you get the added benefit of MIME typing, human-beneficial markup, caching if you have a nearby cache, inherent firewall friendliness (no data connection foolishness), and simple negotiation of encrypted transfers (SSL). And for command-line people like myself, there's lynx, w3m, and wget.
And you lose the benefit of client-side choice as to binary or ascii transfer mode. Not that this isn't outweighed by all the things you gain, in most cases.
--- Jason Slagle - CCNA - CCDA Network Administrator - Toledo Internet Access - Toledo Ohio - raistlin@tacorp.net - jslagle@toledolink.com - WHOIS JS10172 -----BEGIN GEEK CODE BLOCK----- Version: 3.12 GE d-- s:+ a-- C++ UL+++ P--- L+++ E- W- N+ o-- K- w--- O M- V PS+ PE+++ Y+ PGP t+ 5 X+ R tv+ b+ DI+ D G e+ h! r++ y+ ------END GEEK CODE BLOCK------ On Thu, 31 Aug 2000, Edward S. Marshall wrote:
HTTP, perchance? The only things missing are a machine-parsable file indexing method (which would be easy enough to standardize on if someone felt the need to do so; think a "text/directory" MIME type, which would benefit more than just HTTP, or use a multipart list of URLs), and server-to-server transfers coordinated from your client, which most people have disabled anyway for security reasons.
You know, I almost challenged someone to suggest HTTP in my original posting but decided not to as I didn't think anyone would. Not all of us enjoy point and clicky interfaces (Even lynx in this context is point and clicky). I don't see the ability to implement the functionality of command line FTP (Yes, I know it COULD be done, but it is a hack at best. ls = transfer directory listing. And what of ASCII and BINARY data types. ASCII transfers still have use.)
But, you get the added benefit of MIME typing
I don't consider this a benefit as I already have enough problems with HTTP servers having the wrong mime type for .gz. I'm TRANSFERRING a file. The mime type is mostly irrelevant there. What do I need to know if it's an image/jpeg if I'm just transferring it to my local drive. The MIME type is only beneficial if you're attempting to do something with the file after receipt. If I was I'd be using HTTP or another protocol. I'm not, I'm transferring it.
human-beneficial markup
Once again, of no use. I don't want thumbnails if I ls -l a directory of jpg's.
caching if you have a nearby cache
There's no reason I can't cache it now. Squid manages to (Granted it's taking ftp:// URL's, but you could hack up a "real" ftp proxy to cache.)
inherent firewall friendliness
Point taken here.
simple negotiation of encrypted transfers (SSL)
Here also.
And for command-line people like myself, there's lynx, w3m, and wget.
While I've never used w3m, lynx and wget lack the functionality of just about any FTP client. When I want to choose from a list of files and "click" on the one I want I already use lynx with ftp:// url's. But, often I don't want to do that as I'm transferring multiple files.
FTP is disturbingly behind on features, some of which (decent certificate authentication, full-transaction encryption, data type labelling, and cache usefulness) are becoming more important today. Either the FTP RFC needs a near-complete overhaul, or the HTTP and other RFCs need to be updated to include the missing functionality.
I'm not arguing that it isn't. I'm just saying that until a NEW protocol comes out, or someone overhauls the existing FTP protocol, you can't scrap it as nothing I have found duplicates the functionality I want in an FTP client. wget comes close for http, but I have to know what I want beforehand. The solution (As several people have emailed me to say) may be sftp or scp with encryption and compression off when not needed (Which I confirmed is nearly as fast as FTP). I'd be willing to work on hammering out a new and "improved" FTP protocol if several others are interested. Jason
[ On Thursday, August 31, 2000 at 10:33:41 (-0400), Jason Slagle wrote: ]
Subject: RE: ARIN Policy on IP-based Web Hosting
You know, I almost challenged someone to suggest HTTP in my original posting but decided not to as I didn't think anyone would.
Not all of us enjoy point and clicky interfaces (Even lynx in this context is point and clicky).
The NetBSD 'ftp' client can use HTTP for file retrieval just fine.... Even through a proxy/cache.... -- Greg A. Woods +1 416 218-0098 VE3TCP <gwoods@acm.org> <robohack!woods> Planix, Inc. <woods@planix.com>; Secrets of the Weird <woods@weird.com>
(Last post on the subject; this has nothing to do with NANOG anymore. ;-) On Thu, 31 Aug 2000, Jason Slagle wrote:
Not all of us enjoy point and clicky interfaces (Even lynx in this context is point and clicky).
HTTP has nothing to do with "point and clicky". It's a transport protocol. You're thinking of a particular client implementation.
I don't see the ability to implement the functionality of command line FTP (Yes, I know it COULD be done, but it is a hack at best. ls = transfer directory listing.
No, HTTP has the chance to do directory listings in a manner which *isn't* a hack; using /bin/ls for a directory listing is a client parsing mess. Using something like multipart MIME digests full of URLs would be a perfect answer to directory listings, and addition of metadata through various "implementation-defined" headers would be trivial, potentially giving you even more per-reference information than you can get now with FTP.
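The "list of URLs" idea is easy to prototype. A server could emit something like the `text/uri-list` format (RFC 2483) instead of `ls` output; the sketch below follows that shape, with the size-comment metadata being an invented convention, not part of any standard:

```python
import os
import tempfile

def uri_list_index(path: str) -> str:
    """A machine-parsable directory listing: one reference per line,
    '#' lines carry comments/metadata (loosely after text/uri-list)."""
    lines = ["# index of " + path]
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if os.path.isdir(full):
            lines.append(name + "/")  # trailing slash marks a sub-listing
        else:
            lines.append("# size=%d" % os.path.getsize(full))
            lines.append(name)
    return "\n".join(lines) + "\n"

# Demo against a throwaway directory.
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "a.txt"), "w") as f:
    f.write("hi")
os.mkdir(os.path.join(demo, "sub"))
listing = uri_list_index(demo)
print(listing)
```

A client just splits lines and skips `#` comments; no guessing at `/bin/ls` column layouts.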
And what of ASCII and BINARY data types. ASCII transfers still have use.)
This sounds suspiciously like Content-Transfer-Encoding.
I don't consider this a benefit as I already have enough problems with HTTP servers having the wrong mime type for .gz. I'm TRANSFERRING a file. The mime type is mostly irrelevant there. What do I need to know if it's an image/jpeg if I'm just transferring it to my local drive.
That's a client problem, one which isn't an issue with (for example) wget; all it does is grab the data at the URL you requested, and saves it to the filename it extrapolates from the URL, or to the filename you specify. Sorta like an FTP client. :-)
The MIME type is only beneficial if you're attempting to do something with the file after receipt. If I was I'd be using HTTP or another protocol. I'm not, I'm transferring it.
Then the addition of data type information is of no use to you. It's also of no hindrance.
human-beneficial markup
Once again, of no use. I don't want thumbnails if I ls -l a directory of jpg's.
"ls -l" *is* a form of human-readable markup. See above about handling of directory listings.
caching if you have a nearby cache
There's no reason I can't cache it now. Squid manages to (Granted it's taking ftp:// URL's, but you could hack up a "real" ftp proxy to cache.)
Wide-scale transparent FTP caching frightens me. Think of all the authentication information your cache would be collecting. Then think of all the possibilities for screw-up in the translation. We spent long enough getting transparent HTTP proxying semi-correct; I'd rather not go through that again. ;-)
When I want to choose from a list of files and "click" on the one I want I already use lynx with ftp:// url's. But, often I don't want to do that as I'm transferring multiple files.
wget --mirror http://server/path/ -- Edward S. Marshall <emarshal@logic.net> http://www.nyx.net/~emarshal/ ------------------------------------------------------------------------------- [ Felix qui potuit rerum cognoscere causas. ]
From the ARIN site:
"The name-based system of virtual webhosting used by many ISPs today allows multiple domains to be hosted by a single IP address. While some organizations use IP-based webhosting to, in part, justify their requests for IP space, ARIN will no longer accept IP-based hosting as justification for an allocation unless an exception is warranted. The ARIN Instructions for Using Name-based Virtual Webhosting may be a helpful tool in setting up, converting to, and using name-based hosting."
Name-based virtual hosting does not work in many, MANY cases. Beyond this, if you have multiple customers on a single IP address and one of them is an idiot and spamvertizes their website, several providers have a policy of nullrouting the /32. Now, not only does it kill a single site but potentially hundreds!
--- John Fraizer EnterZone, Inc
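The mechanism the ARIN text refers to hinges on one thing: the HTTP/1.1 Host header. A toy dispatcher makes the point (hypothetical names and paths, standing in for what a real server does internally):

```python
def pick_docroot(request: bytes, vhosts: dict, default: str) -> str:
    """Choose a document root from the Host header of a raw HTTP request."""
    for line in request.split(b"\r\n")[1:]:
        if line.lower().startswith(b"host:"):
            # Strip the header name, whitespace, and any :port suffix.
            host = line.split(b":", 1)[1].strip().split(b":")[0].decode()
            return vhosts.get(host, default)
    return default  # HTTP/1.0 clients send no Host header -> one shared site

vhosts = {"www.alice.example": "/home/alice/www",
          "www.bob.example": "/home/bob/www"}

req = b"GET /index.html HTTP/1.1\r\nHost: www.bob.example\r\n\r\n"
print(pick_docroot(req, vhosts, "/var/www/default"))  # /home/bob/www
```

This is also why SSL is the sticking point in the thread: the certificate exchange happens before any Host header is sent, so the server cannot tell which site's certificate to present when many share one address.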
That might be an advantage in fighting spam. Think of the potential lawsuits against the hosting company and the spammer for loss of business from the spam block. The hosting company should not have accepted a spammer, or, as soon as they are known to be a spammer, must take action to prevent loss of business to the non-spammer customers ;) -- Richard Shetron multics@ruserved.com multics@acm.rpi.edu NO UCE What is the Meaning of Life? There is no meaning, It's just a consequence of complex carbon based chemistry; don't worry about it The Super 76, "Free Aspirin and Tender Sympathy", Las Vegas Strip.
On Tue, 29 Aug 2000 multics@ruserved.com wrote:
Name-based virtual hosting does not work in many, MANY cases. Beyond this, if you have multiple customers on a single IP address and one of them is an idiot and spamvertizes their website, several providers have a policy of nullrouting the /32. Now, not only does it kill a single site but, potentially hundreds!
--- John Fraizer EnterZone, Inc
That might be an advantage in fighting spam. Think of the potential lawsuits against the hosting company and the spammer for loss of business from the spam block. The hosting company should not have accepted a spammer or as soon as they are known to be a spammer, they must take action to prevent loss of business to the non-spammer customers ;)
You've obviously not been in this situation before. Just how are you to know what a customer is going to do prior to turning them on? Do you have access to mindreaders? Do you submit your customers to a polygraph prior to enabling their port? I am fairly certain that the number of hosts that will accept a customer who says "I want your middle grade package and, oh, by the way, I'm going to be SPAMMING the planet to promote my site!" is very limited. The potential lawsuits against the hosting company for loss of business are just more reason to use an individual IP per domain.
On Tue, 29 Aug 2000 multics@ruserved.com wrote:
Name-based virtual hosting does not work in many, MANY cases. Beyond this, if you have multiple customers on a single IP address and one of them is an idiot and spamvertizes their website, several providers have a policy of nullrouting the /32. Now, not only does it kill a single site but, potentially hundreds!
--- John Fraizer EnterZone, Inc
That might be an advantage in fighting spam. Think of the potential lawsuits against the hosting company and the spammer for loss of business from the spam block. The hosting company should not have accepted a spammer or as soon as they are known to be a spammer, they must take action to prevent loss of business to the non-spammer customers ;)
You've obviously not been in this situation before. Just how are you to know what a customer is going to do prior to turning them on? Do you have access to mindreaders? Do you submit your customers to a polygraph prior to enabling their port?
Yes I have. I accepted golfballs.net as a customer after confirming that it was a new owner and specifically telling them that the site would go poof at the first sign of spam. golfballs.net had previously been a pretty famous spammer. Also note I said "or as soon as they are known to be a spammer" since you can't tell what every customer is going to do in advance; if they are a spammer, they are likely to lie to you about spamming to begin with.
I am fairly certain that the number of hosts that will accept a customer who says "I want your middle grade package and, oh, by the way, I'm going to be SPAMMING the planet to promote my site!" is very limited.
I would hope so.
The potential lawsuits against the hosting company for loss of business are just more reason to use an individual IP per domain.
Yup, that's my point.
--- John Fraizer EnterZone, Inc
On Tue, 29 Aug 2000, John Fraizer wrote:
Name-based virtual hosting does not work in many, MANY cases. Beyond this, if you have multiple customers on a single IP address and one of them is an idiot and spamvertizes their website, several providers have a policy of nullrouting the /32. Now, not only does it kill a single site but, potentially hundreds!
To be fair, that seems like a much better argument against nullrouting website /32s in retaliation for spam than against using the same IP address for multiple websites where feasible.

There are plenty of things people have done in retaliation for spam, including blocking /32s, whole /24s, whole CIDR blocks, or whole ASNs. With any of these approaches there are potential problems with inadvertently blocking things that shouldn't be blocked, and anybody doing such blocking should think about that and figure out whether the blocking is really something they want to do.

If an ISP intentionally or inadvertently blackholes something their customers think they're paying for access to, that really sounds like a problem with the ISP doing the blocking, that the ISP can easily fix if they want to (and that can cause their customers to go elsewhere if they don't), rather than a problem at ARIN or at the web hosting company. That's not to say that this ARIN policy doesn't have other problems, but many of them have already been well covered in this discussion. -Steve -------------------------------------------------------------------------------- Steve Gibbard scg@gibbard.org
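The collateral-damage point is just arithmetic over prefix lengths; a quick sketch with Python's `ipaddress` module (the addresses are documentation examples, not real offenders):

```python
import ipaddress

# One spamvertized site, two possible responses:
surgical = ipaddress.ip_network("192.0.2.7/32")     # just the offending address
shotgun = ipaddress.ip_network("198.51.100.0/24")   # the whole netblock around it

def blocked(addr: str, *nets) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in n for n in nets)

print(blocked("192.0.2.8", surgical))     # False: the /32 spares the neighbors
print(blocked("198.51.100.42", shotgun))  # True: every other host in the /24 goes dark too
print(shotgun.num_addresses)              # 256
```

And with name-based hosting, even the "surgical" /32 takes out every site sharing that one address, which is the thread's original complaint.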
ARIN's new web hosting policy has recently been under discussion on the ARIN IP allocations policy mailing list. See http://www.arin.net/members/mailing.htm.
The policy is described at
http://www.arin.net/announcements/policy_changes.html
Some individuals have expressed their disagreement with this new policy. Should the ARIN web hosting policy be changed?
while I can be accused of being ignorant about ARIN matters, and this first came to my attention through the NANOG list, I have to say that this policy, which (and many seem to miss this) HAS ALREADY BEEN IMPLEMENTED, unduly restricts hosting companies from doing proper business:

- SSL will not go away anytime soon. Billions have been spent on software to run it, certificates generated, signed and issued, expertise gained and management software written. How can ARIN ignore this existing investment and expertise, and how dare it try to throw it out with the bathwater from literally one day to the next?

- How does ARIN think people are doing QoS for such large assemblies of hosted websites, or billing? Answer: by IP number, with simpler L3 QoS enforcers such as Cisco CAR/GTS and Linux- and BSDI-based rate-shaping. Why should webhosters start shelling out big bucks for Layer 5-7 intelligent switches in a market that is already seeing the $5/mo website? IP numbers are cheap; Layer 4-7 switches are disproportionately expensive.

In a time when shortage of IP space should theoretically no longer be an issue, IPv6 allocation guidelines are pretty much ensuring that only the biggest players with the most engineering resources actually have a shot at IPv6. How do you suppose IPv6 expertise will be built on a large scale, when 1/2 to fully 3/4 of network engineers, website managers, and DNS managers are working in places that are too small to get their own allocation of IPv6 space? There are lots of places, probably using the majority of all IP space, where the business model and economies of scale make it senseless to become an ARIN member to get IP space. Instead, IP space continues to be received as PA space from providers who have no real incentive or immediate plans to provide IPv6 service, and the incentive of customers ASKING for IPv6 service never materializes for that same reason.
This is a giant distortion of the market, that will ultimately prevent IPv6 from ever getting deployed on a large scale, or only when it's too late. Already, people are too fixated on securing IPv4 space as property, with ARIN hopelessly trying to stem the flood, when all these efforts are ultimately doomed: giving the boat up when it's half full of water in order to make the jump to a bigger boat is a good idea. Stop the bucket line. Abandon ship.
Kai, Just want to highlight something here:
IPv6 allocation guidelines are pretty much ensuring that only the biggest players with the most engineering resources actually have a shot at IPv6
Presumably, you're talking about the ARIN IPv6 allocation guidelines found at http://www.arin.net/regserv/ipv6/ipv6guidelines.html. IPv6 uses CIDR. CIDR means provider based aggregation. Provider based aggregation means that the vast majority of allocations MUST be made by "transit service providers". In my experience, webhost providers are generally not considered transit providers. Webhosting providers (and other non-"transit" sites) should contact their upstream IPv6 (tunnel or otherwise) provider(s) for IPv6 address space. They should NOT be obtaining space from ARIN or other RIRs. Anything else will simply recreate the swamp. Rgds, -drc (speaking only for myself)
Quoth David R. Conrad (David.Conrad@nominum.com):
"transit service providers". In my experience, webhost providers are generally not considered transit providers. Webhosting providers (and other non-"transit" sites) should contact their upstream IPv6 (tunnel or otherwise) provider(s) for IPv6 address space. They should NOT be obtaining space from ARIN or other RIRs. Anything else will simply recreate the swamp.
I think, like everything, this depends on circumstance. For example, we are more or less a content provider. However, we do not source from a single location. With 15 POPs spread throughout the US and Europe, and more on the way, with exceedingly non-contiguous space obtained from 6 different upstreams, it would benefit ourselves as well as our many providers to have our own PI space. Of course, convincing ARIN of this is proving most difficult :) --ted -- Ted Beatie ted@mirror-image.net Sr. Network Engineer +1-781-376-1108 Mirror Image Internet 49 Dragon Court Woburn, MA 01801 http://www.mirror-image.com
With 15 POPs spread throughout the US and Europe, and more on the way, with exceedingly non-contiguous space obtained from 6 different upstreams, it would benefit ourselves as well as our many providers to have our own PI space.
I suspect the number of organizations who can claim "it would benefit ourselves as well as our many providers" will greatly exceed the number of available routing slots before IPv6 comes anywhere close to being significantly deployed. Rgds, -drc
Sorry for the first post david. Hit the wrong key. --- Jason Slagle - CCNA - CCDA Network Administrator - Toledo Internet Access - Toledo Ohio - raistlin@tacorp.net - jslagle@toledolink.com - WHOIS JS10172 -----BEGIN GEEK CODE BLOCK----- Version: 3.12 GE d-- s:+ a-- C++ UL+++ P--- L+++ E- W- N+ o-- K- w--- O M- V PS+ PE+++ Y+ PGP t+ 5 X+ R tv+ b+ DI+ D G e+ h! r++ y+ ------END GEEK CODE BLOCK------ On Thu, 31 Aug 2000, David R. Conrad wrote:
With 15 POPs spread throughout the US and Europe, and more on the way, with exceedingly non-contiguous space obtained from 6 different upstreams, it would benefit ourselves as well as our many providers to have our own PI space.
I suspect the number of organizations who can claim "it would benefit ourselves as well as our many providers" will greatly exceed the number of available routing slots before IPv6 comes anywhere close to being significantly deployed.
From everything I've read, an organization cannot announce another organization's address blocks to anyone else under IPv6. It's forbidden. This would make getting a top-level block a REQUIREMENT to multihome unless you took multiple blocks from multiple providers, but that gets messy. Jason
Jason,
From everything I've read, an organization cannot announce another organization's address blocks to anyone else under IPv6.
Well, yeah. This would probably be considered theft. However, in order for the Internet to scale (IPv4 or IPv6), address aggregates must be announced -- that is, when an ISP delegates a block of address space to a customer, the ISP should not announce that address space separately from the block the customer's address space came out of. This is where the question of address ownership (or address leasing, if you prefer) gets involved.
This would make getting a top level block a REQUIREMENT to multihome unless you took multiple blocks from multiple providers, but that gets messy.
I believe this is the intended strategy for multi-homing in IPv6. Rgds, -drc
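The multi-prefix multihoming model drc refers to can be illustrated with a short sketch (prefixes below are hypothetical documentation space): instead of one PI address, a host carries one address out of each upstream's PA aggregate, and each address is reachable via its own provider's announced block.

```python
import ipaddress

# Hypothetical PA prefixes delegated by two upstream providers.
upstream_a = ipaddress.ip_network("2001:db8:a::/48")
upstream_b = ipaddress.ip_network("2001:db8:b::/48")

# PA-based multihoming: the same host holds an address from each
# provider's aggregate rather than a single provider-independent one.
host_addrs = [
    ipaddress.ip_address("2001:db8:a::10"),
    ipaddress.ip_address("2001:db8:b::10"),
]

for addr, net in zip(host_addrs, (upstream_a, upstream_b)):
    print(addr in net)  # True for both: each is covered by its provider
```

The messiness Jason alludes to is visible even here: if the site drops one upstream, every host must renumber out of that provider's prefix.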
On Thu, 31 Aug 2000, David R. Conrad wrote:
With 15 POPs spread throughout the US and Europe, and more on the way, with exceedingly non-contiguous space obtained from 6 different upstreams, it would benefit ourselves as well as our many providers to have our own PI space.
I suspect the number of organizations who can claim "it would benefit ourselves as well as our many providers" will greatly exceed the number of available routing slots before IPv6 comes anywhere close to being significantly deployed.
Gosh, doesn't that beg the question: why don't we require the return of all that wasted space that was delegated to anyone and everyone so long ago?
Patrick Greenwell wrote:
Gosh, doesn't that beg the question: why don't we require the return of all that wasted space that was delegated to anyone and everyone so long ago?
YES...please do. Keep in mind, it will simply be an attempted land grab, though. I WANT to give back my blocks (there is NO WAY I could justify the giveback to management, though...it's real estate), but the land grab would prevent it due to the ARIN policies. 27 legacy /16s are completely ill-proportioned to my organization...but the notion of prior property makes them think they "own it." Make those folks with /8s and /16s justify it just like us startups have to. OR, just read M. Ohta's work and get the allocations over with once and for all to foster the transition. -Nathan Lane
On Thu, 31 Aug 2000, Patrick Greenwell wrote:
On Thu, 31 Aug 2000, David R. Conrad wrote:
With 15 POPs spread throughout the US and Europe, and more on the way, with exceedingly non-contiguous space obtained from 6 different upstreams, it would benefit ourselves as well as our many providers to have our own PI space.
I suspect the number of organizations who can claim "it would benefit ourselves as well as our many providers" will greatly exceed the number of available routing slots before IPv6 comes anywhere close to being significantly deployed.
Gosh, doesn't that beg the question: why don't we require the return of all that wasted space that was delegated to anyone and everyone so long ago?
No joke. It used to be you could get address space just by knowing how to ask for it. "Um, ya. You got a /8 laying around I can have? Oh, sure. I'll make efficient use of it... Ya. That's it. That's the ticket!" Now, even after demonstrating efficient use of IP space, it's a pain in the butt. Then again, back then our pings traveled 10,000 miles, up hill both ways, in 6ft of snow, barefoot, on their way to school and back. John Fraizer EnterZone, Inc
Patrick Greenwell wrote:
Gosh, doesn't that beg the question: why don't we require the return of all that wasted space that was delegated to anyone and everyone so long ago?
Ah, but if you return your legacy blocks, you're no longer a member of ARIN. Apparently those holding blocks from the old days are automatically made members, so there's a financial incentive from ARIN itself to avoid return of any historic delegation. -- ----------------------------------------------------------------- Daniel Senie dts@senie.com Amaranth Networks Inc. http://www.amaranth.com
----- Original Message ----- From: Daniel Senie <dts@senie.com> To: Patrick Greenwell <patrick@cybernothing.org> Cc: David R. Conrad <David.Conrad@nominum.com>; Ted Beatie <ted@mirror-image.net>; <nanog@merit.edu> Sent: Friday, September 01, 2000 8:18 AM Subject: Re: IPv6 allocatin (was Re: ARIN Policy on IP-based Web Hosting)
Patrick Greenwell wrote:
Gosh, doesn't that beg the question: why don't we require the return of all that wasted space that was delegated to anyone and everyone so long ago?
Ah, but if you return your legacy blocks, you're no longer a member of ARIN. Apparently those holding blocks from the old days are automatically made members, so there's a financial incentive from ARIN itself to avoid return of any historic delegation.
Um...no, that's not the policy. Only subscription holders are automatic members (meaning membership is included in the annual maintenance fee of ISPs). Kim
Gosh, doesn't that beg the question: why don't we require the return of all that wasted space that was delegated to anyone and everyone so long ago?
Discuss whether (say) the MIT /8 is wasted with the appropriate individuals and I'm sure it'll be an interesting discussion. ARIN's legal defense fund is limited. I doubt it would be able to withstand an attempt to require the return of historically allocated address space. Rgds, -drc
Dave;
I suspect the number of organizations who can claim "it would benefit ourselves as well as our many providers" will greatly exceed the number of available routing slots before IPv6 comes anywhere close to being significantly deployed.
When, do you think, will IPv6 come somewhere close to being significantly deployed? Masataka Ohta
When, do you think, will IPv6 come somewhere close to being significantly deployed?
When IPv6 offers something end users or ISPs value over IPv4+NAT.
Hah, interesting thought! :-) Perhaps we should push for people to use the IPv4+NAT kludge, which may turn around and fuel IPv6 deployment as people discover the crippled functionality with widespread deployment. Two separately NAT'ed hosts don't communicate enough today for more than a "privileged few" to discover all the issues with it. Then again, I don't believe pain is the best way to induce change, but it does seem to work remarkably efficiently at times. -- Christian Kuhtz, Sr. Network Architect Architecture, BellSouth.net <ck@arch.bellsouth.net> -wk, <ck@gnu.org> -hm Atlanta, GA "Speaking for myself only."
When IPv6 offers something end users or ISPs value over IPv4+NAT. Hah, interesting thought! :-) Perhaps we should push for people to use the IPv4+NAT kludge,
A very large number of folks are using IPv4+NAT now, mostly without complaint (or even awareness). I use it myself quite frequently. In a limited (albeit typical) mode of communication, NAT works just fine. IPv6, on the other hand, does NOT work for the vast majority of people on the Internet.
Two separately NAT'ed hosts don't communicate enough today for more than a "privileged few" to discover all the issues with it.
You are expecting people who have been trained to expect to have to reboot periodically and/or reinstall entire operating systems when they have problems to get annoyed when they have difficulties trying to communicate with a remote site because of NAT? If/when folks run into difficulties due to NAT, I suspect they'll just shrug their shoulders, mumble something about broken web sites, and find another porn^H^H^H^Hcontent site that works. Rgds, -drc
When IPv6 offers something end users or ISPs value over IPv4+NAT. Hah, interesting thought! :-) Perhaps we should push for people to use the IPv4+NAT kludge,
A very large number of folks are using IPv4+NAT now, mostly without complaint (or even awareness). I use it myself quite frequently. In a limited (albeit typical) mode of communication, NAT works just fine. IPv6, on the other hand, does NOT work for the vast majority of people on the Internet.
The point was a NAT'ed (masqueraded) network attempting to communicate with another NAT'ed (masqueraded) network. That does NOT work for the vast majority of people on the Internet. And it's a conceptual flaw, rather than a lack of technology penetration (IPv6). -- Christian Kuhtz, Sr. Network Architect Architecture, BellSouth.net <ck@arch.bellsouth.net> -wk, <ck@gnu.org> -hm Atlanta, GA "Speaking for myself only."
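The NAT-to-NAT problem Christian describes has a concrete root: independently NATed sites commonly pick the same RFC 1918 range, so their internal address spaces collide. A minimal sketch (addresses illustrative, though 192.168.1.0/24 really is a ubiquitous default):

```python
import ipaddress

# Two independently NATed sites that each chose the same private range.
site_a = ipaddress.ip_network("192.168.1.0/24")
site_b = ipaddress.ip_network("192.168.1.0/24")

# Overlapping private space: neither side can address the other's hosts
# directly, so a site-to-site link needs a second layer of translation.
print(site_a.overlaps(site_b))  # True
```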
Christian,
The point was a NAT'ed (masqueraded) network attempting to communicate with another NAT'ed (masqueraded) network. That does NOT work for the vast majority of people on the Internet.
Hmmm. If you never try something, can it be said to not work? Until such a scenario becomes _far_ more commonplace than it is today, I doubt anyone (other than end-to-end purists and the folks who have been bitten) will care. Rgds, -drc
David;
When, do you think, will IPv6 come somewhere close to being significantly deployed?
When IPv6 offers something end users
It does not work. IETF is making IPv6 more and more complex and less and less useful. Applications do not help, either. If some valuable new application is offered on IPv6, it will soon be available also on IPv4 with a lot more servers and end users.
or ISPs value over IPv4+NAT.
ISPs, by definition, do not use NAT. But, current policy of usage based allocation motivates ISPs to be NAT-using-non-ISPs rather than IPv6-ISPs. That's how north wind approach works. Worse, ISPs trying to offer IPv6 service must also offer 6-to-4 NAT, because most contents are in IPv4 world. Masataka Ohta
With 15 POPs spread throughout the US and Europe, and more on the way, with exceedingly non-contiguous space obtained from 6 different upstreams, it would benefit ourselves as well as our many providers to have our own PI space.
I suspect the number of organizations who can claim "it would benefit ourselves as well as our many providers" will greatly exceed the number of available routing slots before IPv6 comes anywhere close to being significantly deployed.
So, how do we bring the operations reality and ivory tower approaches closer together? Elegant complexity causes operational nightmares all of its own. -- Christian Kuhtz, Sr. Network Architect Architecture, BellSouth.net <ck@arch.bellsouth.net> -wk, <ck@gnu.org> -hm Atlanta, GA "Speaking for myself only."
participants (37)
-
Alex Kamantauskas
-
Andrew Brown
-
Andrew McNamara
-
Bennett Todd
-
Bill Fumerola
-
Christian Kuhtz
-
Daniel Senie
-
David Charlap
-
David R. Conrad
-
Derek J. Balling
-
Edward S. Marshall
-
Jason Slagle
-
Jeff Mcadams
-
Jim Mercer
-
jlewis@lewis.org
-
Joe Shaw
-
John Fraizer
-
Joseph McDonald
-
Kai Schlichting
-
Kim Hubbard
-
Masataka Ohta
-
Mr. James W. Laferriere
-
multics@ruserved.com
-
Nathan Lane
-
Patrick Evans
-
Patrick Greenwell
-
Peter van Dijk
-
Roland Dobbins
-
Ron 'The InSaNe One' Rosson
-
Sabri Berisha
-
Shawn McMahon
-
sigma@pair.com
-
Steve Gibbard
-
Steve Sobol
-
Ted Beatie
-
Thomas Marshall Eubanks
-
woods@weird.com