RE: Important New Requirement for IPv4 Requests
From: Frank Bulk - iName.com [mailto:frnkblk@iname.com]
It appears that ARIN wants to raise the IP address space issue to the CxO level -- if it were interested in honesty, ARIN would have required a notarized statement by the person submitting the request. If ARIN really wants to get the interest of CEOs, raise the price!
Raising the price won't help; there's already a huge amount of wasted address space by web hosts selling IP addresses to customers who need them solely for 'seo purposes' rather than allocating them in return for a reasonable technical justification. If ARIN raises prices, it will hurt the hosts who allocate their space in a responsible manner, and those who don't will just charge more for the right to have one of these seo-friendly exclusive IPs that webmasters so righteously believe will make their sites #1 on Google.

We regularly lose business to exchanges that go a little like this: "Can I get a block of 100 IPs for no particular reason?" No. "My old host let me; I just had to pay $100/month for it." One of Google's search spam team members actually blogged on this topic after a NANOG post I made about it a few years back, and I still send it to people asking for IPs for seo reasons and even then they don't believe me.

If ARIN would enforce a technically justified use of IPv4 space that does not recognize "seo" as a valid reason, that would definitely help; otherwise web hosts will keep selling IP space to their customers at prices that let them keep buying more. And since the policy allows it currently, the CEO signing off on it will also be valid.

David
On Apr 21, 2009, at 1:58 PM, David Hubbard wrote:
Raising the price won't help; there's already a huge amount of wasted address space by web hosts selling IP addresses to customers who need them solely for 'seo purposes' rather
It's a common request we see. We refuse it, and point them to the Google documentation that shows that unique IPs don't help or hurt their SEO standings.
reasons and even then they don't believe me. If ARIN would enforce a technically justified use of IPv4 space that does not recognize "seo" as a valid reason, that would definitely help
I point to the wording where it says that we need to collect the technical justification for the additional IP addresses. Since virtual web hosting has no technical justification for IP space, I refuse it.
And since the policy allows it currently, the CEO signing off on it will also be valid.
Depends on how you read the policy. I prefer my reading to yours ;-) That said, if someone who likes writing these things will help me, I'll gladly create and advance a policy demanding a real, provable need for an IP beyond one per physical host. -- Jo Rhett Net Consonance : consonant endings by net philanthropy, open source and other randomness
Once upon a time, Jo Rhett <jrhett@netconsonance.com> said:
Since virtual web hosting has no technical justification for IP space, I refuse it.
SSL and FTP are technical justifications for an IP per site. -- Chris Adams <cmadams@hiwaay.net> Systems and Network Administrator - HiWAAY Internet Services I don't speak for anybody but myself - that's enough trouble.
Chris Adams wrote:
Once upon a time, Jo Rhett <jrhett@netconsonance.com> said:
Since virtual web hosting has no technical justification for IP space, I refuse it.
SSL and FTP are technical justifications for an IP per site.
Right. Also, monthly bandwidth monitoring/shaping/capping are more easily done using one ip per hosted domain, or ftp site, or whatever. Otherwise you are parsing logs or using 3rd party apache modules. It's a convenience which would not be looked at twice, if it were on ipv6. All the more reason to move to ipv6. :-) Ken -- Ken Anderson Pacific Internet - http://www.pacific.net
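For what it's worth, the log-parsing route Ken mentions is only a few lines. A minimal sketch, assuming Apache writes a log whose first field is the virtual host (%v) and whose last field is the response size (%b) -- that log format is an assumption, not a default:

#!/usr/bin/env python3
# Rough per-vhost byte accounting from an Apache-style access log.
# Assumes a LogFormat beginning with %v and ending with %b -- adjust
# the field positions to match your own configuration.
import sys
from collections import defaultdict

totals = defaultdict(int)

with open(sys.argv[1]) as log:
    for line in log:
        fields = line.split()
        if len(fields) < 2:
            continue
        vhost, size = fields[0], fields[-1]
        if size.isdigit():              # %b is "-" when no body was sent
            totals[vhost] += int(size)

for vhost, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print("%-40s %15d bytes" % (vhost, total))

Run it against a month's log and you get per-vhost totals without burning an IP per site.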
On Apr 21, 2009, at 4:22 PM, Ken A wrote:
Chris Adams wrote:
Once upon a time, Jo Rhett <jrhett@netconsonance.com> said:
Since virtual web hosting has no technical justification for IP space, I refuse it. SSL and FTP are technical justifications for an IP per site.
Right. Also, monthly bandwidth monitoring/shaping/capping are more easily done using one ip per hosted domain, or ftp site, or whatever. Otherwise you are parsing logs or using 3rd party apache modules.
*Shrug* I've been doing IP allocations for 14 years and that's never been mentioned to me. I suspect that anyone with enough traffic to need traffic shaping has dedicated hosts or virtual servers, which get a unique IP each. -- Jo Rhett Net Consonance : consonant endings by net philanthropy, open source and other randomness
On Tue, Apr 21, 2009 at 04:41:46PM -0700, Jo Rhett wrote:
On Apr 21, 2009, at 4:22 PM, Ken A wrote:
Chris Adams wrote:
Once upon a time, Jo Rhett <jrhett@netconsonance.com> said:
Since virtual web hosting has no technical justification for IP space, I refuse it. SSL and FTP are technical justifications for an IP per site.
Right. Also, monthly bandwidth monitoring/shaping/capping are more easily done using one ip per hosted domain, or ftp site, or whatever. Otherwise you are parsing logs or using 3rd party apache modules.
*Shrug* I've been doing IP allocations for 14 years and that's never been mentioned to me.
Oh, you lucky, lucky person. We've got a couple of customers at the day job that constantly come back to us for more IP addresses for bandwidth accounting purposes for their colo machine(s). Attempts at education are like talking to a particularly stupid brick wall. - Matt
On Apr 21, 2009, at 5:23 PM, Matthew Palmer wrote:
Oh, you lucky, lucky person. We've got a couple of customers at the day job that constantly come back to us for more IP addresses for bandwidth accounting purposes for their colo machine(s). Attempts at education are like talking to a particularly stupid brick wall.
And not very effective either, because anything they do to solve the problem another way will likely create the valid need for an external IP. These days, virtual hosting is all virtual machines, so the IP justification is just there anyway. -- Jo Rhett Net Consonance : consonant endings by net philanthropy, open source and other randomness
On Tue, 21 Apr 2009 19:22:08 -0400, Ken A <ka@pacific.net> wrote:
Also, monthly bandwidth monitoring/shaping/capping are more easily done using one ip per hosted domain...
That's why the infrastructure is "virtualized" and you monitor at or behind the firewall(s) and/or load balancer(s) -- where it *is* one IP per customer. Sure, it's easier (and cheaper) to be lazy and waste address space than to set up a proper hosting network.
Ricky Beam wrote:
On Tue, 21 Apr 2009 19:22:08 -0400, Ken A <ka@pacific.net> wrote:
Also, monthly bandwidth monitoring/shaping/capping are more easily done using one ip per hosted domain...
That's why the infrastructure is "virtualized" and you monitor at or behind the firewall(s) and/or load balancer(s) -- where it *is* one IP per customer. Sure, it's easier (and cheaper) to be lazy and waste address space than to set up a proper hosting network.
I wasn't trying to point toward the 'right way', only adding to the list of motivations that are out there and being discussed here. As IPv4 space gets less cheap and less easy to obtain, these motivations will fade. That's a good thing. Ken -- Ken Anderson Pacific Internet - http://www.pacific.net
On Apr 21, 2009, at 3:40 PM, Chris Adams wrote:
Once upon a time, Jo Rhett <jrhett@netconsonance.com> said:
Since virtual web hosting has no technical justification for IP space, I refuse it.
SSL and FTP are technical justifications for an IP per site.
Absolutely. But SEO on pure virtual sites is not ;-) -- Jo Rhett Net Consonance : consonant endings by net philanthropy, open source and other randomness
On Tue, 21 Apr 2009 18:40:30 -0400, Chris Adams <cmadams@hiwaay.net> wrote:
SSL and FTP are technical justifications for an IP per site.
No they aren't. SSL will work just fine as a name-based virtual host with any modern webserver / browser. (Server Name Indication (SNI) [RFC3546, sec 3.1]) FTP? Who uses FTP these days? Certainly not consumers. Even Cisco pushes almost everything via a webserver. (they still have ftp servers, they just don't put much on them these days.)
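For reference, the requested hostname really does travel in the TLS handshake itself, which is what lets one IP present many certificates. A minimal client-side sketch using Python's ssl module (the hostname is a placeholder, and this is an illustration of the mechanism rather than anything deployed in this thread):

#!/usr/bin/env python3
# Open a TLS connection that carries the requested hostname in the
# ClientHello (SNI), so a server hosting many names on one IP can
# select and present the matching certificate.
import socket
import ssl

host = "www.example.com"   # placeholder virtual host name

ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as raw:
    # server_hostname is what ends up in the SNI extension
    with ctx.wrap_socket(raw, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("negotiated", tls.version(), "- cert subject:", cert.get("subject"))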
On Tue, Apr 21, 2009 at 08:24:38PM -0400, Ricky Beam wrote:
On Tue, 21 Apr 2009 18:40:30 -0400, Chris Adams <cmadams@hiwaay.net> wrote:
SSL and FTP are technical justifications for an IP per site.
No they aren't. SSL will work just fine as a name-based virtual host with any modern webserver / browser. (Server Name Indication (SNI) [RFC3546, sec 3.1])
"I encourage my competitors to do this." You only have to get one noisy curmudgeon who can't get to your customer's SSL website because IE 5.0 has worked fine for them for years to make it a completely losing strategy to try deploying this everywhere. Since you can't predict in advance which sites are going to be accessed by said noisy curmudgeon, you don't bother deploying it anywhere, to be on the safe side.
FTP? Who uses FTP these days? Certainly not consumers. Even Cisco pushes almost everything via a webserver. (they still have ftp servers, they just don't put much on them these days.)
A depressingly large number of people use FTP. Attempts to move them onto something less insane are fruitless. Even when the tools support it (and plenty of "web design" tools don't appear to do anything other than FTP), "we've always done it that way and it works fine and if we have to change something we'll move to another hosting company rather than click a different button in our program". Business imperatives trump technical considerations, once again. And, for the record, we're moving toward IPv6, so we're *trying* to be part of the solution, in our own small way. - Matt
On Tue, 21 Apr 2009 20:57:31 -0400, Matthew Palmer <mpalmer@hezmatt.org> wrote:
FTP? Who uses FTP these days? ... A depressingly large number of people use FTP. Attempts to move them onto something less insane are fruitless. Even when the tools support it (and plenty of "web design" tools don't appear to do anything other than FTP), "we've always done it that way and it works fine and if we have to change something we'll move to another hosting company rather than click a different button in our program".
On Tue, 21 Apr 2009 21:07:08 -0400, Daniel Senie <dts@senie.com> wrote:
You are out of touch. FTP is used by nearly EVERY web hosting provider for updates of web sites. Anonymous FTP is not used.
These are not random, anonymous FTP connections. These are people who log in with a username and password, and are therefore identifiable; and even then, it's for access to manage their own site. A single IP address pointing to a single server (or farm of servers) will, and DOES, work just fine. I know, because I've done it for ~15 years. When I ask "who", I'm asking about a paid-for, external service -- just like web hosting. No one calls up 1-800-Host-My-Crap and asks for "an FTP server". Bottom line... if your justification for a /19 is "FTP servers", you are fully justified in laughing at them as you hang up the phone.
On Wed, Apr 22, 2009 at 10:57:31AM +1000, Matthew Palmer wrote:
On Tue, Apr 21, 2009 at 08:24:38PM -0400, Ricky Beam wrote:
On Tue, 21 Apr 2009 18:40:30 -0400, Chris Adams <cmadams@hiwaay.net> wrote:
SSL and FTP are technical justifications for an IP per site.
No they aren't. SSL will work just fine as a name-based virtual host with any modern webserver / browser. (Server Name Indication (SNI) [RFC3546, sec 3.1])
"I encourage my competitors to do this." You only have to get one noisy curmudgeon who can't get to your customer's SSL website because IE 5.0 has worked fine for them for years to make it a completely losing strategy to try deploying this everywhere. Since you can't predict in advance which sites are going to be accessed by said noisy curmudgeon, you don't bother deploying it anywhere, to be on the safe side.
The switch to "HTTP requests include a hostname" had the same problem, but it still happened; it may take a few years, but it's doable. Probably too late to save IPv4 addresses, though. By then (I really, really hope) IPv6 will be mainstream. -- Lionel
Once upon a time, Ricky Beam <jfbeam@gmail.com> said:
On Tue, 21 Apr 2009 18:40:30 -0400, Chris Adams <cmadams@hiwaay.net> wrote:
SSL and FTP are technical justifications for an IP per site.
No they aren't. SSL will work just fine as a name-based virtual host with any modern webserver / browser. (Server Name Indication (SNI) [RFC3546, sec 3.1])
What is your definition of "modern"? According to Wikipedia <http://en.wikipedia.org/wiki/Server_Name_Indication>:

Unsupported Operating Systems and Browsers
The following combinations do not support SNI.
* Windows XP and Internet Explorer 6 or 7
* Konqueror/KDE in any version
* Apache with mod_ssl: there is a patch under review by httpd team for inclusion in future releases, after 2.2.11. See doco at [1]
* Microsoft Internet Information Server IIS (As of 2007)

Seeing as WinXP/IE is still the most common combination, SNI is a long time away from being useful. -- Chris Adams <cmadams@hiwaay.net> Systems and Network Administrator - HiWAAY Internet Services I don't speak for anybody but myself - that's enough trouble.
On Tue, Apr 21, 2009 at 08:24:38PM -0400, Ricky Beam wrote:
On Tue, 21 Apr 2009 18:40:30 -0400, Chris Adams <cmadams@hiwaay.net> wrote:
SSL and FTP are technical justifications for an IP per site.
No they aren't. SSL will work just fine as a name-based virtual host with any modern webserver / browser. (Server Name Indication (SNI) [RFC3546, sec 3.1])
FTP? Who uses FTP these days? Certainly not consumers. Even Cisco pushes almost everything via a webserver. (they still have ftp servers, they just don't put much on them these days.)
well, pretty much anyone who has large datasets to move around. that default 64k buffer in the openssl libs pretty much sucks rocks for large data flows. --bill
On 21-Apr-2009, at 21:50, bmanning@vacation.karoshi.com wrote:
On Tue, Apr 21, 2009 at 08:24:38PM -0400, Ricky Beam wrote:
FTP? Who uses FTP these days? Certainly not consumers. Even Cisco pushes almost everything via a webserver. (they still have ftp servers, they just don't put much on them these days.)
well, pretty much anyone who has large datasets to move around. that default 64k buffer in the openssl libs pretty much sucks rocks for large data flows.
So you're saying FTP with no SSL is better than HTTP with no SSL? Joe
On Wed, Apr 22, 2009 at 10:17:38AM -0400, Joe Abley wrote:
On 21-Apr-2009, at 21:50, bmanning@vacation.karoshi.com wrote:
On Tue, Apr 21, 2009 at 08:24:38PM -0400, Ricky Beam wrote:
FTP? Who uses FTP these days? Certainly not consumers. Even Cisco pushes almost everything via a webserver. (they still have ftp servers, they just don't put much on them these days.)
well, pretty much anyone who has large datasets to move around. that default 64k buffer in the openssl libs pretty much sucks rocks for large data flows.
So you're saying FTP with no SSL is better than HTTP with no SSL?
Joe
(see me LEAPING to conclusions....) yes. (although I was actually thinking http w/ SSL vs FTP w/o SSL) a really good review of the options was presented at the DoE/JT meeting at UNL last summer. Basically, tuned FTP w/ large window support is still king for pushing large datasets around. --bill
On Wed, Apr 22, 2009 at 02:27:14PM +0000, bmanning@vacation.karoshi.com wrote:
On Wed, Apr 22, 2009 at 10:17:38AM -0400, Joe Abley wrote:
On 21-Apr-2009, at 21:50, bmanning@vacation.karoshi.com wrote:
On Tue, Apr 21, 2009 at 08:24:38PM -0400, Ricky Beam wrote:
FTP? Who uses FTP these days? Certainly not consumers. Even Cisco pushes almost everything via a webserver. (they still have ftp servers, they just don't put much on them these days.)
well, pretty much anyone who has large datasets to move around. that default 64k buffer in the openssl libs pretty much sucks rocks for large data flows.
So you're saying FTP with no SSL is better than HTTP with no SSL?
Joe
(see me LEAPING to conclusions....)
yes. (although I was actually thinking http w/ SSL vs FTP w/o SSL) a really good review of the options was presented at the DoE/JT meeting at UNL last summer. Basically, tuned FTP w/ large window support is still king for pushing large datasets around.
--bill
whiner Joe... here's the link: http://www.internet2.edu/presentations/jt2008jul/20080720-tierney.pdf --bill
On Wed, Apr 22, 2009 at 10:17:38AM -0400, Joe Abley wrote:
On 21-Apr-2009, at 21:50, bmanning@vacation.karoshi.com wrote:
On Tue, Apr 21, 2009 at 08:24:38PM -0400, Ricky Beam wrote:
FTP? Who uses FTP these days? Certainly not consumers. Even Cisco pushes almost everything via a webserver. (they still have ftp servers, they just don't put much on them these days.)
well, pretty much anyone who has large datasets to move around. that default 64k buffer in the openssl libs pretty much sucks rocks for large data flows.
So you're saying FTP with no SSL is better than HTTP with no SSL?
(see me LEAPING to conclusions....)
yes. (although I was actually thinking http w/ SSL vs FTP w/o SSL) a really good review of the options was presented at the DoE/JT meeting at UNL last summer. Basically, tuned FTP w/ large window support is still king for pushing large datasets around.
Why not just put it all in an e-mail attachment. Geez. Everyone knows that's a great idea. While HTTP remains popular as a way to interact with humans, especially if you want to try to do redirects, acknowledge license agreements, etc., FTP is the file transfer protocol of choice for basic file transfer, and can be trivially automated, optimized, and is overall a good choice for file transfer. Does anyone know what "FTP" stands for, anyways? I've always wondered... ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Wed, 2009-04-22 at 09:42 -0500, Joe Greco wrote:
FTP is the file transfer protocol of choice for basic file transfer, [...] Does anyone know what "FTP" stands for, anyways? I've always wondered...
File Transfer Protocol. I know - it's a tricky one that, don't feel bad :-) Regards, K. -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Karl Auer (kauer@biplane.com.au) +61-2-64957160 (h) http://www.biplane.com.au/~kauer/ +61-428-957160 (mob) GPG fingerprint: 07F3 1DF9 9D45 8BCD 7DD5 00CE 4A44 6A03 F43A 7DEF
On 22 Apr 2009, at 10:42, Joe Greco wrote:
While HTTP remains popular as a way to interact with humans, especially if you want to try to do redirects, acknowledge license agreements, etc., FTP is the file transfer protocol of choice for basic file transfer, and can be trivially automated, optimized, and is overall a good choice for file transfer.
Does anyone know what "FTP" stands for, anyways? I've always wondered...
:-) I was mainly poking at the fact that Bill seemed to be comparing SSL-wrapped file transfer with non-SSL-wrapped file transfer, but I'm intrigued by the idea that FTP without SSL might be faster than HTTP without SSL, since in my mind outside the minimal amount of signalling involved they both amount to little more than a single TCP stream. Bill sent me a link to a paper. I will read it. However, I take some small issue with the assertion that FTP is easier to script than HTTP. The only way I have ever found it easy to script FTP (outside of writing dedicated expect scripts to drive clients, which really seems like cheating) is to use tools like curl, and I don't see why HTTP is more difficult than FTP as a protocol in that case. Perhaps I'm missing something. Joe
On 23/04/2009, at 3:33 AM, Joe Abley wrote:
However, I take some small issue with the assertion that FTP is easier to script than HTTP. The only way I have ever found it easy to script FTP (outside of writing dedicated expect scripts to drive clients, which really seems like cheating) is to use tools like curl, and I don't see why HTTP is more difficult than FTP as a protocol in that case. Perhaps I'm missing something.
It looks like curl can upload stuff (-d @file) but you have to have something on the server to accept it. FTP sounds easier. -- Nathan Ward
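Both protocols are scriptable without expect gymnastics; the practical difference Nathan points at is that an FTP server will store an upload natively, while HTTP needs something on the far end willing to accept a PUT or POST. A minimal sketch of each using Python's standard library (hosts, paths, and credentials are placeholders):

#!/usr/bin/env python3
# Upload the same file via FTP and via HTTP PUT.
# The FTP server stores the file natively; the HTTP side only works if
# the server has a handler (WebDAV, CGI, etc.) configured to accept it.
from ftplib import FTP
import urllib.request

FILENAME = "site.tar.gz"                # placeholder

# FTP: log in and store the file
ftp = FTP("ftp.example.com")            # placeholder host
ftp.login("user", "password")           # placeholder credentials
with open(FILENAME, "rb") as f:
    ftp.storbinary("STOR " + FILENAME, f)
ftp.quit()

# HTTP: PUT the file; needs something server-side to accept it
with open(FILENAME, "rb") as f:
    req = urllib.request.Request(
        "http://www.example.com/upload/" + FILENAME,   # placeholder URL
        data=f.read(),
        method="PUT",
    )
    urllib.request.urlopen(req)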
On Apr 22, 2009, at 7:42 AM, Joe Greco wrote:
While HTTP remains popular as a way to interact with humans, especially if you want to try to do redirects, acknowledge license agreements, etc., FTP is the file transfer protocol of choice for basic file transfer
Speak for yourself. I haven't used FTP to transfer files in 10 years now. About 7 years ago I turned off FTP support for all of our webhosting clients, and forced them to use SFTP. 3 left, for a net loss of $45/month. And we stopped having to deal with the massive undertaking that supporting FTP properly chrooted and capable of dealing with all parts of the multi-mount web platform required. We've never looked back. Every once in a while I find someone who's offering a file I want only via FTP, and I chide them and they fix it ;-) -- Jo Rhett Net Consonance : consonant endings by net philanthropy, open source and other randomness
On Apr 21, 2009, at 6:50 PM, bmanning@vacation.karoshi.com wrote:
FTP? Who uses FTP these days? Certainly not consumers. Even Cisco
well, pretty much anyone who has large datasets to move around. that default 64k buffer in the openssl libs pretty much sucks rocks for large data flows.
Large data sets? So you are saying that 512-byte packets with no windowing work better? Bill, have you measured this? Time to download a 100 MB file over HTTP on a 100 Mbit/s interface: 20 seconds. Time to download a 100 MB file over FTP on a 100 Mbit/s interface: ~7 minutes. And yes, that was FreeBSD with the old version openssl library that shipped with 6.3. -- Jo Rhett Net Consonance : consonant endings by net philanthropy, open source and other randomness
Large data sets? So you are saying that 512-byte packets with no windowing work better? Bill, have you measured this?
Time to download a 100 MB file over HTTP on a 100 Mbit/s interface: 20 seconds. Time to download a 100 MB file over FTP on a 100 Mbit/s interface: ~7 minutes.
And yes, that was FreeBSD with the old version openssl library that shipped with 6.3.
As someone who copies large network trace files around a bit, 100 MB at 100 Mbit/s, over what I presume is a local (low latency) link, is barely a fair test. Many popular web servers choke on serving files >2GB or >4GB in size (Sigh). I'm in New Zealand. It's usually at least 150ms to anywhere, often 300ms, so I feel the pain of small window sizes in popular encryption programs very strongly. Transferring data over high speed research networks means receive windows of at least 2MB, usually more. When popular programs provide their own window of 64kB, things get very slow.
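The arithmetic behind that pain is just the bandwidth-delay product: with a fixed window, TCP throughput can never exceed window divided by round-trip time. A quick worked sketch using the numbers in this thread:

#!/usr/bin/env python3
# Max TCP throughput with a fixed window: throughput <= window / RTT.
def max_throughput_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1e6

for rtt_ms in (1, 150, 300):
    mbps = max_throughput_mbps(64 * 1024, rtt_ms / 1000.0)
    print("64 KB window, %3d ms RTT -> at most %7.1f Mbit/s" % (rtt_ms, mbps))

# Output:
# 64 KB window,   1 ms RTT -> at most   524.3 Mbit/s
# 64 KB window, 150 ms RTT -> at most     3.5 Mbit/s
# 64 KB window, 300 ms RTT -> at most     1.7 Mbit/s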
Date: Fri, 24 Apr 2009 19:05:26 +1200 From: Perry Lorier <perry@coders.net>
Large data sets? So you are saying that 512-byte packets with no windowing work better? Bill, have you measured this?
Time to download a 100 MB file over HTTP on a 100 Mbit/s interface: 20 seconds. Time to download a 100 MB file over FTP on a 100 Mbit/s interface: ~7 minutes.
And yes, that was FreeBSD with the old version openssl library that shipped with 6.3.
As someone who copies large network trace files around a bit, 100 MB at 100 Mbit/s, over what I presume is a local (low latency) link, is barely a fair test. Many popular web servers choke on serving files >2GB or >4GB in size (Sigh). I'm in New Zealand. It's usually at least 150ms to anywhere, often 300ms, so I feel the pain of small window sizes in popular encryption programs very strongly. Transferring data over high speed research networks means receive windows of at least 2MB, usually more. When popular programs provide their own window of 64kB, things get very slow.
Very few people (including some on this list) have much idea of the difficulty in moving large volumes of data between continents, especially between the Pacific (China, NZ, Australia, Japan, ...) and either Europe or North America. Getting TCP bandwidth over about 1 Gbps is very difficult. Getting over 5 Gbps is nearly impossible. I can get 5 Gbps pretty reliably with tuned end systems over a 100 ms RTT, but that drops to about 2 Gbps at 200 ms.

A good web site to read about getting fast bulk data transfers is: http://fasterdata.es.net It is aimed at DOE and DOE-related researchers, but the information is valid for anyone needing to move data on a terabyte or greater scale over long distances. We move a LOT of data between our facilities at FermiLab in Chicago and Brookhaven in New York and CERN in Europe. A terabyte is just the opener for that data.

Also, if you see anything that needs improvement or correction, please let me know. -- R. Kevin Oberman, Network Engineer Energy Sciences Network (ESnet) Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab) E-mail: oberman@es.net Phone: +1 510 486-8634 Key fingerprint:059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
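Turning those rates around gives the receive window you need to sustain them, which is why the host-tuning guides spend so much time on buffer sizes. Same arithmetic as above, solved for the window (a sketch, not a benchmark):

#!/usr/bin/env python3
# Window needed to sustain a given rate: window >= rate * RTT
# (the bandwidth-delay product).
def window_needed_mb(rate_gbps, rtt_ms):
    return rate_gbps * 1e9 / 8 * (rtt_ms / 1000.0) / 1e6

for rate, rtt in ((1, 100), (5, 100), (2, 200), (5, 200)):
    print("%d Gbit/s at %3d ms RTT needs a ~%5.1f MB receive window"
          % (rate, rtt, window_needed_mb(rate, rtt)))

# Output:
# 1 Gbit/s at 100 ms RTT needs a ~ 12.5 MB receive window
# 5 Gbit/s at 100 ms RTT needs a ~ 62.5 MB receive window
# 2 Gbit/s at 200 ms RTT needs a ~ 50.0 MB receive window
# 5 Gbit/s at 200 ms RTT needs a ~125.0 MB receive window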
A good web site to read about getting fast bulk data transfers is: http://fasterdata.es.net
indeed mtu clue is also useful. here on tokyo b-flets, and i would guess in many other pppoe environments, you need to tune or lose big-time. randy
Randy Bush <randy@psg.com> writes:
mtu clue is also useful. here on tokyo b-flets, and i would guess in many other pppoe environments, you need to tune or lose big-time.
But not difficult to beneficially MiM:

in pf:
    scrub in on gre0 max-mss 1400
    scrub out on gre0 max-mss 1400

in cisco-land:
    ip tcp adjust-mss 1400

i'm sure the linux folks can offer up something similar...

-r
Default MSS for most Linux systems is 0, which causes the kernel to calculate it as the interface MTU minus 40 bytes. You can either change the MTU on the interface or, more specifically, use the 'ip route <ipblock> dev <interface> advmss <new mss>' command to update it on a per-route basis.

~J

-----Original Message-----
From: Robert E. Seastrom [mailto:rs@seastrom.com]
Sent: Thursday, April 30, 2009 7:12 AM
To: Randy Bush
Cc: nanog@nanog.org
Subject: Re: Important New Requirement for IPv4 Requests

Randy Bush <randy@psg.com> writes:
mtu clue is also useful. here on tokyo b-flets, and i would guess in many other pppoe environments, you need to tune or lose big-time.
But not difficult to beneficially MiM:

in pf:
    scrub in on gre0 max-mss 1400
    scrub out on gre0 max-mss 1400

in cisco-land:
    ip tcp adjust-mss 1400

i'm sure the linux folks can offer up something similar...

-r
On Tue, 21 Apr 2009, Jo Rhett wrote:
It's a common request we see. We refuse it, and point them to the Google documentation that shows that unique IPs don't help or hurt their SEO standings.
Some "customers" have wised up and when providing IP justification, they don't mention SEO anymore. However, I've seen several requests in the past couple weeks from customers/prospective customers wanting /24's or larger subnets (or they're not buying/canceling service) where the justification provided was something ARIN would probably be ok with, but IMO was completely FoS. It's hard to tell sales "no" when the customer tells you exactly what they think you want to hear [for IP justification], but your gut tells you "this is BS".

BTW, I admit I've paid little attention to the legacy vs ARIN members arguments, as I'm not a legacy space holder and my time is largely occupied by more pressing [to me] matters...but why do legacy holders get a free ride?

If we look at what happened with domain registration (at least for com|net|org), back in the old days, you sent off an email to hostmaster@internic.net and you got your domain registered. There were no fees. Then Network Solutions took over and domain name registrations cost money. Existing domains were not grandfathered in and either you started paying a yearly fee for your domains or you lost them. Why didn't the same thing happen when Internic/IANA stopped directly handing out IPs and the RIRs took over that function?

----------------------------------------------------------------------
 Jon Lewis                   |  I route
 Senior Network Engineer     |  therefore you are
 Atlantic Net                |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
On Apr 21, 2009, at 4:55 PM, Jon Lewis wrote:
Some "customers" have wised up and when providing IP justification, they don't mention SEO anymore. However, I've seen several requests in the past couple weeks from customers/prospective customers wanting /24's or larger subnets (or they're not buying/canceling service) where the justification provided was something ARIN would probably be ok with, but IMO was completely FoS. It's hard to tell sales "no" when the customer tells you exactly what they think you want to hear [for IP justification], but your gut tells you "this is BS".
Then you have an obligation to investigate. It's in the NRPM ;-) For our part, it becomes really easy. When someone submits a request for 200 physical hosts and their profile says they are paying for 40 amps of power... yeah, it's easy to know they are lying ;-) It is a problem because some ISPs don't care and just give away IPs, so customers get annoyed with us when I ask for proper justification. Oh well ;-) -- Jo Rhett Net Consonance : consonant endings by net philanthropy, open source and other randomness
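The back-of-the-envelope check is quick. A sketch of the power sanity test (the 120 V circuit voltage and the per-server draw are illustrative assumptions, not anything from the actual request):

#!/usr/bin/env python3
# Sanity-check a claimed host count against the power actually purchased.
volts = 120            # assumed circuit voltage
amps = 40              # power the customer pays for
hosts_claimed = 200

watts_per_host = (volts * amps) / hosts_claimed
print("%.0f W available, about %.0f W per claimed host" % (volts * amps, watts_per_host))
# ~24 W per "physical host" -- an order of magnitude below what a real
# 1U server draws, so the claimed host count can't fit the power budget.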
On Tue, Apr 21, 2009 at 02:51:11PM -0700, Jo Rhett wrote:
On Apr 21, 2009, at 1:58 PM, David Hubbard wrote:
Raising the price won't help; there's already a huge amount of wasted address space by web hosts selling IP addresses to customers who need them solely for 'seo purposes' rather
It's a common request we see. We refuse it, and point them to the Google documentation that shows that unique IPs don't help or hurt their SEO standings.
Then they come back with a request for IPs for SSL certificates, which is a valid technical justification. BTDT. People will find a way to do the stupid thing they want to do. - Matt
On Apr 21, 2009, at 5:20 PM, Matthew Palmer wrote:
Then they come back with a request for IPs for SSL certificates, which is a valid technical justification. BTDT. People will find a way to do the stupid thing they want to do.
Most of the stupid people don't, actually. That's the funny thing that surprises me -- just how obviously lame the justifications are, and how, even with direct statements about how to justify the IP space, they are unable to do so. My god, it's really not hard to build a valid justification for more space than you need -- seriously. But these people just can't pull it off. Likewise, every company with whom I've had to debate the topic has failed within 18 months, so the problem pervades the organization ;-) -- Jo Rhett Net Consonance : consonant endings by net philanthropy, open source and other randomness
participants (18)
- bmanning@vacation.karoshi.com
- Chris Adams
- David Hubbard
- Jo Rhett
- Joe Abley
- Joe Greco
- Jon Lewis
- Justin Horstman
- Karl Auer
- Ken A
- Kevin Oberman
- Lionel Elie Mamane
- Matthew Palmer
- Nathan Ward
- Perry Lorier
- Randy Bush
- Ricky Beam
- Robert E. Seastrom