Questions about anycasting setup
Hello everyone.

I am working on creating a small anycast-based setup with 3-4 servers in the US. The plan is to use this for DNS and later for CDN setups. I have a few questions in mind and was wondering if you could shed some light on them:

1. For anycast, does it make sense to announce a /24 from different ASNs (one per datacenter), or will it be a problem to have one block announced from multiple origin ASNs? Should I avoid that, and instead put my own router behind each datacenter's network so that a single ASN announces the anycast block everywhere?

2. We plan to use this anycast setup for DNS during the initial few months. Assuming low DNS traffic - say ~10Mbps on average on a 100Mbps port - and transit from just a single network (the datacenter itself), is this setup OK with a simple software BGP speaker like Quagga or BIRD? Colocating routers would certainly be slow and expensive; do they offer any direct advantage in such simple setups?

3. IPv6! I am looking at supporting IPv6 in the anycast setup right from the start, but I can't really find a good prefix size for the anycast announcement. I can see Hurricane Electric as well as Google using a whole /32 block for IPv6. So is a /32 the standard? We have only one /32 allocation from ARIN, so if a /32 is required we would likely have to get another /32 just for anycast - or can we use a /48 without issues? Also, is a /48 a good size for breaking up the /32, so that we can make per-datacenter /48 announcements in a simple unicast setup?

I apologize for any wrong questions/logic - I am really new to this. Please correct me if I am wrong on any concept. Appreciate your help. Thanks.

--
Anurag Bhatia
anuragbhatia.com
or simply - http://[2001:470:26:78f::5] if you are on an IPv6 connected network!
Twitter: @anurag_bhatia <https://twitter.com/#!/anurag_bhatia>
Linkedin: http://linkedin.anuragbhatia.com
Hello, Anurag. On Mar 8, 2012, at 9:51 PM, Anurag Bhatia wrote:
1. For anycast, does it make sense to announce a /24 from different ASNs (one per datacenter), or will it be a problem to have one block announced from multiple origin ASNs?
Keeping a consistent announcing ASN for your prefix is thought to be best-practice, and if you don't do so, eventually there will be people who will undoubtedly complain, but there is no technical difficulty with announcing your same prefix from multiple origin ASNs. Any difficulties you encounter will be because of people aggressively filtering what they choose to listen to.
2. We plan to use this anycast setup for DNS during the initial few months. Assuming low DNS traffic - say ~10Mbps on average on a 100Mbps port - and transit from just a single network (the datacenter itself), is this setup OK with a simple software BGP speaker like Quagga or BIRD?
Yes, and in fact, that's how nearly all large production anycast networks are built… Each anycast instance contains its own BGP speaker, which announces its service prefix to adjacent BGP-speaking routers, whether those be your own, or your transit-provider's. Doing exactly as you describe is, in fact, best-practice.
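A minimal Quagga bgpd.conf sketch of the arrangement described above - one BGP speaker on each anycast node, announcing the service prefix to the adjacent transit router. The ASNs and addresses are illustrative placeholders (private ASNs, RFC 5737 documentation space), not real values:

  ! bgpd.conf on one anycast node (sketch, hypothetical values)
  hostname anycast-node-1
  router bgp 64512                          ! your (consistent) origin ASN
   bgp router-id 198.51.100.10
   network 192.0.2.0/24                     ! the anycast service prefix
   neighbor 198.51.100.1 remote-as 64600    ! the datacenter/transit router
   neighbor 198.51.100.1 description upstream-transit
  !
  ! The service address itself (e.g. 192.0.2.53) would be configured on a
  ! loopback so the local nameserver can answer on it:
  !   ip addr add 192.0.2.53/32 dev lo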
3. IPv6! Is a /32 the standard? We have only one /32 allocation from ARIN, so if a /32 is required we would likely have to get another /32 just for anycast - or can we use a /48 without issues? Also, is a /48 a good size for breaking up the /32, so that we can make per-datacenter /48 announcements in a simple unicast setup?
A /48 is quite reasonable. Announcing a whole /32 just for your anycast service would be wasteful. Good luck!

-Bill
Bill, woody@pch.net (Bill Woodcock) wrote:
2. We plan to use this anycast setup for DNS during the initial few months. Assuming low DNS traffic - say ~10Mbps on average on a 100Mbps port - and transit from just a single network (the datacenter itself), is this setup OK with a simple software BGP speaker like Quagga or BIRD?
Yes, and in fact, that's how nearly all large production anycast networks are built… Each anycast instance contains its own BGP speaker, which announces its service prefix to adjacent BGP-speaking routers, whether those be your own, or your transit-provider's. Doing exactly as you describe is, in fact, best-practice.
Well, let's say, using Quagga/BIRD might not really be best practice for everybody... (e.g., *we* are using Cisco equipment for this)

Using anycasting for DNS is, to my knowledge, best practice nowadays.
3. IPv6! Is a /32 the standard? We have only one /32 allocation from ARIN, so if a /32 is required we would likely have to get another /32 just for anycast - or can we use a /48 without issues? Also, is a /48 a good size for breaking up the /32, so that we can make per-datacenter /48 announcements in a simple unicast setup?
A /48 is quite reasonable. Announcing a whole /32 just for your anycast service would be wasteful.
Why? It's simply another prefix, no matter how big. It might look wasteful, but if *that* is the allocation you *have*, it's the one you ought to use. One should be careful - people do filter on allocation lengths, so breaking a /48 out of a /32 allocation and advertising it on its own can lead to it being filtered. Elmar.
On Mar 9, 2012, at 12:11 AM, Elmar K. Bins wrote:
3. IPv6! Is a /32 the standard? We have only one /32 allocation from ARIN, so if a /32 is required we would likely have to get another /32 just for anycast - or can we use a /48 without issues? Also, is a /48 a good size for breaking up the /32, so that we can make per-datacenter /48 announcements in a simple unicast setup?
A /48 is quite reasonable. Announcing a whole /32 just for your anycast service would be wasteful.
Why? It's simply another prefix, no matter how big. It might look wasteful, but if *that* is the allocation you *have*, it's the one you ought to use.
One should be careful - people do filter on allocation lengths, so breaking a /48 out of a /32 allocation and advertising it on its own can lead to it being filtered.
If you know anyone who is filtering /48s, you can start telling them to STOP doing so, as a good citizen of the IPv6 Internet. I agree with Woody: anything bigger than a /48 for anycast is a waste. mehmet
On 03/09/2012 12:11 AM, Elmar K. Bins wrote:
Bill,
woody@pch.net (Bill Woodcock) wrote:
2. We plan to use this anycast setup for DNS during the initial few months. Assuming low DNS traffic - say ~10Mbps on average on a 100Mbps port - and transit from just a single network (the datacenter itself), is this setup OK with a simple software BGP speaker like Quagga or BIRD?

Yes, and in fact, that's how nearly all large production anycast networks are built… Each anycast instance contains its own BGP speaker, which announces its service prefix to adjacent BGP-speaking routers, whether those be your own, or your transit-provider's. Doing exactly as you describe is, in fact, best-practice.

Well, let's say, using Quagga/BIRD might not really be best practice for everybody... (e.g., *we* are using Cisco equipment for this)

Actually there is a *very* good reason why many (most?) anycast instances use quagga/BIRD/gated/etc. to speak BGP (or even OSPF for internal anycast) - something that using a Cisco (or any separate router) usually won't accomplish.
-- Pete
Re Bill, pete@altadena.net (Pete Carah) wrote:
Well, let's say, using Quagga/BIRD might not really be best practice for everybody... (e.g., *we* are using Cisco equipment for this)

Actually there is a *very* good reason why many (most?) anycast instances use quagga/BIRD/gated/etc. to speak BGP (or even OSPF for internal anycast) - something that using a Cisco (or any separate router) usually won't accomplish.
Please enlighten me... Elmar.
On Mar 9, 2012, at 1:01 AM, Pete Carah wrote:
Well, let's say, using Quagga/BIRD might not really be best practice for everybody... (e.g., *we* are using Cisco equipment for this)

Actually there is a *very* good reason why many (most?) anycast instances use quagga/BIRD/gated/etc. to speak BGP (or even OSPF for internal anycast) - something that using a Cisco (or any separate router) usually won't accomplish.
I've done this two ways.

I've used Quagga to announce routes directly from the anycast servers. This guarantees that the route will go away if the server completely goes away, and that traffic will be directed elsewhere. It also allows you to run scripts on the servers that can withdraw the routes in other circumstances, such as when a script running on the server detects that the server is non-responsive (or overloaded).

I've also used load balancers in front of the name servers. Like Quagga running directly on the server, a load balancer can withdraw routes when all servers behind it stop responding. It has some advantages, in that it can withdraw routes to non-responsive servers even in cases where the server may be too confused to detect its own problems and send the appropriate messages to Quagga. It can spread load among a larger collection of servers than a router would be able to on its own, sit in front of the servers and do rate limiting, and things like that. It could help with the overload issue Bill mentions by selectively sending some queries to other sites, without the all-or-nothing effect you get from a BGP route withdrawal. On the other hand, load balancers aren't cheap, and once installed in the middle of a network they become one more device to fail.

I have no idea what Cisco equipment Elmar is using, but I wouldn't jump to the conclusion that it can't withdraw routes when needed.

-Steve
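A rough sketch of the first approach Steve describes - a watchdog script on the server that withdraws the Quagga announcement when the local nameserver stops answering. The prefix, ASN, and service address are hypothetical, and a production version would want logging, damping, and more careful probing:

  #!/usr/bin/env python3
  # Withdraw/restore the anycast announcement based on a local DNS probe.
  import subprocess, time

  PREFIX = "192.0.2.0/24"      # hypothetical anycast service prefix
  LOCAL_AS = "64512"           # hypothetical origin ASN
  SERVICE_IP = "192.0.2.53"    # address the local nameserver listens on

  def dns_alive():
      # Probe the local nameserver with dig; timeout or error means dead.
      try:
          rc = subprocess.call(
              ["dig", "+time=2", "+tries=1", "@" + SERVICE_IP,
               "health-check.example.com", "A"],
              stdout=subprocess.DEVNULL)
          return rc == 0
      except OSError:
          return False

  def set_announcement(up):
      # Ask the local Quagga bgpd, via vtysh, to add or remove the network.
      stmt = ("" if up else "no ") + "network " + PREFIX
      subprocess.call(["vtysh", "-c", "configure terminal",
                       "-c", "router bgp " + LOCAL_AS, "-c", stmt])

  announced = True
  while True:
      alive = dns_alive()
      if alive != announced:    # only reconfigure on a state change
          set_announcement(alive)
          announced = alive
      time.sleep(5)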
Steve Gibbard (scg) writes:
I have no idea what Cisco equipment Elmar is using, but I wouldn't jump to the conclusion that it can't withdraw routes when needed.
Wouldn't the DNS bit of IP SLA do most of what's needed on IOS? http://www.cisco.com/en/US/docs/ios/12_4/ip_sla/configuration/guide/hsdns.ht...

There are some interesting examples at www.cisco.com/web/CA/events/pdfs/CNSF2011-Automations_for_Monitoring_and_Troubleshooting_your_Cisco_IOS_Network-Dan_Jerome.pdf (slide 29 and onwards).

Note: this is more of a question than an assertion. I've used quagga/ospfd for DNS anycasting within ISPs, with a script to monitor the nameserver response, but I'd love to hear what people are doing that's not host-based.
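For what it's worth, the IP SLA approach would presumably look something like the following - untested, with all addresses and the ASN hypothetical: a DNS probe, a tracked static route for the service prefix, and redistribution into BGP so the announcement follows the probe's state:

  ip sla 10
   dns health-check.example.com name-server 192.0.2.53
   frequency 10
  ip sla schedule 10 life forever start-time now
  !
  track 10 ip sla 10 reachability
  !
  ! The static route (and hence the BGP announcement) exists only while
  ! the DNS probe succeeds; 203.0.113.2 is the server-facing next hop.
  ip route 192.0.2.0 255.255.255.0 203.0.113.2 track 10
  !
  router bgp 64512
   redistribute static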
Morn' Steve, scg@gibbard.org (Steve Gibbard) wrote:
I have no idea what Cisco equipment Elmar is using, but I wouldn't jump to the conclusion that it can't withdraw routes when needed.
We use scripts external to both the routing platform and the service delivery platform to check the service and reconfigure L3 equipment (which is all kinds of L3 capable hardware). Elmar.
Hello everyone

Thought I would re-open this thread and discuss a couple of doubts I have in mind regarding the same setup.

I tried doing anycast with 3 nodes, and it seems like it didn't work well at all. ISPs prefer their own route or their customer's route (which is our transit provider), and there is almost no "short/local route" effect. I am presently using Charter + Cogent/Gblx + Tinet for this setup. I was looking for some advice on this issue:

1. How does one deal with ISPs not preferring the local node on their peers' networks and instead going to a far-off node on their own (or a customer's) network? Is this just normal, with no fix via any BGP announcement method or configuration parameter? Should one prefer taking transit for all locations from the same ISP?

2. I am using Quagga on all instances. Any advice on configuration file parameters specifically for an anycast instance? (I would be happy to share the existing conf if required.) I did try AS-path prepending, but it didn't seem to help much (or maybe I failed to get it working). What parameters can one use in the conf for the anycast case (BGP MEDs)?

3. Was putting up single-homed nodes, with no peering and transit from a single ISP, a poor idea? I wonder how other small players do it - do you take multiple upstreams for all anycast DNS nodes? Route withdrawal, though, is pretty much instant, and I see no problems with it.

Also, I have not got the node in the EU up yet (which will be under the colo's network, which is under Telia). I guess that since the distance between the EU and the US is much greater than between our domestic US nodes, I will probably see the local/near-routing effect with the EU server. But I am still clueless about how to get it working in the current setup.

Appreciate everyone's comments and help. Thanks.

--
Anurag Bhatia
anuragbhatia.com
or simply - http://[2001:470:26:78f::5] if you are on an IPv6 connected network!
Linkedin <http://in.linkedin.com/in/anuragbhatia21> | Twitter <https://twitter.com/anurag_bhatia> | Google+ <https://plus.google.com/118280168625121532854>
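On question 2, the knobs being asked about would look roughly like this in Quagga syntax (all values hypothetical). One caveat worth stating explicitly: prepending only de-preferences a path among otherwise-equal choices - a remote network's local-preference for its own customer routes is evaluated before AS-path length, which is exactly the behaviour described above. MED is weaker still: it is only compared between routes received from the same neighboring AS, so it rarely helps across different transits.

  router bgp 64512
   network 192.0.2.0/24
   neighbor 198.51.100.1 remote-as 64600
   neighbor 198.51.100.1 route-map TO-UPSTREAM out
  !
  route-map TO-UPSTREAM permit 10
   set as-path prepend 64512 64512    ! de-preference this node's path
   set metric 50                      ! MED; compared only within one neighbor AS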
On Jun 3, 2012, at 12:35 PM, Anurag Bhatia wrote:
I tried doing anycast with 3 nodes, and it seems like it didn't work well at all. ISPs prefer their own route or their customer's route (which is our transit provider), and there is almost no "short/local route" effect.
Correct. That's why you need to use the same transit providers at each location. http://www.pch.net/resources/papers//dns-service-architecture/dns-service-ar... Slides 20-29.

-Bill
On Jun 3, 2012, at 2:11 PM, Bill Woodcock wrote:
On Jun 3, 2012, at 12:35 PM, Anurag Bhatia wrote:
I tried doing anycast with 3 nodes, and it seems like it didn't work well at all. ISPs prefer their own route or their customer's route (which is our transit provider), and there is almost no "short/local route" effect.
Correct. That's why you need to use the same transit providers at each location.
It could be a nightmare to try to balance the traffic when you are using different providers. You can go ahead and try path prepending, but you will always find some strangeness going on regardless. As Bill mentioned, using the same transit will help - especially if you use a transit provider that has pre-defined communities which let you control whether (and even where, geographically) your prefix is advertised, and prepend simply by sending communities out. You will save lots of time.

Mehmet
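As a sketch of what Mehmet describes, again in Quagga syntax: the community values below are invented for illustration - each transit provider publishes its own list of action communities (don't announce in region X, prepend toward peers, and so on):

  ! Suppose the (hypothetical) transit AS 64600 documents:
  !   64600:80  = do not announce to European peers
  !   64600:121 = prepend 64600 once toward all peers
  route-map TE-OUT permit 10
   set community 64600:80 64600:121
  !
  router bgp 64512
   neighbor 198.51.100.1 send-community
   neighbor 198.51.100.1 route-map TE-OUT out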
On Mar 9, 2012, at 12:11 AM, Elmar K. Bins wrote:
Well, let's say, using Quagga/BIRD might not really be best practice for everybody... (e.g., *we* are using Cisco equipment for this)
How does your Cisco know whether an adjacent nameserver is heavily loaded, and adjust its BGP announcements accordingly?

-Bill
Re Bill, woody@pch.net (Bill Woodcock) wrote:
Well, let's say, using Quagga/BIRD might not really be best practice for everybody... (e.g., *we* are using Cisco equipment for this)

How does your Cisco know whether an adjacent nameserver is heavily loaded, and adjust its BGP announcements accordingly?
It doesn't have to. I don't know how you guys do it, but we take great care to keep a minimum of 70% spare capacity during standard operation. Elmar.
On Mar 9, 2012, at 1:34 AM, Elmar K. Bins wrote:
Re Bill,
woody@pch.net (Bill Woodcock) wrote:
Well, let's say, using Quagga/BIRD might not really be best practice for everybody... (e.g., *we* are using Cisco equipment for this)

How does your Cisco know whether an adjacent nameserver is heavily loaded, and adjust its BGP announcements accordingly?
It doesn't have to.
I don't know how you guys do it, but we take great care to keep a minimum of 70% spare capacity during standard operation.
RFC 2870 section 2.3 suggests 33%. How us guys do it is 2%-3%, since "standard operation" is only the case when nobody's getting DDoSed. And then we have a backup plan, which is to be able to redirect queries away from nodes that are overloaded. And we have backup plans for the backup plans. But then, we've been doing anycast DNS for twenty years now, so we've had some time to develop those plans. I think what you're hearing from other people, though, is that having a backup plan is, indeed, best practice.

-Bill
On 03/09/2012 01:34 AM, Elmar K. Bins wrote:
Re Bill,
woody@pch.net (Bill Woodcock) wrote:
Well, let's say, using Quagga/BIRD might not really be best practice for everybody... (e.g., *we* are using Cisco equipment for this)

How does your Cisco know whether an adjacent nameserver is heavily loaded, and adjust its BGP announcements accordingly?

It doesn't have to.
I don't know how you guys do it, but we take great care to keep a minimum of 70% spare capacity during standard operation.
My point had to do with resilience in the face of hardware/OS/software failures in the box providing the service. Bill's has more to do with resilience in the face of other network events (e.g. the upstream for one of the boxes has a DDoS; you cannot reasonably provide enough excess capacity to handle that...). Neither of these is addressed by using a separate router to announce the server's anycast route (unless somehow the Cisco is providing the anycasted service itself, which would address my concern but still not Bill's).

Also, Bill is probably talking about root (or bigger public) servers whose load comes from "off-site"; the average load characteristics for those are well known, but there can be extremes that would be hard to plan for (hint - operating at 30% isn't really good enough, probably not 10% either; Bill (and the other Bill) have pretty good stats for this that I've only glanced at...). And it is easy to see how one of the extremes might hit only one or two of the anycast instances. He implies having the instances talk to each other in the background to adjust BGP announcements and maybe help level things. Fortunately, at least for the root servers, the redundancy is at two levels, and anycast is only one of them.

-- Pete
Thanks for the guidance, everyone! Appreciate it. And yes, I can see another thread running with a discussion about the /48 - I am listening silently to it. Multiple ASes doing anycast was a little concern for me, but it now seems settled, since I can see everyone's suggestion is to use a single, own ASN for anycast.
-- Anurag Bhatia anuragbhatia.com or simply - http://[2001:470:26:78f::5] if you are on IPv6 connected network! Twitter: @anurag_bhatia <https://twitter.com/#!/anurag_bhatia> Linkedin: http://linkedin.anuragbhatia.com
participants (7)
- Anurag Bhatia
- Bill Woodcock
- Elmar K. Bins
- Mehmet Akcin
- Pete Carah
- Phil Regnauld
- Steve Gibbard