Re: FYI Netflix is down

Jon Lewis wrote:
Really, you need at least three independent providers: one primary (A), one backup (B), and one "witness" to monitor the others for failure. The witness site can of course be low-powered, as it is not in the data plane of the applications; it just participates in the control plane. In the event of a loss of communication, the majority clique wins, and the isolated environments shut themselves down. This is of course how any sane clustering setup has protected against "split brain" scenarios for decades.

Doing it the right way makes the cloud far less cost-effective and far less "agile". Once you get it all set up just so, change becomes very difficult. All the monitoring and fail-over/fail-back operations are generally application-specific and provider-specific, so there's a lot of lock-in. Tools like RightScale are a step in the right direction, but don't really touch the application layer. And then you have to worry about the availability of yet another provider!

-- RPM
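A minimal sketch of the quorum/witness idea above, assuming each of the three sites can probe the other two over some out-of-band control-plane port. The peer names, addresses, port, and helper functions here are hypothetical, not any particular product's API:

# Quorum/witness sketch: each of three sites (primary, backup, witness) runs
# this check; a site that cannot see a majority of the cluster fences itself.
import socket

PEERS = {
    "primary": ("a.example.net", 7000),
    "backup":  ("b.example.net", 7000),
    "witness": ("w.example.net", 7000),
}
SELF = "primary"  # identity of the site running the check

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Probe a peer's control-plane port; fail fast rather than hanging."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def have_quorum() -> bool:
    visible = 1  # we can always "reach" ourselves
    for name, (host, port) in PEERS.items():
        if name != SELF and can_reach(host, port):
            visible += 1
    return visible > len(PEERS) // 2  # majority clique wins

if __name__ == "__main__":
    if have_quorum():
        print("in the majority partition: keep serving")
    else:
        print("isolated: shut down to avoid split brain")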

On Tue, Jul 3, 2012 at 1:00 PM, Ryan Malayter <malayter@gmail.com> wrote:
I am pretty sure Netflix and others were "trying to do it right", as they all had graceful fail-over to a secondary AWS zone defined. It looks to me like Amazon uses DNS round-robin to load-balance the zones, since their postmortem mentions returning a "list" of addresses for DNS queries, and that explains why services failed to shunt over to the other zones (see the sketch after this message).
http://aws.amazon.com/message/67457/
http://www.wired.com/wiredenterprise/2012/06/real-clouds-crush-amazon/

I am a big believer in using hardware to load-balance across data centers rather than leaving it up to software inside the data center, which might itself fail.

Speaking of services like RightScale, Google announced Compute Engine at Google I/O this year. BuildFax was an early adopter, and they gave it great reviews...
http://www.youtube.com/watch?v=LCjSJ778tGU

It looks like Google has entered the VPS market. 'bout time... ;-]
http://cloud.google.com/products/compute-engine.html

--steve pirk
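The DNS round-robin point earlier in this message only helps if clients actually walk the full address list with a bounded timeout instead of hanging on the first dead address. A rough client-side sketch; the hostname and port are placeholders, not Amazon's actual endpoints:

# Resolve once, then try each returned address until one connects.
import socket

def connect_any(hostname: str, port: int, timeout: float = 3.0) -> socket.socket:
    last_error = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            hostname, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)   # fail fast and move on if this zone is down
            sock.settimeout(None)
            return sock
        except OSError as exc:
            last_error = exc
            sock.close()
    raise OSError(f"all addresses for {hostname}:{port} failed") from last_error

# Example: sock = connect_any("service.example.com", 443)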

On Jul 8, 2012, at 7:27 PM, "steve pirk [egrep]" <steve@pirk.com> wrote:
I am pretty sure Netflix and others were "trying to do it right", as they all had graceful fail-over to a secondary AWS zone defined.
Having a single company as an infrastructure supplier is not "trying to do it right" from an engineering OR business perspective. It's lazy. No matter how many "availability zones" the vendor claims.

On Sun, Jul 8, 2012 at 8:27 PM, steve pirk [egrep] <steve@pirk.com> wrote:
The AWS outage also uncovered bugs on the Netflix side: "Lessons Netflix Learned from the AWS Storm"
http://techblog.netflix.com/2012/07/lessons-netflix-learned-from-aws-storm.h...

For an infrastructure this large, whether you are running your own datacenter or using the cloud, it is certain that the code is not bug-free. And if everything is too automated, then a failure in one component can trigger bugs in areas that no one has ever thought of...

Rayson

==================================================
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/

On Mon, Jul 9, 2012 at 15:50 UTC, Rayson Ho wrote:
"We continue to investigate why these connections were timing out during connect, rather than quickly determining that there was no route to the unavailable hosts and failing quickly." potential translation: "We continue to shoot ourselves in the foot by filtering all ICMP without understanding the implications." Cheers, Dave Hart

On Mon, Jul 9, 2012 at 10:20 AM, Dave Hart <davehart@gmail.com> wrote:
Sorry to mention my favorite hardware vendor again, but that is what I liked about using F5 BigIP devices for load balancing... They did layer-7 URL checking to see if the service was really responding (instead of just pinging or opening a connection to the IP). We ran tests that would do a complete LDAP-over-SSL query to verify that a directory server could actually look up a person; if it failed to answer within a certain time frame, it was taken out of rotation. I do not know if that was ever implemented in production, but we did verify it worked.

On the "software in the hardware can fail" point, my only defense is that you run the watcher devices redundantly and have enough of them to vote misbehaving ones out of service. Oh, and it is best if the global load-balancing hardware/software is located somewhere other than the data centers being monitored.

-- steve pirk
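For flavor, a rough stand-in for the layer-7 monitor described above, not F5's actual configuration: issue a real application request with a deadline and drop a backend from rotation if it fails. The original check was a full LDAP-over-SSL lookup; this sketch uses a plain HTTP GET instead, and the pool names and URLs are made up:

import urllib.error
import urllib.request

POOL = {
    "dc-a": "https://dc-a.example.net/health",
    "dc-b": "https://dc-b.example.net/health",
}

def healthy(url: str, deadline: float = 3.0) -> bool:
    """Layer-7 check: the backend must answer a real request within the deadline."""
    try:
        with urllib.request.urlopen(url, timeout=deadline) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def in_rotation() -> list[str]:
    """Return only the pool members that pass the application-level check."""
    return [name for name, url in POOL.items() if healthy(url)]

if __name__ == "__main__":
    print("serving from:", in_rotation() or "nowhere -- page someone")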
participants (4)
- Dave Hart
- Rayson Ho
- Ryan Malayter
- steve pirk [egrep]