Hi, yes, I had that issue. It was a firmware problem... and a timed one too :( We had a customer with a few RAID5 arrays of 3 drives each; once one drive went bad, he had about 20 minutes before another would. And they were bricked, btw; you couldn't just upload the new firmware. Wasn't a happy weekend.
-----
Alain Hebert                ahebert@pubnix.net
PubNIX Inc.
50 boul. St-Charles
P.O. Box 26770, Beaconsfield, Quebec H9W 6G7
Tel: 514-990-5911  http://www.pubnix.net  Fax: 514-990-9443

On 02/06/13 15:47, Jay Ashworth wrote:
----- Original Message -----
From: "Kristian Kielhofner" <kris@kriskinc.com>

Over the years I've read some interesting (horrifying?) tales of debugging on NANOG. It seems I finally have my own to contribute:
http://blog.krisk.org/2013/02/packets-of-death.html
The strangest issue I've experienced, that's for sure. FWIW, I had a similar situation crop up a couple of years ago with *five different* Seagate SATA drives: they grew some specific type of bad spot on the platter which, if you even tried to read it, would *knock the drive adapter offline until a power-cycle*; even a reboot didn't clear it.
Nice writeup.
Cheers, -- jra