Most energy efficient (home) setup
After reading a number of threads where people list their huge and wasteful, but undoubtedly fun (and sometimes necessary?), home setups complete with dedicated rooms and aircos I felt inclined to ask who has attempted to make a really energy efficient setup?

This may be an interesting read, it uses a plugcomputer: http://www.theregister.co.uk/2010/11/11/diy_zero_energy_home_server/page2.ht...

Admittedly I don't have a need for a full blown home lab since I am not a network engineer, I'm more of a sysadmin/network admin/programmer kind of person... So I can make do with a somewhat minimal set up. But I *do* have tunneled IPv6 from home ;-)

In my current apartment in addition to an el cheapo DSL modem that probably wastes about 10 watts and a "sometimes on" PC workstation I used to have an always on thinkpad (early 2000s model) as my main desktop system and an always on G4 system (pegasos2 in case you care) acting as a mail/web/ssh server. The thinkpad was a refurbished model and it was quite stable, up to 500 days of uptime during its last years. But the hardware slowly disintegrated and when the gfx card died I retired it.

Right now my always on server is a VIA artigo 1100 pico-itx system (replacing the G4 system) and my "router/firewall/modem" is still the el cheapo DSL modem (which runs busybox by the way). I have an upgraded workstation that's "sometimes on", it has a mini itx form factor (AMD phenom2 CPU). I use debian on all systems.

I haven't measured it but I think if the set up would use 30 watts continuously (only taking the always on systems into account) it'd be a lot. Of course it'll spike when I fire up the workstation.

It's not extremely energy efficient but compared to some setups I read about it is. The next step would be to migrate to a plugcomputer or something similar (http://plugcomputer.org/).

Any suggestions and ideas appreciated of course. :-)

Thanks,
Jeroen

--
Earthquake Magnitude: 3.0
Date: Wednesday, February 22, 2012 13:57:33 UTC
Location: Island of Hawaii, Hawaii
Latitude: 19.4252; Longitude: -155.3207
Depth: 3.90 km
Right now my always on server is a VIA artigo 1100 pico-itx system (replacing the G4 system) and my "router/firewall/modem" is still the el cheapo DSL modem (which runs busybox by the way). I have an upgraded workstation that's "sometimes on", it has a mini itx form factor (AMD phenom2 CPU). I use debian on all systems.
I haven't measured it but I think if the set up would use 30 watts continuously (only taking the always on systems into account) it'd be a lot. Of course it'll spike when I fire up the workstation.
It's not extremely energy efficient but compared to some setups I read about it is. The next step would be to migrate to a plugcomputer or something similar (http://plugcomputer.org/).
Any suggestions and ideas appreciated of course. :-)
You want truly energy efficient but not too resource limited like the Pogoplug and stuff like that? Look to Apple's Mac mini.

The current Mac mini "Server" model sports an i7 2.0GHz quad-core CPU and up to 16GB RAM (see OWC for that, IIRC). Two drives, up to 750GB each, or SSD's if you prefer.

12 frickin' watts when idle. Or thereabouts. Think about 40 watts when running full tilt, maybe a bit more.

In the more-realistically-server-grade department, we've built some really nice Supermicro based E3-1230's, 16-32GB, 6x GigE, RAID, six- to eight 2.5" SSD's and Seagate Momentus XT hybrid drives, idle around 60 watts and peak around 100. We've virtualized loads of older boxes onto some of those with good-to-great success. Two of those can replace what took a rackful of machines a decade ago.

Quite frankly, I think most of the "little server" stuff is a bit questionable. We picked up a ProLiant Microserver N36L a while back for NAS use, but quite frankly I'm un-blown-away by its 35 watt baseline performance, when for 45 watts I can get an E3-1230 with 16GB of RAM, run ESXi, and run stuff alongside a NAS VM. (The 60 watt figure is for a more loaded-up-with-stuff box)

Which brings me to the point: for energy efficient home use, you might want to consider a slightly larger/more expensive machine and virtualization. It doesn't have to be an ESXi host. It seems like you can run two or three other servers on a Mac mini Server with stuff like VMware Fusion without stressing things too much, and that might put you in the 20-30 watt range for a flexible setup.

You also don't have to buy a MMS; the lower end Mac mini's are also plenty powerful, can be upgraded similarly, but lack OS X Server and the quad core CPU.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On 22.02.2012, at 22:48, Joe Greco wrote:
You also don't have to buy a MMS; the lower end Mac mini's are also plenty powerful, can be upgraded similarly, but lack OS X Server and the quad core CPU.
With 10.7, Server is now a $50 add-on download from the Mac App Store, no special hardware required.

Stefan

--
Stefan Bethke <stb@lassitu.de>   Fon +49 151 14070811
On 22.02.2012, at 22:48, Joe Greco wrote:
You also don't have to buy a MMS; the lower end Mac mini's are also plenty powerful, can be upgraded similarly, but lack OS X Server and the quad core CPU.
With 10.7, Server is now a $50 add-on download from the Mac App Store, no special hardware required.
I also haven't found it to be particularly *good* at anything; I'm not an OS X guy, and maybe that's part of the problem, but I found Snow Leopard Server a lot more comprehensible in a "this seems really un-Apple-like but at least it makes some sort of sense" way. OS X Lion Server feels like someone just bolted on random bits of server management stuff. If you've ever managed a server with a poorly integrated control panel, it reminds me a little of that.

I believe that there are plenty of people who ditch OS X entirely and do other things with them. I wish ESXi would run on them. I could see *uses* for that.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On 22 Feb 2012, at 22:04, "Stefan Bethke" <stb@lassitu.de> wrote:
On 22.02.2012, at 22:48, Joe Greco wrote:
You also don't have to buy a MMS; the lower end Mac mini's are also plenty powerful, can be upgraded similarly, but lack OS X Server and the quad core CPU.
With 10.7, Server is now a $50 add-on download from the Mac App Store, no special hardware required.
You dudes need to get with the times and put all this stuff in the cloud. Ok so I joke a little.. But I did move a load of stuff from a couple of home servers to some VMs and it works fine. Less to mess around with and prob cheaper too.

The only thing I keep at home now is storage.

--
Leigh Porter
Leigh Porter wrote:
You dudes need to get with the times and put all this stuff in the cloud. Ok so I joke a little..
The "cloud" seems to be a more modern implementation of the mainframe "paradigm" (and now I feel soiled having used 2 such words in one sentence). It has its uses, though it's interesting to see how things go full circle. I predict a move away from "the cloud" in about a decade, give or take.
The only thing I keep at home now is storage.
I do have a few virtual private servers (and use them) and have set up a few VPS serving servers myself. However it's fun to tinker with hardware and if I'd migrate as much as possible to VPS systems it'd take a big chunk of the fun out of it.

As a side note, the main reasons for me to have a more energy efficient setup are not to "go green" (there are better ways for that) but because it is a fun challenge, I dislike paying bigger bills, and I hate the clutter and the noise a big setup brings.

Greetings,
Jeroen

--
Earthquake Magnitude: 3.2
Date: Wednesday, February 22, 2012 22:00:06 UTC
Location: Central Alaska
Latitude: 62.0453; Longitude: -152.4945
Depth: 10.90 km
On 22 Feb 2012, at 22:40, "Jeroen van Aart" <jeroen@mompl.net> wrote:
Leigh Porter wrote:
You dudes need to get with the times and put all this stuff in the cloud. Ok so I joke a little..
The "cloud" seems to be a more modern implementation of the mainframe "paradigm" (and now I feel soiled having used 2 such words in one sentence). It has its uses, though it's interesting to see how things go full circle. I predict a move away from "the cloud" in about a decade, give or take.
Or sooner when people realise that anything not locked away on a box at home is being routinely nosed at for thought crime and illegal quotations or something or other..
I do have a few virtual private servers (and use them) and have set up a few VPS serving servers myself. However it's fun to tinker with hardware and if I'd migrate as much as possible to VPS systems it'd take a big chunk of the fun out of it.
Yeah it does, I wish I had time for the fun of it!

--
Leigh
On 22 Feb 2012, at 22:04, "Stefan Bethke" <stb@lassitu.de> wrote:
On 22.02.2012, at 22:48, Joe Greco wrote:
You also don't have to buy a MMS; the lower end Mac mini's are also plenty powerful, can be upgraded similarly, but lack OS X Server and the quad core CPU.

With 10.7, Server is now a $50 add-on download from the Mac App Store, no special hardware required.
You dudes need to get with the times and put all this stuff in the cloud.
We are. I'm just putting it in *our* cloud, not some random other company's.
Ok so I joke a little.. But I did move a load of stuff from a couple of home servers to some VMs and it works fine. Less to mess around with and prob cheaper too.
The only thing I keep at home now is storage.
If you're keeping the storage, run some VM's alongside.

Quite frankly, it's a little horrifying how quickly people have embraced not owning their own resources. On one hand, sure, it's great not to have to worry about some aspects of it all, but on the other hand...

The web sites that we entrust our data to can, and do, vanish: MySpace. GeoCities. Friendster. Google Videos. Which of those did you predict would eventually fail?

The companies we pay to provide us with services screw up: T-Mobile (Microsoft?) Sidekick. Lala. Megaupload. RIM/Blackberry.

Arbitrary changes in terms of service: Facebook. Dropbox. Google.

You know where I never have to worry about any of that? On gear we own and control.

"Cloud" is a crock of hooey buzzword. There's no "cloud." For the average end user, it is the realization that we've farmed out tasks to unknowable servers across the Internet. For the technical user, it's setting up instances of servers in some large hosting company's big data centers. The "cloud" people refer to today is nothing more than the continued evolution of virtualized hosting. There's nothing magic about it.

You're trusting your data, your processes, or (most likely) both, to arbitrary other companies whose responsibilities are to their shareholders and whose motives are profits. You have no control over the actual management, must trust that they'll let you know if their security has been breached, and you may never find out if someone's gone snooping. It isn't somehow magic and new because someone calls it "cloud."

All this "cloud" stuff? It runs on actual hardware, not up in the sky. And as long as it runs on actual hardware, it'll run faster and better and more responsively on equipment that's less-loaded, better-specced, and much-closer.

Sun had it right all those years ago: "The network is the computer." But it doesn't have to be Amazon's network, or Google's network. We *are* the North American Network Operators' Group. The people here are more than just a little clued about this stuff.

I'm fine with running Netflix out of the cloud. I can tolerate their occasional outages and problems. I'm fine with running other unimportant stuff out of the cloud. But it looks like it is going to be a long time before I have any real interest in running anything of value out of someone else's "cloud."

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
Joe Greco wrote:
Quite frankly, it's a little horrifying how quickly people have embraced not owning their own resources. On one hand, sure, it's great not to have to worry about some aspects of it all, but on the other hand...
The web sites that we entrust our data to can, and do, vanish:
You know where I never have to worry about any of that? On gear we own and control.
"Cloud" is a crock of hooey buzzword. There's no "cloud." For the average end user, it is the realization that we've farmed out tasks to
Sun had it right all those years ago: "The network is the computer." But it doesn't have to be Amazon's network, or Google's network. We *are* the North American Network Operators' Group. The people here are more than just a little clued about this stuff.
I wholeheartedly agree. I couldn't have said it better :-) -- Earthquake Magnitude: 3.2 Date: Wednesday, February 22, 2012 22:00:06 UTC Location: Central Alaska Latitude: 62.0453; Longitude: -152.4945 Depth: 10.90 km
--As of February 22, 2012 3:48:42 PM -0600, Joe Greco is alleged to have said:
Right now my always on server is a VIA artigo 1100 pico-itx system (replacing the G4 system) and my "router/firewall/modem" is still the el cheapo DSL modem (which runs busybox by the way). I have an upgraded workstation that's "sometimes on", it has a mini itx form factor (AMD phenom2 CPU). I use debian on all systems.
I haven't measured it but I think if the set up would use 30 watts continuously (only taking the always on systems into account) it'd be a lot. Of course it'll spike when I fire up the workstation.
It's not extremely energy efficient but compared to some setups I read about it is. The next step would be to migrate to a plugcomputer or something similar (http://plugcomputer.org/).
Any suggestions and ideas appreciated of course. :-)
You want truly energy efficient but not too resource limited like the Pogoplug and stuff like that? Look to Apple's Mac mini.
The current Mac mini "Server" model sports an i7 2.0GHz quad-core CPU and up to 16GB RAM (see OWC for that, IIRC). Two drives, up to 750GB each, or SSD's if you prefer.
--As for the rest, it is mine.

There is an intermediate step as well; something along the lines of an ALIX or Fit-PC (or Netgate) board. These are boards designed for embedded/network applications, mostly. (Although the Fit-PC looks to be more of a thin client desktop.) Depending on the use, one can run a decent home server on one, or even a lightweight *nix desktop.

Most of these don't actually specify what they use, power-wise; they just list what power supply is included. Fit-PC advertises that it runs at .5 watts for standby, 8 watts fully loaded. Many of the others are probably similar, depending on how powerful they actually are, and how you configure them.

Daniel T. Staal

---------------------------------------------------------------
This email copyright the author. Unless otherwise noted, you are expressly allowed to retransmit, quote, or otherwise use the contents for non-commercial purposes. This copyright will expire 5 years after the author's death, or in 30 years, whichever is longer, unless such a period is in excess of local copyright law.
---------------------------------------------------------------
On 02/22/12 21:13, Jeroen van Aart wrote:
I felt inclined to ask who has attempted to make a really energy efficient setup?
My current always-on home server is:
- 3U rackmount box, Supermicro H8SGL, 450 watt '80-plus platinum' PSU
- 8-core Opteron 6128 _underclocked_ to 800MHz
- 16 GB of ECC DDR2
- 8x 2TB SATA 'green' drives from assorted manufacturers
- all fans replaced with near-silent Noctua models
- 2 additional gigE ports (4 total)

I run a few VMs on it to compartmentalize things a bit; the host and most VMs run gentoo-amd64-hardened, virtualized with Qemu-KVM. Host OS routes/firewalls. One VM is boot and NFS-root server for a couple diskless workstations around the house. Another VM runs ntpd, local DNS, HTTP forward proxy, shell, dev tools, etc. Another boots FreeBSD and runs only Postgres.

The box is also my home music & movie system, runs motion-detection software to record video from a security camera, and logs weather and other sensor data.

The server burns 170 to 190 watts in normal use, up to about 280 peak (movie or music playing + diskless workstations running + compiling stuff + video recording + backing up laptop). This is certainly not low absolute energy consumption compared to some of the other things mentioned in this thread, but it also does a lot more, so it might be more efficient, depending on situation. Consider also that the diskless workstations only use around 35 W each (including monitor), and these are the primary personal computers in my home.

In 2011 measured annual energy consumption for the server was 1635 kWh, 186 W average. I estimate the diskless workstations use another 270 kWh or so annually.

-gh/nynex
On Wed, Feb 22, 2012 at 3:48 PM, Joe Greco <jgreco@ns.sol.net> wrote:
The current Mac mini "Server" model sports an i7 2.0GHz quad-core CPU and up to 16GB RAM (see OWC for that, IIRC). Two drives, up to 750GB each, or SSD's if you prefer.
The Mac mini server is quite intriguing with that low power requirement. Unfortunately... 16 GB _Non-ECC_ memory. I sure would not want to run a NAS VM on a server with non-ECC memory that cannot correct single-bit errors, at least with any data I cared much about..

When you have such a large quantity of RAM, single-bit/fade errors caused by background irradiation happen often, although at a fairly low rate. Usually on a workstation it's not an issue, because there is not a massive quantity of idle memory.

If you're running this 24x7 with VMs and Non-ECC memory, it's only a question of time before silent memory corruption shows up in one of the VMs. And silent memory corruption can make its way to the filesystem, or applications' internal saved data structures (such as the contents of a VM's registry database).

True, this can be partially mitigated with backups; but the idea of VMs blue-screening or ESXi crashing with a purple screen every 3 or 4 months sounds annoying.
12 frickin' watts when idle. Or thereabouts. Think about 40 watts when running full tilt, maybe a bit more.
-- -JH
On Wed, Feb 22, 2012 at 3:48 PM, Joe Greco <jgreco@ns.sol.net> wrote:
The current Mac mini "Server" model sports an i7 2.0GHz quad-core CPU and up to 16GB RAM (see OWC for that, IIRC). Two drives, up to 750GB each, or SSD's if you prefer.
The Mac mini server is quite intriguing with that low power requirement. Unfortunately... 16 GB _Non-ECC_ memory. I sure would not want to run a NAS VM on a server with non-ECC memory that cannot correct single-bit errors, at least with any data I cared much about..
When you have such a large quantity of RAM, single-bit/fade errors caused by background irradiation happen often, although at a fairly low rate. Usually on a workstation it's not an issue, because there is not a massive quantity of idle memory.
If you're running this 24x7 with VMs and Non-ECC memory, it's only a question of time before silent memory corruption shows up in one of the VMs.
And silent memory corruption can make its way to the filesystem, or applications' internal saved data structures (such as the contents of a VM's registry database).
True, this can be partially mitigated with backups; but the idea of VMs blue-screening or ESXi crashing with a purple screen every 3 or 4 months sounds annoying.
While I don't disagree with the general thought, one could also say it's just a matter of time before your server's power supply fails, or a fan fails, or a hard drive fails. Since we don't hear about Mac mini server users screaming about how their servers are constantly crashing, the severity and frequency of memory corruption events may not be anywhere near what you suggest. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Sun, 15 Apr 2012 01:46:29 -0500, Joe Greco said:
Since we don't hear about Mac mini server users screaming about how their servers are constantly crashing, the severity and frequency of
Googling for 'mac mini server crash' gets about 11.6M hits. I gave up after 10 pages of results, but up till that point most did in fact seem to be about crashes on Mac mini servers (the mail you replied to was on page 8 at the time).
memory corruption events may not be anywhere near what you suggest.
"the severity and frequency of *noticed* memory corruption events". FTFY. (Keep in mind that if the box doesn't have ECC or at least parity, you *won't know* you had a bit fllip until you dereference that memory location. At which point if you're *lucky* you'll get a random crash that forces ou to reboot right away. If you're unlucky, you won't notice till you try to re-mount the disks after a reboot 2-3 months later....)
And silent memory corruption can make its way to the filesystem, or applications' internal saved data structures (such as the contents of a VM's registry database).
Since we don't hear about Mac mini server users screaming about how their servers are constantly crashing, the severity and frequency of memory corruption events may not be anywhere near what you suggest.
ECC is an absolute MUST. Case closed- unless you like corrupt encryption keys that blow away an entire volume.
And silent memory corruption can make its way to the filesystem, or applications' internal saved data structures (such as the contents of a VM's registry database).
Since we don't hear about Mac mini server users screaming about how their servers are constantly crashing, the severity and frequency of memory corruption events may not be anywhere near what you suggest.
ECC is an absolute MUST. Case closed- unless you like corrupt encryption keys that blow away an entire volume.
You might want to go tell that to all those Mac users who have full disk encryption... ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Sun, Apr 15, 2012 at 1:46 AM, Joe Greco <jgreco@ns.sol.net> wrote:
Since we don't hear about Mac mini server users screaming about how

Do you hear of lots of Mac mini server users loading up 16GB of RAM?
it's just a matter of time before your server's power supply fails, or
The difference is power supplies don't fail nearly as often as 1-bit DRAM errors occur, except when subject to harsh conditions. HDD errors are comparably rare also; and yet, the drive surface of any HDD has error correction codes, because disk surfaces are subject to similar problems.

Consumer desktop hard drives use non-ECC memory inside the drive for the cache/buffer memory, to save $$$: but it's typically only 12MB or so of memory, so it's approximately 300 days before you have a 50% chance of a single bit error caused by background radiation, and those are good odds, but nevertheless, people get corrupted files, so maybe they aren't that good.

Consider that the probability 16GB of SDRAM experiences at least one single bit error at sea level, in a given 6 hour period exceeds 66% = 1 - (1 - 1.3e-12 * 6)^(16 * 2^30 * 8). In any given 24 hour period, the probability of at least one single bit error exceeds 98%. Assuming the memory is good and functioning correctly, it's expected to see on average approximately 3 to 4 1-bit errors per day. More are frequently seen.

Now if most of this 16GB of memory is unused, you will never notice that over 30 days, 120 or so bits have been flipped from their proper value.. On the other hand, if you have some filesystem read cache for a NAS VM or database application in the affected space, and moderately important data is being damaged... well, that's just plain uncool.
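For anyone who wants to sanity-check those figures, the arithmetic can be reproduced with a quick awk one-liner. This is just a sketch of the formula quoted above; the 1.3e-12 errors per bit per hour soft-error rate is the assumption from this post, not a measured constant for any particular DIMM.

  # Reproduce the probabilities above, assuming 1.3e-12 errors/bit/hour.
  awk 'BEGIN {
    rate = 1.3e-12                 # assumed soft error rate, errors per bit per hour
    bits = 16 * 2^30 * 8           # 16 GiB expressed in bits
    p6   = 1 - (1 - rate *  6)^bits
    p24  = 1 - (1 - rate * 24)^bits
    printf "P(>=1 flipped bit in  6 h) = %.0f%%\n", p6  * 100
    printf "P(>=1 flipped bit in 24 h) = %.0f%%\n", p24 * 100
    printf "Expected flipped bits/day  = %.1f\n", rate * 24 * bits
  }'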
... JG
-- -JH
On Sun, 2012-04-15 at 10:52 -0500, Jimmy Hess wrote:
In any given 24 hour period, the probability of at least one single bit error exceeds 98%. Assuming the memory is good and functioning correctly;
It's expected to see on average approximately 3 to 4 1-bit errors per day. More are frequently seen.
Now if most of this 16GB of memory is unused, you will never notice that over 30 days, 120 or so bits have been flipped from their proper value..
Hi,

I've been operating 4 desktop PCs for nearly a year now in a room without cooling, each with the following configuration: 16 GB of RAM (4x4GB Kingston), running Linux, with about 15 VMs (KVM) on DRBD disks using more than 10 GB of RAM. Over the year I've had one dead HDD and one dead SSD (both replaced) but no data corruption or host or VM crash.

Do you have a reference to recent papers with experimental data about non-ECC memory errors? It should be fairly easy to do (write and read scan memory in a loop) and given your computations you should get bit errors in less than a day.

I remember this paper in 2003 but this was using abnormal heat:
http://www.cs.princeton.edu/~sudhakar/papers/memerr-slashdot-commentary.html

Thanks in advance,

Sincerely,
Laurent
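One way to run exactly that kind of write-and-read-scan loop from userspace is the memtester utility. A minimal sketch with arbitrary size and iteration values; note it can only exercise memory the kernel is willing to hand it, so it says nothing about RAM already held by the kernel, page cache, or running VMs.

  # Lock and repeatedly test 4 GiB of RAM from userspace.
  # Run as root so mlock() succeeds, and pick a size that leaves
  # enough memory free for the rest of the system.
  memtester 4G 10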
Laurent GUERBY wrote:
Do you have reference to recent papers with experimental data about non ECC memory errors? It should be fairly easy to do
Maybe this provides some information:

http://en.wikipedia.org/wiki/ECC_memory#Problem_background

"Work published between 2007 and 2009 showed widely varying error rates with over 7 orders of magnitude difference, ranging from 10^-10 to 10^-17 error/bit·h, roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory.[2][4][5] A very large-scale study based on Google's very large number of servers was presented at the SIGMETRICS/Performance’09 conference.[4] The actual error rate found was several orders of magnitude higher than previous small-scale or laboratory studies, with 25,000 to 70,000 errors per billion device hours per megabit (about 3–10×10^-9 error/bit·h), and more than 8% of DIMM memory modules affected by errors per year."

--
Earthquake Magnitude: 4.9
Date: Wednesday, April 18, 2012 16:21:41 UTC
Location: Solomon Islands
Latitude: -7.4630; Longitude: 156.7916
Depth: 414.30 km
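To put that seven-orders-of-magnitude spread in some perspective for the 16 GB box discussed upthread, here is a rough conversion of the quoted per-bit rates. It only multiplies the figures above by 16 GiB worth of bits; it makes no claim about what any particular DIMM actually does.

  awk 'BEGIN {
    bits = 16 * 2^30 * 8      # 16 GiB expressed in bits
    printf "1e-10 err/bit-h -> %.3g errors/hour across 16 GiB\n", 1e-10 * bits
    printf "3e-9  err/bit-h -> %.3g errors/hour across 16 GiB\n", 3e-9  * bits
    printf "1e-17 err/bit-h -> %.3g errors/hour across 16 GiB\n", 1e-17 * bits
  }'
  # prints roughly 14/hour, 410/hour, and 1.4e-06/hour (about one every 80+ years)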
On 4/18/12 12:35 PM, Jeroen van Aart wrote:
Laurent GUERBY wrote:
Do you have reference to recent papers with experimental data about non ECC memory errors? It should be fairly easy to do

Maybe this provides some information:
http://en.wikipedia.org/wiki/ECC_memory#Problem_background
"Work published between 2007 and 2009 showed widely varying error rates with over 7 orders of magnitude difference, ranging from 10−10−10−17 error/bit·h, roughly one bit error, per hour, per gigabyte of memory to one bit error, per century, per gigabyte of memory.[2][4][5] A very large-scale study based on Google's very large number of servers was presented at the SIGMETRICS/Performance’09 conference.[4] The actual error rate found was several orders of magnitude higher than previous small-scale or laboratory studies, with 25,000 to 70,000 errors per billion device hours per megabit (about 3–10×10−9 error/bit·h), and more than 8% of DIMM memory modules affected by errors per year." Dear Jeroen,
In the work that led up to RFC3309, many of the errors found on the Internet pertained to single interface bits, and not single data bits. Working at a large chip manufacturer, I saw that removing internal memory error detection to foolishly save space cost them dearly in then needing to do far more exhaustive four corner testing. Checksums used by TCP and UDP are able to detect single bit data errors, but may miss as much as 2% of single interface bit errors. It would be surprising to find memory designs lacking internal error detection logic.

Regards,
Douglas Otis
On Apr 18, 2012, at 5:55 32PM, Douglas Otis wrote:
On 4/18/12 12:35 PM, Jeroen van Aart wrote:
Laurent GUERBY wrote:
Do you have reference to recent papers with experimental data about non ECC memory errors? It should be fairly easy to do

Maybe this provides some information:
http://en.wikipedia.org/wiki/ECC_memory#Problem_background
"Work published between 2007 and 2009 showed widely varying error rates with over 7 orders of magnitude difference, ranging from 10−10−10−17 error/bit·h, roughly one bit error, per hour, per gigabyte of memory to one bit error, per century, per gigabyte of memory.[2][4][5] A very large-scale study based on Google's very large number of servers was presented at the SIGMETRICS/Performance’09 conference.[4] The actual error rate found was several orders of magnitude higher than previous small-scale or laboratory studies, with 25,000 to 70,000 errors per billion device hours per megabit (about 3–10×10−9 error/bit·h), and more than 8% of DIMM memory modules affected by errors per year." Dear Jeroen,
In the work that led up to RFC3309, many of the errors found on the Internet pertained to single interface bits, and not single data bits. Working at a large chip manufacturer, I saw that removing internal memory error detection to foolishly save space cost them dearly in then needing to do far more exhaustive four corner testing. Checksums used by TCP and UDP are able to detect single bit data errors, but may miss as much as 2% of single interface bit errors. It would be surprising to find memory designs lacking internal error detection logic.
mallet:~ smb$ head -14 doc/ietf/rfc/rfc3309.txt | sed 1,7d | sed 2,5d; date
Request for Comments: 3309 Stanford September 2002
Wed Apr 18 23:07:53 EDT 2012

We are not in a static field... 3309 is one of my favorite RFCs -- but the specific findings (errors happen more often than you think), as opposed to the general lesson (understand your threat model), may be OBE.

--Steve Bellovin, https://www.cs.columbia.edu/~smb
On 4/18/12 8:09 PM, Steven Bellovin wrote:
On Apr 18, 2012, at 5:55 32PM, Douglas Otis wrote:
Dear Jeroen,
In the work that led up to RFC3309, many of the errors found on the Internet pertained to single interface bits, and not single data bits. Working at a large chip manufacturer that removed internal memory error detection to foolishly save space, cost them dearly in then needing to do far more exhaustive four corner testing. Checksums used by TCP and UDP are able to detect single bit data errors, but may miss as much as 2% of single interface bit errors. It would be surprising to find memory designs lacking internal error detection logic.
mallet:~ smb$ head -14 doc/ietf/rfc/rfc3309.txt | sed 1,7d | sed 2,5d; date Request for Comments: 3309 Stanford September 2002
Wed Apr 18 23:07:53 EDT 2012
We are not in a static field... 3309 is one of my favorite RFCs -- but the specific findings (errors happen more often than you think), as opposed to the general lesson (understand your threat model), may be OBE.
Dear Steve,

You may be right. However back then most were also only considering random single bit errors as well. Although there was plentiful evidence for where errors might be occurring, it seems many worked hard to ignore the clues.

Reminiscent of a drunk searching for keys dropped in the dark under a light post, mathematics for random single bit errors offer easier calculations and simpler solutions. While there are indeed fewer parallel buses today, these structures still exist in memory modules and other networking components. Manufacturers confront increasingly temperamental bit storage elements, where most include internal error correction to minimize manufacturing and testing costs. Error sources are not easily ascertained with simple checksums when errors are not random.

Regards,
Douglas Otis
On Apr 19, 2012, at 6:31 43PM, Douglas Otis wrote:
On 4/18/12 8:09 PM, Steven Bellovin wrote:
On Apr 18, 2012, at 5:55 32PM, Douglas Otis wrote:
Dear Jeroen,
In the work that led up to RFC3309, many of the errors found on the Internet pertained to single interface bits, and not single data bits. Working at a large chip manufacturer that removed internal memory error detection to foolishly save space, cost them dearly in then needing to do far more exhaustive four corner testing. Checksums used by TCP and UDP are able to detect single bit data errors, but may miss as much as 2% of single interface bit errors. It would be surprising to find memory designs lacking internal error detection logic.
mallet:~ smb$ head -14 doc/ietf/rfc/rfc3309.txt | sed 1,7d | sed 2,5d; date Request for Comments: 3309 Stanford September 2002
Wed Apr 18 23:07:53 EDT 2012
We are not in a static field... 3309 is one of my favorite RFCs -- but the specific findings (errors happen more often than you think), as opposed to the general lesson (understand your threat model), may be OBE.
Dear Steve,
You may be right. However back then most were also only considering random single bit errors as well. Although there was plentiful evidence for where errors might be occurring, it seems many worked hard to ignore the clues.
Reminiscent of a drunk searching for keys dropped in the dark under a light post, mathematics for random single bit errors offer easier calculations and simpler solutions. While there are indeed fewer parallel buses today, these structures still exist in memory modules and other networking components. Manufacturers confront increasingly temperamental bit storage elements, where most include internal error correction to minimize manufacturing and testing costs. Error sources are not easily ascertained with simple checksums when errors are not random.
Yes -- that's precisely why I like that RFC so much. --Steve Bellovin, https://www.cs.columbia.edu/~smb
On Sun, Apr 15, 2012 at 10:52:51AM -0500, Jimmy Hess wrote:
Consider that the probability 16GB of SDRAM experiences at least one single bit error at sea level, in a given 6 hour period exceeds 66% = 1 - (1 - 1.3e-12 * 6)^(16 * 2^30 * 8). In any given 24 hour period, the probability of at least one single bit error exceeds 98%. Assuming the memory is good and functioning correctly;
It's expected to see on average approximately 3 to 4 1-bit errors per day. More are frequently seen.
Now if most of this 16GB of memory is unused, you will never notice that over 30 days, 120 or so bits have been flipped from their proper value..
I think that is an overestimate, at least if single-bit (corrected) ecc errors are as common as flipped bits on non-ecc ram.

Now, first, count me in the "ECC is a must, full stop." crowd. I insist on ecc for even my customer's dedicated servers, even though most of the customers don't care that much. "It's not for you, it's for me."

With ECC, if you have EDAC/bluesmoke set up correctly on a supported motherboard, you get console spew whenever you have a single-bit error. This means I can do a very simple grep on the per-box conserver logs and find all the failing ram modules I am responsible for. Without ecc, I have no real way of telling the difference between broken software and broken ram.

That said, I still think the 120 bits a month estimate is large; I believe that ECC ram should report correctable errors (assuming a correctly configured EDAC/bluesmoke module and supported chipset) about as often as non-ecc ram would get a bit flip.

In a past role, I did spend the time grepping through such a properly configured cluster, with tens of thousands of nodes, looking for failing hardware. I should have done a proper paper with statistics, but I did not. The vast majority of servers had zero correctable ecc errors, while a few had a lot, which is consistent with the theory that ECC errors are more often caused by bad ram. (Of course, all these servers were in proper cases in a proper data center, which probably gives you a fair bit of shielding.)

On my current fleet (well under 100 servers) single bit errors are so rare that if I get one, I schedule that machine for removal from production.
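For what it's worth, that "very simple grep" can look something like the sketch below. The exact message text varies by kernel version and EDAC driver, and the log path here is just a stand-in for wherever your conserver or syslog archive actually lives; the idea is only to count corrected (CE) versus uncorrected (UE) events per host.

  # Tally EDAC corrected/uncorrected error lines per logfile (one log per box).
  # Adjust the pattern and path to whatever your kernel/EDAC driver actually prints.
  grep -ciE 'EDAC .*( CE | UE )' /var/log/consoles/*.log | grep -v ':0$' | sort -t: -k2 -rn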
In a past role, I did spend the time grepping through such a properly configured cluster, with tens of thousands of nodes, looking for failing hardware. I should have done a proper paper with statistics, but I did not. The vast majority of servers had zero correctable ecc errors, while a few had a lot, which is consistent with the theory that ECC errors are more often caused by bad ram.
I'd have to say that that's been the experience here as well, ECC is great, yes, but it just doesn't seem to be something that is "absolutely vital" on an ongoing basis, as some of the other posters here have implied, to correct the constant bit errors that are(n't) showing up. Maybe I'll get bored one of these days and find some devtools to stick on one of the Macs. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
From: Joe Greco [mailto:jgreco@ns.sol.net]
I'd have to say that that's been the experience here as well, ECC is great, yes, but it just doesn't seem to be something that is "absolutely vital" on an ongoing basis, as some of the other posters here have implied, to correct the constant bit errors that are(n't) showing up.
Maybe I'll get bored one of these days and find some devtools to stick on one of the Macs.
In all the years I've been playing with high end hardware, the best sample machine I have is an SGI Origin 200 that I had in production for over ten years, with the only downtime during that time being once to add more memory, once to replace a failed drive, once to move the rack and the occasional OS upgrade (I tended to skip a 6.5.x release or two between updates, and after 6.5.30 there were of course no more). That machine was down less than 24 hours cumulative for that entire period. In that ten year span, I saw TWO ECC parity errors (both single bit correctable). On any machine that saw regular ECC errors it was a sign of failing hardware (usually, but not necessarily the memory, there are other parts in there that have to carry that data too). As much as I prefer ECC, it's not a show stopper for me if it's not there. Jamie
In a message written on Sun, Apr 15, 2012 at 09:54:14PM -0400, Luke S. Crawford wrote:
On my current fleet (well under 100 servers) single bit errors are so rare that if I get one, I schedule that machine for removal from production.
In a previous life, in a previous time, I worked at a place that had a bunch of Ciscos with parity RAM. For the time, these boxes had a lot of RAM, as they had distributed line cards each with their own processor memory.

Cisco was rather famous for these parity errors, mostly because of their stock answer: sunspots. The answer was in fact largely correct, but it's just not a great response from a vendor. They had a bunch of statistics though, collected from many of these deployed boxes. We ran the statistics, and given hundreds of routers, each with many line cards, the math told us we should have approximately 1 router every 9-10 months get one parity error from sunspots and other random activity (e.g. not a failing RAM module with hundreds of repeatable errors). This was, in fact, close to what we observed.

This experience gave me two takeaways.

First, single bit flips are rare, but when you have enough boxes rare shows up often. It's very similar to anyone with petabytes of storage, disks fail every couple of days because you have so many of them. At the same time a home user might not see a failure in their lifetime (of disk or memory).

Second though, if you're running a business, ECC is a must because the message is so bad. "This was caused by sunspots" is not a customer inspiring response, no matter how correct. "We could have prevented this by spending an extra $50 on proper RAM for your $1M box" is even worse.

Some quick looking at Newegg, 4GB DDR3 1333 ECC DIMM, $33.99. 4GB DDR3 1333 Non-ECC DIMM, $21.99. Savings, $12. (Yes, I realize the motherboard also needs some extra circuitry, I expect it's less than $1 in quantity though.)

Pretty much everyone I know values their data at more than $12 if it is lost.

--
Leo Bicknell - bicknell@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Some quick looking at Newegg, 4GB DDR3 1333 ECC DIMM, $33.99. 4GB DDR3 1333 Non-ECC DIMM, $21.99. Savings, $12. (Yes, I realize the Motherboard also needs some extra circuitry, I expect it's less than $1 in quantity though).
Pretty much everyone I know values their data at more than $12 if it is lost.
The problem is that if you want to move past the 4GB modules, things can get expensive.

Bearing in mind the subject line, consider for example the completely awesome Intel Sandy Bridge E3-1230 with a board like the Supermicro X9SCL+-F, which can be built into a low power system that idles around 45W if you're careful. Problem is, the 8GB modules tend to cost an arm and a leg;

http://www.google.com/products/catalog?q=MEM-DR380L-CL01-EU13&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a&um=1&hl=en&bav=on.2,or.r_gc.r_pw.r_qf.,cf.osb&biw=1043&bih=976&ie=UTF-8&tbm=shop&cid=8556948603121267780&sa=X&ei=HxmMT5btB8_PgAfLs5TvCQ&ved=0CD8Q8wIwAA

to outfit a machine with 32GB several months ago cost around *$400* per module, or $1600 for the machine, whereas the average cost for a 4GB module was only around $30.

So then you start looking at the less expensive options. When the average going price for 8GB non-ECC modules is between $50 and $100, then you're "only" looking at a cost premium of $1200 for ECC. For $1200, I'm willing to at least consider non-ECC.

You can infer from this message that I'm actually waiting for more reasonable ECC prices to show up; we're finally seeing somewhat more reasonable prices, but by that I mean "only" around $130/8GB.

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
Jimmy Hess wrote:
Consider that the probability 16GB of SDRAM experiences at least one single bit error at sea level, in a given 6 hour period exceeds 66% = 1 - (1 - 1.3e-12 * 6)^(16 * 2^30 * 8). In any given 24 hour period, the probability of at least one single bit error exceeds 98%. Assuming the memory is good and functioning correctly;
application in the affected space, and moderately important data is being damaged... well, that's just plain uncool
Having limited knowledge of which consumer devices support ECC memory and which don't, I was pleasantly surprised to find out the always-on IBM thinkpad I ran for years refused to work with non-ECC memory.

Greetings,
Jeroen

--
Earthquake Magnitude: 6.2
Date: Tuesday, April 17, 2012 19:03:55 UTC
Location: east of the South Sandwich Islands
Latitude: -59.0988; Longitude: -16.6928
Depth: 1.00 km
I've run a SheevaPlug at home for a few years now. I don't do anything fancy with it, but it does what I need it to do. Mostly that is file server, web server, jump-box for network testing. Also testing different linux software for this and that... (Quagga runs nicely, but won't hold a full BGP table :)

There are no moving parts in my home computer/networking gear, unless my laptop is running. That was the goal for me. I recently grabbed a couple of TPLink WR703N devices to mess around with as well, but I haven't had a chance to dig into that much.

The internet tells me that the Sheeva uses about 5 Watts of power. Along with my wireless router and DSL modem I might be under 10 Watts, but I really don't know how much power a wireless modem uses.

Oh and I also have native IPv6 on my DSL. I like to brag about that whenever I can.

Marcel
Marcel Plug wrote:
I've run a SheevaPlug at home for a few years now. I don't do anything fancy with it, but it does what I need it to do. Mostly that
I wonder how reliable the storage is in these things. Is it comparable to modern SSDs?
Oh and I also have native IPv6 on my DSL. I like to brag about that whenever I can.
So your ISP delivers IPv6 to your home? I wish mine did... -- Earthquake Magnitude: 3.2 Date: Wednesday, February 22, 2012 22:00:06 UTC Location: Central Alaska Latitude: 62.0453; Longitude: -152.4945 Depth: 10.90 km
On Wed, Feb 22, 2012 at 5:43 PM, Jeroen van Aart <jeroen@mompl.net> wrote:
I wonder how reliable the storage is in these things. Is it comparable to modern SSDs?
No issues so far. As I said though, I don't push it too hard. I don't have any specs or stats off hand, so I can't get any more detailed. I use an SD card for extra storage, also seems to be working just fine.
Oh and I also have native IPv6 on my DSL. I like to brag about that whenever I can.
So your ISP delivers IPv6 to your home? I wish mine did...
I'm pretty happy with them, I just wish my DLink would stop requiring reboots...
Marcel
Marcel Plug wrote:
No issues so far. As I said though, I don't push it too hard. I don't have any specs or stats off hand, so I can't get any more detailed.
What's the speed like?
I'm pretty happy with them, I just wish my DLink would stop requiring reboots...
I assume you connected it to a (battery backed) UPS? If not, doing so may help with that. Small fluctuations in power may cause just enough bitrot (http://www.catb.org/jargon/html/B/bit-rot.html) for it to behave funky but not fail outright.

Greetings,
Jeroen

--
Earthquake Magnitude: 3.2
Date: Wednesday, February 22, 2012 22:00:06 UTC
Location: Central Alaska
Latitude: 62.0453; Longitude: -152.4945
Depth: 10.90 km
On Wednesday, February 22, 2012 04:13:47 PM Jeroen van Aart wrote:
Any suggestions and ideas appreciated of course. :-)
www.aleutia.com: DC-powered everything, including a 12VDC LCD monitor. We're getting one of their D2 Pro dual core Atoms (they have other options for more money) for a solar powered telescope controller, and the specs look good.

There is a whole market segment out there for the 'Mini ITX' crowd with DC power, low power budgets, and reasonable processors. Solid state drives have helped immensely.
In a message written on Wed, Feb 22, 2012 at 01:13:47PM -0800, Jeroen van Aart wrote:
After reading a number of threads where people list their huge and wasteful, but undoubtedly fun (and sometimes necessary?), home setups complete with dedicated rooms and aircos I felt inclined to ask who has attempted to make a really energy efficient setup?
I've spent a fair amount of time working on energy efficiency at home. While I've had a rack at my house in the distant past, the cooling and power bill have always made me work at downsizing. Also, as time went by I became more obsessed with quiet fans, or in particular fanless designs. I hate working in a room with fan noise.

As others have pointed out, there are options these days. Finding a competent home router isn't hard, there are plenty of consumer, fanless devices that can be flashed with OpenWRT or DDWRT. I've also used a fanless ALIX PC running a unix OS, works great. Apple products like the Mini and Time Capsule are great off the shelf options for low power and fanless. Plenty of folks make low power home theater or car PC's as well.

The area where I think work needs to be done is home file servers. Most of the low power computer options assume you also want a super-small case and a disk or two. Many Atom motherboards only have a pair of SATA ports, a rare couple have four ports. There seems to be this crazy assumption that if you need 5 disks you need mondo processor, and it's just not true. I need 5 disks for space, but if the box can pump it out at 100Mbps I'm more than happy for home use. It idles 99.99% of the time.

I'd love a low powered motherboard with 6-8 SATA, and a case with perhaps 6 hot swap bays but designed for a low powered, fanless motherboard. IX Systems's FreeNAS Mini is the closest I've seen, but it tops out at 4 drives.

But what's really missing is storage management. RAID5 (and similar) require all drives to be online all the time. I'd love an intelligent file system that could spin down drives when not in use, and even for many workloads spin up only a portion of the drives. It's easy to imagine a system with a small SSD and a pair of disks. Reads spin one disk. Writes go to that disk and the SSD until there are enough, which spins up the second drive and writes them out as a proper mirror. In a home file server the drive motors, by the time you have 4-6 drives, eat most of the power. CPU's speed step down nicely, drives don't.

The cloud is great for many things, but only if you have a local copy. I don't mind serving a web site I push from home out of the cloud, if my cloud provider dies I get another and push the same data. It seems like keeping that local copy safe, secure, and fed with electricity and cooling takes way more energy (people and electricity) than it should.

--
Leo Bicknell - bicknell@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
On Thu, Feb 23, 2012 at 10:29 AM, Leo Bicknell <bicknell@ufp.org> wrote:
I'd love a low powered motherboard with 6-8 SATA, and a case with perhaps 6 hot swap bays but designed for a low powered, fanless motherboard. IX Systems's FreeNAS Mini is the closest I've seen, but it tops out at 4 drives.
Look at Supermicro's X7SPA-H. It's an Atom board with the ICH9R chipset, and 6 on-board SATA ports. That one has been out for a while, so there may be something newer available now too.
I've spent a fair amount of time working on energy efficiency at home. While I've had a rack at my house in the distant past, the cooling and power bill have always made me work at downsizing. Also, as time went by I became more obsessed with quiet fans, or in particular fanless designs. I hate working in a room with fan noise.
So, good group to ask, probably... anyone have suggestions for a low-noise, low-power GigE switch in the 24-port range ... managed, with SFP? That doesn't require constant rebooting?

I'm sure I'll get laughed at for saying we like the Dell 5324. It's a competent switch that we've had good luck with for half a decade. The RPS is noisy as heck, though, and overall consumption is something like maybe 80 watts per switch (incl RPS).
The area where I think work needs to be done is home file servers. Most of the low power computer options assume you also want a super-small case and a disk or two. Many Atom motherboards only have a pair of SATA ports, a rare couple have four ports. There seems to be this crazy assumption that if you need 5 disks you need mondo processor, and it's just not true. I need 5 disks for space, but if the box can pump it out at 100Mbps I'm more than happy for home use. It idles 99.99% of the time.
I'd love a low powered motherboard with 6-8 SATA, and a case with perhaps 6 hot swap bays but designed for a low powered, fanless motherboard. IX Systems's FreeNAS Mini is the closest I've seen, but it tops out at 4 drives.
But what's really missing is storage management. RAID5 (and similar) require all drives to be online all the time. I'd love an intelligent file system that could spin down drives when not in use, and even for many workloads spin up only a portion of the drives. It's easy to imagine a system with a small SSD and a pair of disks. Reads spin one disk. Writes go to that disk and the SSD until there are enough, which spins up the second drive and writes them out as a proper mirror. In a home file server the drive motors, by the time you have 4-6 drives, eat most of the power. CPU's speed step down nicely, drives don't.
FreeNAS can cope with ATA idle spindowns. You don't need to have all the drives spun up all the time. But it's a lot more dumb than it maybe could be. What do you consider a reasonable power budget to be?
The cloud is great for many things, but only if you have a local copy. I don't mind serving a web site I push from home out of the cloud, if my cloud provider dies I get another and push the same data. It seems like keeping that local copy safe, secure, and fed with electricty and cooling takes way more energy (people and electricty) than it should.
Quite frankly, and I'm going to get some flak for saying this I bet, I am very disappointed at how poorly the Internet community and related vendors have been at making useful software, hardware, and services that mere mortals can use that do not also marry them to some significant gotchas (or their own proprietary platforms and/or services). Part of the reason that people wish to outsource their problems is because it hasn't been made easy to handle them yourself.

Look at e-mail service as just one example. What the average user wants is to be able to get and send e-mail. Think of how much effort it is to set up an e-mail system, with spam filtering, a web frontend, and all the other little things. I've been building e-mail services on the Internet for more than a quarter of a century, and as far as I can tell, it has not gotten easier - it's gotten worse. Most people just concede defeat without even trying at this point, point their domains at Gmail, and let someone else handle it.

What about services like Flickr? We've completely failed at providing strategies for users to retain their pictures locally without putting them at risk. By that, I mean that Microsoft (for example) has made it nice and easy for users to pull their digital photos off their cameras, but has failed to impress upon users that their computers are not redundant or reliable, and then when a hard drive fails, years worth of pictures vanish in a moment. So that frustrates users, who then go to services like Flickr, upload their content there, and their data lies on a server somewhere, awaiting the day the business implodes, or gets T-Mo Sidekick'ed, or whatever.

This frustrates me, seeing as how we've had so much time in which this stuff could have been made significantly more usable and useful...

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On Thursday, February 23, 2012 04:53:06 PM Joe Greco wrote:
So, good group to ask, probably... anyone have suggestions for a low-noise, low-power GigE switch in the 24-port range ... managed, with SFP? That doesn't require constant rebooting?
I can't comment on the rebooting, but a couple of years ago I looked at the Allied Telesis AT-9000-28SP, which is steeply priced (~$1,500) but has flexible optics and is managed. And at ~35 watts it was the lowest-powered managed gigabit switch I was able to find for our solar-powered telescopes. The grant that was going to fund that fell through, so I'm still running the 90W+ Catalyst 2900XL with two 1000Base-X modules and 24 10/100 ports instead, but the AT unit looked pretty good as a pretty much direct replacement with extra bandwidth.
I like the Juniper EX2200C switches. They are only 12-port, but have 2 SFPs. They are very low power, and have no fans. However, I am still waiting (it has been several months) for them to send me the correct rack mount brackets (which are a separate purchase). -Randy -- | Randy Carpenter | Vice President - IT Services | Red Hat Certified Engineer | First Network Group, Inc. | (800)578-6381, Opt. 1
Leo Bicknell wrote:
But what's really missing is storage management. RAID5 (and similar) require all drives to be online all the time. I'd love an intelligent file system that could spin down drives when not in use, and even for many workloads spin up only a portion of the drives.
Late reply by me, but excellent points.

A combination of mdadm and hdparm on Linux should suffice to have a raid that will spin down the disks when not in use. For years I used a G4 system with an mdadm raid1 (and a separate boot disk) and hdparm configured to spin the raid disks down after 10 minutes, and it worked great.

I think in a raid10 this would only spin up the disk pair that has the data you need, but leave the rest asleep. But I haven't tried that yet.

What I'd like is a small disk enclosure that includes a whole (low power) computer capable of having Linux installed on some flash memory. Say you have an enclosure with space for 4 2.5 inch disks: install Linux, set it up as a raid10, connect through USB to your computer for backup purposes.

Greetings, Jeroen

-- Earthquake Magnitude: 3.0 Date: Friday, April 13, 2012 17:45:06 UTC Location: Central Alaska Latitude: 64.0464; Longitude: -148.9850 Depth: 1.80 km
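A minimal sketch of the setup Jeroen describes (Linux; /dev/sdb and /dev/sdc are stand-in device names, adjust to whatever your disks are actually called):

  # two-disk RAID1; marking sdc "write-mostly" makes md prefer sdb for
  # reads, so on a read-mostly box the second spindle can stay asleep
  # (roughly in the direction of what Leo was asking for)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/sdb --write-mostly /dev/sdc

  # program the drives' own standby timers: -S 120 means 120 * 5 seconds,
  # i.e. spin down after 10 minutes of inactivity
  hdparm -S 120 /dev/sdb /dev/sdc

hdparm -S only sets the timer inside the drive, so it has to be reapplied after a power cycle; on Debian the hdparm package can do that at boot via /etc/hdparm.conf (the spindown_time setting, IIRC).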
It exists. Google for "unRAID". It uses something like RAID 4 for parity data, but stores entire files on single spindles. It's designed for home media server type environments. This way, when you watch a video, only the drive you are using needs to power up. It also lets you add/remove mismatched disks with no rebuild needed. http://lime-technology.com/technology

* Better power management: not all hard drives are required to be spinning in order to access data normally; hard drives not in use may be spun down.

However, modern "green" drives don't take that much power.

On Fri, Apr 13, 2012 at 1:06 PM, Jeroen van Aart <jeroen@mompl.net> wrote:
Leo Bicknell wrote:
I'd love an intelligent file system that could spin down drives when not in use, and even for many workloads spin up only a portion of the drives.
A combination of mdadm and hdparm on Linux should suffice to have a raid that will spin down the disks when not in use. What I'd like is a small disk enclosure that includes a whole (low power) computer capable of having Linux installed on some flash memory.
PC wrote:
It exists. Google for "unRAID". It uses something like RAID 4 for parity data, but stores entire files on single spindles. It's designed for home media server type environments.
There may be a performance penalty using RAID 4, because it uses a single parity disk. Although that system looks like it can be useful for some purposes, it seems less ideal for home use. Also, I don't see how it would allow you to install your own OS. Regards, Jeroen -- Earthquake Magnitude: 4.7 Date: Friday, April 13, 2012 19:52:07 UTC Location: North Indian Ocean Latitude: 1.6006; Longitude: 91.2505 Depth: 16.80 km
Once upon a time, Jeroen van Aart <jeroen@mompl.net> said:
There may be a performance penalty using raid4, because it uses one parity disk. Although that system looks like it can be useful for some purposes it looks less ideal for home use. Also I don't see how it would allow you to install your own OS.
For read-mostly storage, there's no penalty as long as there's no disk failure. The parity drive wouldn't even spin up for reads. -- Chris Adams <cmadams@hiwaay.net> Systems and Network Administrator - HiWAAY Internet Services I don't speak for anybody but myself - that's enough trouble.
With RAID 4, the parity disk IOPS on write will rate-limit the whole LUN... No big deal on a 4-drive LUN; terror on a 15-drive LUN... George William Herbert Sent from my iPhone On Apr 14, 2012, at 8:04, Chris Adams <cmadams@hiwaay.net> wrote:
For read-mostly storage, there's no penalty as long as there's no disk failure. The parity drive wouldn't even spin up for reads.
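Rough numbers to make that concrete (assuming ~7,200 rpm disks that each manage on the order of 100 random IOPS, and ignoring any write cache):

  A small random write in RAID 4 has to read the old parity and write the
  new parity, i.e. 2 I/Os on the single parity disk per write:
    ~100 IOPS / 2  ~=  ~50 random writes/s for the whole array,
  regardless of whether there are 3 data disks behind it or 14.

On a 4-drive home media box that ceiling hardly matters; funnel 14 data disks' worth of write traffic through it and it becomes the terror described above. Sequential writes fare much better, since the parity I/O is sequential too.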
Have you looked at the HP ProLiant MicroServer? Cheers, Henk On 13-04-12 12:06, Jeroen van Aart wrote:
What I'd like is a small disk enclosure that includes a whole (low power) computer capable of having Linux installed on some flash memory. Say you have an enclosure with space for 4 2.5 inch disks: install Linux, set it up as a raid10, connect through USB to your computer for backup purposes.
On Mon, Apr 16, 2012 at 11:22:20AM -0700, Henk Hesselink wrote:
Have you looked at the HP ProLiant MicroServer?
Note that it takes up to 8 GB of ECC memory and supports ZFS via napp-it/Illumos. A hacked BIOS was required to use the 5th internal SATA port in AHCI mode; maybe that's no longer necessary with the N40L.
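A sketch of what that looks like once napp-it/Illumos (or any other ZFS-capable OS) is on the box, assuming the four drive-bay disks show up as c1t0d0 through c1t3d0 (device names will vary):

  # one raidz vdev across the four bays; survives any single drive failure
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
  zfs create tank/backups
  zpool status tank

With the 8 GB of ECC RAM the box maxes out at, that's a comfortable fit for a small home pool.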
The MicroServer is actually a nice little platform, one little bright spot in the small-home-server market. It does have some other issues, though:

1) It's not particularly low-power; I managed to build some Xeon based systems that run rings around it for only maybe a dozen watts more, and some of the NAShead guys over at one of the Linux based projects have a similar but lower-power platform for a lower price.

2) While it has a remote management card available, it's known not to work with certain things, including FreeBSD.

3) There are various problems noted with the eSATA port, such as the inability to use an external port multiplier.

On the flip side, some people have tossed one of those 4-2.5"-in-a-5.25"-bay racks into the optical bay, along with a PCI controller, to allow the addition of SSDs or whatever for NAS use. Pretty cool, and the thing *is* pretty compact. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
participants (24)
- Andrew Wentzell
- Charles Morris
- Chris Adams
- Daniel Staal
- Douglas Otis
- Eugen Leitl
- George Herbert
- george hill
- Henk Hesselink
- Jamie Bowden
- Jeroen van Aart
- Jimmy Hess
- Joe Greco
- Lamar Owen
- Laurent GUERBY
- Leigh Porter
- Leo Bicknell
- Luke S. Crawford
- Marcel Plug
- PC
- Randy Carpenter
- Stefan Bethke
- Steven Bellovin
- Valdis.Kletnieks@vt.edu