Greetings,

Do we really need them to be swappable at that point? The reason we swap HDDs (if we do) is because they are rotational, and mechanical things break. Do we hot-swap CPUs and memory? Do we even replace memory on a server that has gone bad, or just pull the whole thing during the periodic "dead body collection" and replace it?

Might it not be more efficient (and space-saving) to just add 20% more storage to a server than the design goal, and let the software use the extra space to keep running when an SSD fails? When the overall storage falls below tolerance, the unit is dead.

I think we will soon need to (if we aren't already) stop thinking about individual components as FRUs (field-replaceable units). The server (or rack, or container) is the FRU.

Christopher

On 9 May 2015, at 12:26, Eugeniu Patrascu wrote:
On Sat, May 9, 2015 at 9:55 PM, Barry Shein <bzs@world.std.com> wrote:
On May 9, 2015 at 00:24, charles@thefnf.org wrote:
So I just crunched the numbers. How many pies could I cram in a rack?
For another list I just estimated how many M.2 SSD modules one could cram into a 3.5" disk case. Around 40, with some room to spare (assuming heat and connection routing aren't problems); at 500 GB each, that's 20 TB in a standard 3.5" case.
It's getting weird out there.
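(A quick sanity check of that estimate, in Python. The module and case dimensions below are rough assumptions, approximately an M.2 2280 module and a standard 3.5" drive envelope, not measurements of any specific product.)

    # Back-of-the-envelope check of the "40 M.2 modules in a 3.5-inch case" figure.
    # Dimensions are assumptions, not measurements of any specific product.
    module_mm = (22, 80, 4)        # M.2 2280 module, ~4 mm with components
    case_mm = (147, 102, 26)       # approximate 3.5" drive envelope

    module_vol = module_mm[0] * module_mm[1] * module_mm[2]
    case_vol = case_mm[0] * case_mm[1] * case_mm[2]

    modules = 40
    capacity_tb = modules * 0.5    # 500 GB per module
    fill = modules * module_vol / case_vol

    print(f"{modules} modules -> {capacity_tb:.0f} TB, ~{fill:.0%} of the case volume")
    # 40 modules -> 20 TB, ~72% of the case volume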
I think the next logical step in servers would be to remove the traditional hard drive cages and put in SSD module slots that can be hot-swapped. Imagine inserting small SSD modules on the front side of the server and connecting them directly via PCIe to the motherboard. No more bottlenecks, and a software RAID of some sort would actually make a lot more sense than the current controller-based solutions.
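(A rough look at the "no more bottlenecks" point, assuming PCIe 3.0 at about 0.985 GB/s per lane, SATA 3 at about 0.6 GB/s usable per drive, and a hypothetical HBA whose x8 uplink is shared by 24 drives; the controller configuration is an illustrative assumption, not a specific product.)

    # Rough throughput comparison: drives behind one shared controller vs. per-drive PCIe.
    # Link-speed figures are approximate; the 24-drive/x8-uplink HBA is hypothetical.
    PCIE3_PER_LANE_GBS = 0.985      # PCIe 3.0, after 128b/130b encoding
    SATA3_GBS = 0.6                 # usable SATA 6 Gb/s throughput

    drives = 24
    hba_uplink_gbs = 8 * PCIE3_PER_LANE_GBS          # x8 uplink shared by all drives
    sata_per_drive = min(SATA3_GBS, hba_uplink_gbs / drives)

    nvme_per_drive = 4 * PCIE3_PER_LANE_GBS          # each module gets its own x4 link

    print(f"Shared HBA:  ~{sata_per_drive:.2f} GB/s per drive under full load")
    print(f"Direct PCIe: ~{nvme_per_drive:.2f} GB/s per drive")
    # Shared HBA:  ~0.33 GB/s per drive under full load
    # Direct PCIe: ~3.94 GB/s per drive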
--
李柯睿
Avt tace, avt loqvere meliora silentio
Check my PGP key here: http://www.asgaard.org/cdl/cdl.asc
Current vCard here: http://www.asgaard.org/cdl/cdl.vcf
keybase: https://keybase.io/liljenstolpe
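(A minimal sketch of the "add 20% more storage and fail in place" idea from the top of the thread, assuming the software simply stops using failed modules and the unit is retired once the surviving capacity drops below the design goal; all figures are illustrative.)

    # Fail-in-place capacity model: the chassis ships with spare capacity and is only
    # retired when the surviving SSDs can no longer cover the design goal.
    # All figures below are illustrative assumptions.
    DESIGN_GOAL_TB = 20.0
    OVERPROVISION = 0.20            # 20% extra capacity at build time
    MODULE_TB = 0.5                 # 500 GB per module

    total_modules = int(round(DESIGN_GOAL_TB * (1 + OVERPROVISION) / MODULE_TB))  # 48

    def still_serviceable(failed_modules: int) -> bool:
        """True while the surviving modules still meet the design goal."""
        surviving_tb = (total_modules - failed_modules) * MODULE_TB
        return surviving_tb >= DESIGN_GOAL_TB

    for failed in range(0, 12):
        if not still_serviceable(failed):
            print(f"Unit is 'dead' after {failed} failed modules "
                  f"({total_modules} fitted, design goal {DESIGN_GOAL_TB} TB)")
            break
    # Unit is 'dead' after 9 failed modules (48 fitted, design goal 20.0 TB)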