Do-it-yourself server hardware means manufacturers don't get it
by Doug Mohney
Amazon, AOL, Google and Facebook are building their own server hardware, and even low-profile shops (i.e. little fish in the sea) like Backblaze are starting to bolt together their own hardware to save money and energy. That says one thing: major hardware manufacturers have grown lazy and too enthralled by Intel calling the shots for so long. This failure to innovate is hurting everyone -- including the environment.
Facebook and Google started the trend of do-it-yourself (DIY) server hardware as the companies realized they needed to pack more computing power into a given rack footprint, along with the ability to efficiently power and cool all the servers and network hardware necessary to run a large-scale data center.
Google has kept a tight lid on how it has done so, other than occasionally dribbling out pretty pictures and little tidbits as it lowers the PUE (power usage effectiveness) of its data centers, from around 1.23 at the end of 2008 to around 1.11 at the end of 2012. The Goog considers how it networks and builds its servers to be a proprietary advantage: it builds its own servers and custom routers, and has even flirted with designing its own chips. One Google official went so far as to say that the company wants others to "spend their own blood, sweat, and tears" making the same efficiency discoveries it has made over the years.
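For context, PUE is the ratio of total facility energy to the energy actually delivered to the IT equipment, so 1.0 is the unattainable ideal. Here's a minimal sketch of what those published figures imply; the function and the 100 kWh baseline are illustrative, not Google's internal data:

```python
# PUE = total facility energy / IT equipment energy; 1.0 is the ideal.
# Everything above 1.0 is overhead: cooling, power distribution losses, etc.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# At PUE 1.23, every 100 kWh of IT load costs 123 kWh at the meter;
# at 1.11, the same load costs 111 kWh.
for p in (1.23, 1.11):
    overhead_pct = (p - 1.0) * 100
    print(f"PUE {p}: {overhead_pct:.0f} kWh of overhead per 100 kWh of IT load")
```

Going from 23 kWh of overhead down to 11 kWh cuts the non-computing energy bill roughly in half, which hints at why Google treats facility design as a competitive weapon rather than a shopping list.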
Amazon and AOL have followed Google into the DIY server world. A December 2012 Wired piece describes a whole "shadow market" of large data center builders who go directly to Asia for motherboards and source processors and memory directly from Intel. Switches and other networking gear are also being purchased direct, leaving Cisco and others out of the loop.
Facebook has, fortunately, started the Open Compute Project and has been more than willing to release and share designs for servers, storage, racks, and data center power and cooling infrastructure. The Open Compute Project has grown to host regular summits of presentations and papers, with its first international event taking place in Hong Kong in a week.
Which brings me to the little storage company that did: Backblaze. In 2009, the online backup company published its first specs for the "Storage Pod 1.0," a 67 terabyte 4U server that came in at a cost of $7,867. Backblaze Storage Pod 3.0 is now out on the street, holding up to 180 TB of storage at a lower server hardware cost before you install drives.
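To put those Pod numbers in perspective, here's a quick back-of-the-envelope calculation using only the figures quoted above (Pod 3.0's price isn't cited here, so I'll stick to the generation we have numbers for):

```python
# Cost per raw terabyte of the original Backblaze Storage Pod,
# from the published 2009 figures: $7,867 for 67 TB in 4U.
pod10_cost_usd = 7867
pod10_capacity_tb = 67

cost_per_tb = pod10_cost_usd / pod10_capacity_tb
print(f"Storage Pod 1.0: ${cost_per_tb:.2f} per raw TB")  # ~ $117.42/TB
```

Roughly $117 per raw terabyte in 2009, drives included -- a number traditional storage vendors weren't coming anywhere near at the time.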
Sooo... why aren't traditional hardware manufacturers stepping up to the plate?
Microsoft sent its shot across the bow when it rolled out the Surface tablet, in effect building its own ultrabook -- and let's face it, that's what the high-end, Windows 8-running "tablet" is, minus an attached keyboard and a larger battery. Admittedly, user devices aren't the same as server hardware -- if anything, servers are EASIER to design, since they don't require a pretty designer metal case and a touch screen.
Ultimately, someone like Amazon -- and I'd bet on Amazon, just because it is built to sell things -- will start selling its "custom" server hardware to the enterprise world. Moving more servers drives down the cost per server for its own operations, and the sales are nearly pure profit since the R&D costs are already sunk on a device the company already uses and supports.
Anyone running a data center could be a big winner if they can get their hands on high-performance "green" server hardware tested and proven by one of the big names, since they should get something cheaper to buy, more power efficient, and easier to cool than traditional hardware. The only losers are the traditional manufacturers who haven't figured out they need to step up their design game in a big way.