AOL talks up its Micro Data Center, by Doug Mohney
Last week, AOL's CTO blogged about the company's data center refresh, spinning it as "AOL's Data Center Independence Day." I have no doubt smaller is better, but when you start gushing about how your new stuff is "disruptive" and "game changing," it's time to take a deep breath.
CTO Michael Manos posted almost 2,000 words on July 5 on the Nibiru project -- 1,500 more words than most people will read in the Twitter/Facebook era. Nibiru, named after a mythical planet that allegedly crosses into our solar system to wreak havoc and bring great change, is a whole bundle of organizational and technology changes meant to revamp the data center.
Manos says the primary item on his wish list, the Micro Data Center (MDC), arrived on July 4 -- an interesting claim, since nearly everyone in the United States with any sort of life was on holiday, including UPS and FedEx if I'm not mistaken.
Anyway, the MDC is designed to be resilient, close to the end user, cost effective, and housed in a very small box, maybe two cabinets/racks big. You should be able to put the MDC anywhere on the planet regardless of temperature and humidity conditions; support, maintain and administer it remotely; fit it within the "power envelope" of a normal office building; plug it into AOL's cloud; and it should deliver "extremely dense" compute capabilities.
With the MDC, AOL is no longer tied to traditional data center facilities or colocation markets, according to Manos. That's not to say AOL won't continue to use the more traditional places to put servers, but it now has the choice to go elsewhere. The MDC will allow AOL to have five times the amount of total compute capability (TCC) at less than 10 percent of the cost and physical footprint, so future growth and replacement of legacy gear should in theory drive the company to a very low operational cost structure for delivering products and services.
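Taken at face value, those two ratios compound into a much larger efficiency claim than either number suggests on its own. A quick back-of-the-envelope check (my arithmetic on the published ratios; AOL gave no absolute figures):

```python
# Back-of-the-envelope check of the claimed MDC gains.
# Values are normalized to the legacy baseline; AOL published only
# the ratios ("5x the TCC", "less than 10% of the cost/footprint").
legacy_compute = 1.0   # legacy total compute capability (baseline)
legacy_cost = 1.0      # legacy cost and physical footprint (baseline)

mdc_compute = 5.0 * legacy_compute   # "five times the TCC"
mdc_cost = 0.10 * legacy_cost        # "less than 10 percent of the cost"

# Compute delivered per unit of cost/footprint improves by at least:
improvement = (mdc_compute / mdc_cost) / (legacy_compute / legacy_cost)
print(improvement)  # 50.0
```

In other words, if both claims hold, that's at least a 50x jump in compute per dollar (and per square foot) -- exactly the kind of number that deserves the skepticism applied elsewhere in this piece.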
Since you've got a plug-and-play box you can drop anywhere, you can use it to deliver services locally, customize the platform to meet privacy and regulatory concerns on a country-by-country basis, and quickly scale into new markets by simply turning up another MDC. It also has the potential to let AOL bypass traditional CDNs (and their expenses) for certain types of content.
In theory, the biggest overhead in the MDC scheme is property management. Remembering where you put all your boxes around the world might be a bit of a challenge, especially if you're being a little professionally paranoid and distributing boxes among AOL-owned office properties, traditional data centers and colocation facilities as part of a wider strategy of pushing as much compute power as possible to the "edge" of the network.
At some point, it becomes more effective to cluster one or more MDCs even if you are treating them as "lights out" facilities not requiring remote-hands services. A smaller footprint means you can power an MDC from green energy sources, but it is unlikely every MDC is going to end up in a rooftop shed with solar cells and a windmill.
I'd like to see more data and specs on the MDC. Is it an intermediate step toward the Facebook idea of deconstructing the data center server/rack/switch model? Or just a repackaging of a bunch of servers into a customized lockable cabinet with a couple of GigE ports and some blinky status lights?