In August 2010, on a highway outside Beijing, China, a truck carrying construction equipment into the city slowed to a crawl behind a line of traffic. Within a few hours, cars and trucks were backed up bumper-to-bumper for 62 miles behind it, in a traffic jam that lasted 12 days. Why did this happen? Because the number of vehicles on the road (5,000 new ones each day in Beijing alone) is growing far faster than the capacity to accommodate them.
Demand is outstripping capacity on roads all over the developing world, and it’s simply not sustainable. To get everyone from point A to B without creating gridlock and destroying the environment, transportation has to become more efficient. There are a number of ways to accomplish this: more and better mass transportation; more and better roads; more fuel-efficient, cleaner vehicles; and carpooling. We’ll need a combination of those solutions to accommodate everyone who wants to travel by vehicle, and do it in a way that is sustainable for the environment.
The same principle applies to the data center. Because of the rise of cloud computing, consumerization of IT, Big Data, and mobility, demand for data transmission, processing, and storage is increasing exponentially. In order to meet that demand sustainably, we've got to make data centers more efficient in numerous ways, from how they are manufactured to how they are powered to how they are managed.
Consider the position of many traditional data centers (Data Center 1.0) today: scaling is difficult because data centers are built to accommodate potential future demand, meaning they're overbuilt for today. Because there's no way to measure, monitor, or manage operations, Data Center 1.0 runs at peak load 24/7. And because all applications are provisioned at a single, static service level, they're powered and cooled to the highest level of demand.
That’s bad for the environment because it wastes energy. And it’s bad for customers: many end up paying more than they should, thanks to guesswork in allocating costs. Finally, it’s bad for society, because with Data Center 1.0 we will never be able to fully and sustainably support the global need for data center capacity.
Data Center 2.0, in contrast, is designed for sustainability. At IO, efficiency is inherent in our definition of Data Center 2.0:
- Instead of all applications being provisioned at a single, static service level, Data Center 2.0 can allocate power, cooling, and space dynamically, based on each individual application’s needs.
- Instead of the data center operating at peak load all day, Data Center 2.0 can be optimized to reduce energy consumption and cost during non-peak demand.
- Instead of wasting time, money, energy, and resources, Data Center 2.0 provides the visibility and control needed to optimize operational and energy efficiency.
As IO Lead Sustainability Strategist Patrick Flynn explains, “A compartmentalized or modular data center means for the first time you can have different levels of redundancy in the same data center. Not only that, but we can change levels over time. The IO data center operating system allows you to dynamically change the set points of your data center in real time based on real-time feedback. So maybe we want to spin down the fans and let the temperature rise in the modules. Or maybe we want to push workloads when the utility is strained and rates are high. What we are promoting is a more intelligent outlook towards data center usage.”
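The kind of real-time, feedback-driven decision Flynn describes can be sketched in a few lines. This is a hypothetical illustration only, not IO's actual operating system or API: the `Module` type, the `plan_setpoints` function, and its thresholds are invented names and values standing in for live telemetry and policy.

```python
# Hypothetical sketch of feedback-driven set-point control for a modular
# data center. All names and thresholds here are illustrative assumptions,
# not IO's real software.
from dataclasses import dataclass

@dataclass
class Module:
    temp_c: float      # current inlet temperature reading for this module
    max_temp_c: float  # temperature ceiling for this module's service level

def plan_setpoints(module: Module, utility_rate: float, peak_rate: float) -> dict:
    """Decide fan duty and workload placement from live readings.

    If the module has thermal headroom, spin the fans down and let the
    temperature rise; if the grid rate is at or above peak, defer
    movable workloads until the utility is less strained.
    """
    headroom = module.max_temp_c - module.temp_c
    # No headroom -> full fan speed; otherwise slow fans as headroom grows,
    # never below a 20% floor (an assumed safety minimum).
    fan_duty = 1.0 if headroom <= 0 else max(0.2, 1.0 - headroom / 10.0)
    defer_workloads = utility_rate >= peak_rate
    return {"fan_duty": round(fan_duty, 2), "defer_workloads": defer_workloads}

# Example: a cool module during an expensive grid hour -> fans slow down
# and deferrable work is pushed off-peak.
print(plan_setpoints(Module(temp_c=22.0, max_temp_c=27.0),
                     utility_rate=0.30, peak_rate=0.25))
```

The point of the sketch is the loop structure, not the numbers: each module carries its own redundancy and thermal limits, and the controller re-evaluates them continuously against real-time conditions rather than provisioning everything to a single worst-case level.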
Stay tuned in coming days for the release of findings of a third-party evaluation by Arizona Public Service comparing the energy efficiency of IO’s manufactured, modular approach to the traditional raised-floor data center environment.