“Despite the concerns and complexities surrounding the adoption of Big Data, doing nothing is not an option for most institutions. Institutions that leverage Big Data to gain insights into their operations, customers, and market opportunities can position themselves for ongoing success. But transforming Big Data into actionable insights requires sophisticated analytics tools.” – PwC
If data is the lifeblood of successful companies, the data centre is the heart. As my colleague wrote last year on this blog, “The ability to gather, analyze, and benefit from the treasure trove of data that’s generated every second depends on the digital infrastructure in which that data lives. And the systems that support the digital infrastructure. Big data depends on the data center.”
But what factors determine whether a data centre can support a business trying to turn big data into actionable insights? There are, broadly, five factors.
1. Scalability

Ninety percent of the data that exists in the world today has been created in the last two years alone.
In this kind of environment – in which the demand for compute capacity is growing exponentially – successful firms need to be able to scale, fast.
But in many data centres, scalability is limited by space, power, or both. To hedge against that risk, some organisations over-provision: they buy more data centre space than they need today in anticipation of tomorrow’s demand. That strategy mitigates risk, but it also wastes both energy and money.
In a truly scalable data centre, firms don’t have to over-provision because they can scale, vertically and horizontally, anytime:
- The scalable data centre is a mixed-density environment (the modular data centre supports low-, medium- and high-densities), so organisations can increase and decrease density within the same modular footprint. A module can support densities 5-8 times greater than a traditional raised-floor environment, so there’s 5-8 times more room to scale.
- The scalable data centre provider can deploy new capacity quickly – in months, not years – enabling horizontal scale (i.e., expanding the data centre footprint).
In addition, the scalable data centre enables capacity planning with integrated data centre infrastructure management (DCIM) software, backed by robust analytics, which helps organisations predict when they will need additional capacity.
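The kind of capacity planning described above can be illustrated with a short sketch. The function and figures below are hypothetical, not IO’s actual DCIM API or data: it fits a simple linear trend to historical power-utilisation readings and projects when the trend crosses the module’s capacity.

```python
# Hypothetical sketch of DCIM-style capacity forecasting: fit a
# least-squares linear trend to monthly utilisation readings and
# project when the trend crosses a capacity threshold. All names and
# numbers are illustrative assumptions, not IO's schema.

def months_until_capacity(history_kw, capacity_kw):
    """Return the number of months until the fitted trend reaches
    capacity_kw, or None if utilisation is flat or falling."""
    n = len(history_kw)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history_kw) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history_kw))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    # Solve intercept + slope * t = capacity_kw, measured from the
    # most recent reading (t = n - 1 is "now").
    t_cross = (capacity_kw - intercept) / slope
    return max(0.0, t_cross - (n - 1))

# Example: utilisation growing ~20 kW/month toward a 1,000 kW module.
readings = [700, 720, 741, 760, 779, 801]
print(round(months_until_capacity(readings, 1000), 1))  # ~10 months out
```

A production DCIM analytics layer would of course use richer models (seasonality, per-rack granularity), but the planning question it answers is the same: how long before we need more capacity?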
Learn more about the scalable data centre in the Solution Guide: How the IO data center solution delivers truly scalable capacity.
2. Security

The number of security incidents detected by respondents to PwC’s Global State of Information Security® Survey climbed to 42.8 million in 2014, an increase of 48 percent over 2013. That’s the equivalent of 117,339 incoming attacks per day, every day, at an average annual financial loss of $2.7 million.
To counter that security risk, the secure data centre delivers higher levels of physical and logical security. Compartmentalised steel “vaults” (i.e., data centre modules) provide unmatched physical protection and segregation. Data centre infrastructure management (DCIM) software enables organisations to proactively detect and mitigate threats.
3. Reliability

Given how reliant organisations are on data, and on the data centre, it is no surprise that downtime is incredibly costly. A study by the Ponemon Institute found that the average cost of a data centre downtime incident was $690,200 (£446,000), or $7,900 (£5,100) per minute. The cause? Most often, it’s human error.
The reliable data centre reduces downtime risk with proven best-in-class operators who rely on advanced data centre technology and DCIM for visibility and control. 451 Research makes the point well: “It is clear that the most adaptable, economically sustainable and best-managed data centres will be those where managers have accurate and meaningful information about their data centre’s assets, resource use and status.”
At IO, for example, each data centre module generates 700-1,000 different data points that allow for monitoring and control of operations including ambient conditions, power use and power quality, and auxiliary systems such as security and life safety. Elsewhere in the data centre, the DCIM monitors the chiller, generator, switchboards and uninterruptible power supplies (UPSs).
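To make the monitoring concrete, here is a minimal sketch of the kind of threshold check a DCIM layer might run over a module’s telemetry. The metric names and ranges are assumptions for illustration (the temperature band loosely follows ASHRAE-style recommendations), not IO’s actual data points.

```python
# Illustrative sketch (not IO's schema): check a module's telemetry
# readings against operating thresholds, flagging out-of-range values
# for ambient conditions and power, as a DCIM alerting layer might.

THRESHOLDS = {
    "ambient_temp_c": (18.0, 27.0),  # ASHRAE-style recommended band
    "humidity_pct":   (20.0, 80.0),
    "ups_load_pct":   (0.0, 90.0),
}

def flag_alerts(readings):
    """Return (metric, value) pairs that fall outside their range."""
    alerts = []
    for metric, value in readings.items():
        lo, hi = THRESHOLDS.get(metric, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            alerts.append((metric, value))
    return alerts

sample = {"ambient_temp_c": 29.5, "humidity_pct": 45.0, "ups_load_pct": 62.0}
print(flag_alerts(sample))  # only the over-temperature reading is flagged
```

With 700-1,000 data points per module, the real value of DCIM lies in running checks like this continuously, at scale, and correlating the results across modules.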
4. Sustainability

For organisations looking to achieve environmental sustainability and cost savings, the modular data centre is proven effective. In a yearlong, side-by-side comparison of a traditional raised-floor data centre and a modular data centre in Phoenix, the modular data centre was found to have a PUE (power usage effectiveness) of 1.41, significantly better than the 1.73 PUE of the traditional data centre. That PUE difference yielded a 19 percent reduction in energy costs, which translated to an annual savings of over $220,000 (£142,200) per MW of IT power load. (Read about that study here.)
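The arithmetic behind those figures is easy to check. PUE is total facility energy divided by IT equipment energy, so at equal IT load, facility energy scales with PUE. The sketch below reproduces the comparison; the per-kWh price is back-solved from the quoted savings as an assumption, not taken from the study, and the study’s 19 percent cost figure will also reflect tariff structure and rounding.

```python
# PUE (power usage effectiveness) = total facility energy / IT energy.
# At equal IT load, facility energy scales with PUE, so the saving from
# moving 1.73 -> 1.41 can be computed directly. The electricity price
# is a back-solved assumption that yields the quoted $220k/MW-year.

PUE_MODULAR, PUE_TRADITIONAL = 1.41, 1.73
IT_LOAD_KW = 1000          # 1 MW of IT load
HOURS_PER_YEAR = 8760

saving_fraction = (PUE_TRADITIONAL - PUE_MODULAR) / PUE_TRADITIONAL
annual_kwh_saved = (PUE_TRADITIONAL - PUE_MODULAR) * IT_LOAD_KW * HOURS_PER_YEAR
implied_price = 220_000 / annual_kwh_saved  # USD/kWh giving $220k/yr

print(f"energy reduction: {saving_fraction:.1%}")        # 18.5%
print(f"kWh saved per MW-year: {annual_kwh_saved:,.0f}")
print(f"implied price: ${implied_price:.3f}/kWh")
```

The 18.5 percent energy reduction is consistent with the roughly 19 percent cost reduction cited above, and the implied electricity price of about $0.078/kWh is a plausible commercial rate, which suggests the quoted savings figure hangs together.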
5. Global footprint
Business needs are pushing organisations outward whilst IT needs pull them inward. For example, pursuit of growth is driving enterprises into farther-flung, lesser-developed regions and countries. At the same time, data sovereignty, latency considerations, and regulatory requirements are driving a need to keep data close to its home.
In response, organisations could colocate with a local data centre provider in every region. But more providers means more complexity. And for organisations without a DCIM tool to provide visibility into data centre operations across the entire footprint, managing infrastructure around the world becomes much more complex as well.
The solution, then, is a provider with data centres around the world – in essence, a one-stop shop that allows organisations to expand their data centre footprints globally without the added complexity.
Facing a data deluge, doing nothing is not an option for nearly any firm. That means figuring out how to harness the power of Big Data to generate new insights, more quickly. And it means finding a data centre provider to support it. One that delivers scalability, security, reliability, and sustainability – around the world.
Nigel Stevens is Managing Director for IO in the United Kingdom. Learn more about the UK data centre.