IO Blog
As in residential and commercial real estate, location plays a crucial role in creating data center value. Beyond the usual considerations that go into selecting a data center location—such as land costs, access to utilities, the local political climate and an area’s vulnerability to floods, fire, earthquakes and other types of disasters—lies the far from trivial matter of network requirements. Given the importance of fast, responsive and reliable data communications in today’s business world, network service is often the overriding consideration when choosing a data center site.

Network service means different things to different enterprises. An organization that's primarily interested in giving end users access to enterprise resource planning (ERP) applications, for instance, can generally locate its data center almost anywhere, since latency and other time-critical network factors usually aren't a major concern. The same isn't true, however, for enterprises that stake their commercial existence on applications demanding lightning-fast response times, such as the tools used by financial traders. These applications typically require network latency of 5 milliseconds or less, a level that generally isn't achievable if the traders are located 2,500 miles or more from the application and its data, since round-trip propagation through optical fiber alone adds roughly 16 microseconds per mile.
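To see why distance alone breaks a tight latency budget, here is a minimal back-of-the-envelope sketch. It assumes light travels through optical fiber at roughly two-thirds of its vacuum speed (about 124,000 miles per second) and ignores routing, switching and serialization delays, which would only make the totals worse:

```python
# Best-case round-trip propagation delay over fiber.
# Assumption: signal speed in fiber is ~2/3 the vacuum speed of light.
FIBER_SPEED_MILES_PER_SEC = 186_000 * 2 / 3  # ~124,000 mi/s

def round_trip_latency_ms(distance_miles: float) -> float:
    """Round-trip propagation delay in milliseconds, ignoring all equipment delay."""
    one_way_sec = distance_miles / FIBER_SPEED_MILES_PER_SEC
    return one_way_sec * 2 * 1000

print(round(round_trip_latency_ms(2500), 1))  # → 40.3 (far above a 5 ms budget)
print(round(round_trip_latency_ms(300), 1))   # → 4.8 (just inside the budget)
```

In other words, physics rules out a 5-millisecond response from 2,500 miles away no matter how well the network is tuned; the only real fix is to shorten the distance.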
Traditionally, latency concerns have been addressed, rarely to complete satisfaction, with network optimization practices such as more efficient routing techniques. But a more practical solution, one that takes advantage of new data center design techniques, is for enterprises to simply open a secondary site located closer to their latency-dependent end users, such as employees, customers or important business partners.
A New Approach
In the days when nearly all enterprises operated a single data center located inside a central headquarters building or another nearby brick-and-mortar structure, the concept of location flexibility barely existed. Since a single data center had to serve many masters, it didn't make sense to relocate an entire facility just to help some users, particularly if others would be negatively affected by the move.
Today, thanks to the rising popularity of data center outsourcing, and particularly modular data center offerings like i/o ANYWHERE™, enterprises can easily deploy one or more secondary data centers and, by placing the modular structures in strategic locations, significantly improve latency for end users located within the service radius of the new sites. When approached as a service, a secondary data center can be rolled out quickly (often within a matter of weeks) at a cost that's usually only a fraction of the expense of building a traditional data center. Best of all, the modular secondary facility can be placed almost anywhere, allowing the enterprise to maximize its coverage radius.
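When choosing where to place a secondary site, the service-radius question can be inverted: given a round-trip latency budget, how far away can end users be? The sketch below uses the same assumed fiber speed (~2/3 the speed of light, about 124,000 mi/s); the `overhead_ms` allowance for routing and processing delay is a hypothetical parameter, not a figure from this article:

```python
# Rough siting aid: largest one-way distance whose round-trip
# propagation still fits a given latency budget.
# Assumption: fiber signal speed is ~2/3 the vacuum speed of light.
FIBER_SPEED_MILES_PER_SEC = 186_000 * 2 / 3  # ~124,000 mi/s

def max_service_radius_miles(budget_ms: float, overhead_ms: float = 1.0) -> float:
    """One-way distance fitting the budget, after a hypothetical fixed overhead."""
    propagation_budget_sec = (budget_ms - overhead_ms) / 1000
    return propagation_budget_sec / 2 * FIBER_SPEED_MILES_PER_SEC

print(round(max_service_radius_miles(5.0)))  # → 248 miles for a 5 ms budget
```

Under these assumptions, a 5-millisecond budget confines users to a radius of a couple of hundred miles, which is why trading firms and similar enterprises place secondary facilities so close to their users.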
Finance industry players aren't the only organizations that can benefit from secondary data centers. Any enterprise with a business-critical application that requires low latency can benefit from the approach. Organizations that have built their businesses on the Web—including content distributors, social media networks, travel reservation services, consumer financial services and online retailers—all need to provide fast, responsive and reliable service to users who may be scattered across a continent or perhaps even around the entire world. Sagging response times and stuttering streams caused by latency raise the likelihood of frustrated users and lost revenue. A secondary data center effectively and affordably solves these problems.
When planning a secondary data center, it’s important to ensure that the new facility includes a full complement of integrated network systems that provide access to an array of telecommunications carriers. i/o ANYWHERE™, for example, includes direct fiber backbone connectivity to AboveNet, AT&T, Level3, Qwest and TeliaSonera. It also makes sense to turn to a provider that uses enterprise-grade networking equipment that provides fault-tolerant network connectivity. Other features to look for include an integrated power/cooling infrastructure, remote support, management services, easy expandability, an ability to support any type of hardware and a 100% uptime service level agreement (SLA).
A Final Point
Years ago, most businesses were able to meet their IT needs with just a single mainframe computer. Yet over time it was discovered that a decentralized approach using multiple network servers not only created greater efficiencies, but opened doors to using computer technology in new and innovative ways.
Today, the data center is undergoing a similar revolution. For a growing number of enterprises, having multiple data centers located close to end users not only creates a foundation for more responsive services, but enables the adoption of profitable new business practices and revenue streams.
