In his post several weeks ago, IT Delivery Revolution: Welcome to the Enterprise Cloud, Troy Rutman talked about how widespread adoption requires that the Enterprise Cloud be, among other things, open. That means built on an open reference architecture consisting of Open Compute hardware and OpenStack software. Why? The openness of Enterprise Cloud reduces overall operating expenses; avoids lock-in from large technology vendors; reduces or eliminates data ingress/egress fees; and lets customers choose their own service brokers and network carriers.
Open means options.
The Skinny on Open Architecture
The first Open Compute hardware was developed by Facebook in 2011. According to Open Compute Project: “Working out of an electronics lab in the basement of our Palo Alto, California headquarters, the team designed our first data center from the ground up; a few months later we started building it in Prineville, Oregon. The project, which started out with three people, resulted in us building our own custom-designed servers, power supplies, server racks and battery backup systems. Because we started with a clean slate, we had total control over every part of the system, from the software to the servers to the data center.” Today, access to Open Compute hardware specifications is completely open, and there are many companies manufacturing vanity-free hardware according to those specs.
OpenStack software was first developed by NASA and has since grown to be “a global software community of developers collaborating on a standard and massively scalable open source cloud operating system,” according to OpenStack.org. All of the code for OpenStack is freely available under the Apache 2.0 license. “Anyone can run it, build on it, or submit changes back to the project. We strongly believe that an open development model is the only way to foster badly-needed cloud standards, remove the fear of proprietary lock-in for cloud customers, and create a large ecosystem that spans cloud providers.”
The community of developers contributing to OpenStack includes technologists, developers, researchers, corporations, and cloud computing experts – at last count, 13,616 individuals in 131 countries. In many ways, this open cloud development model is similar to the Linux open source operating system movement, which provided both strong support and options for companies looking to move away from the dominant operating system (somewhere in Redmond, WA). The development of Linux has been called “one of the most prominent examples of free and open source software collaboration.” In much the same way, OpenStack provides support and options for companies looking to move away from lock-in to proprietary software.
What IO calls “Enterprise Cloud” (read our definition here) brings together Open Compute hardware and OpenStack software. We believe IO is the only provider that has brought to market an integrated Open Compute/OpenStack solution.
Why Choose Open?
An open Enterprise Cloud offers a number of advantages over clouds that run on custom-built, branded hardware and proprietary software. Here, we highlight five of those advantages:
1. More flexibility – In a recent article in American Banker, David Reilly, MD, Technology Infrastructure at Bank of America, explained why the world’s largest retail bank is moving toward an open cloud platform. One reason is flexibility. “[Open] networks are more flexible, at least in theory, than traditional networks. Users can make their own updates without waiting for their vendor. They can bring in high-performance technology without having to change infrastructure in which they have already invested.”
2. No lock-in – In addition to the flexibility that open Enterprise Cloud enables, it also lowers costs and reduces risk by eliminating technology lock-in. Again, David Reilly from Bank of America: “We believe that the proprietary lock-in is not the way to go. We should work with you as a partner because you have the best service, the best price, the best capability, the lowest risk, not because it’s impossible to get off you once we’re on.”
3. More visibility and control – When the software-defined data center at the foundation of the Enterprise Cloud is run by a true data center operating system, that OS – combined with the OpenStack dashboard – gives the enterprise full visibility into and control of the cloud stack from its top to its bottom. From the data center infrastructure… to the physical hardware using Open Compute… to platform services using OpenStack… the enterprise can see and control where its data lives.
Visibility and control are critical from a security perspective as well. As Troy Rutman wrote in a post last week (Cloud Security – 7 Deal Breakers for the Enterprise CSO), “Open architecture mitigates the risk that the enterprise will get locked into expensive proprietary infrastructure that reduces predictability by keeping the manufacturer in control.”
4. Greater efficiency – Efficiency is critical for those deploying cloud environments at scale. In the cloud, a significant (perhaps the most significant) cost of running a virtual machine is the cost of energy… and Open Compute infrastructure is more efficient. According to Open Compute Project: “The Open Compute server’s vanity-free design eliminates nearly 6 pounds of material per server, reducing the amount of materials that need to be produced, transported, assembled, and eventually, disassembled. ‘Designing out,’ or excluding, all non-essential features and non-relevant elements from the Open Compute servers allows for a custom chassis that minimizes the overall part count, accelerates assembly, and removes elements like a front panel, paint, and logos. Additionally, Open Compute servers can operate in a higher-temperature environment, reducing the overall cooling load required in a data center.”
Energy efficiency is important everywhere, but especially in places like Europe and parts of Asia, where governments heavily tax big energy consumers. In Singapore, for example, energy costs average $0.20-0.30 per kWh – two to three times what is often considered a high cost in the U.S. In these environments, maximizing energy efficiency is critical.
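To put those rates in perspective, here is a minimal back-of-the-envelope sketch of annual electricity cost per server at different kWh prices. The server power draw (300 W), the PUE of 1.5, and the specific rates used are illustrative assumptions, not figures from IO or Open Compute Project:

```python
def annual_energy_cost(server_watts, pue, price_per_kwh):
    """Annual electricity cost for one server, including facility
    overhead such as cooling, captured by the PUE multiplier."""
    kwh_per_year = (server_watts / 1000.0) * pue * 24 * 365
    return kwh_per_year * price_per_kwh

# Assumed 300 W server, PUE of 1.5 (both hypothetical).
us_cost = annual_energy_cost(300, 1.5, 0.10)  # ~$0.10/kWh U.S. rate
sg_cost = annual_energy_cost(300, 1.5, 0.25)  # ~$0.25/kWh Singapore rate

print(f"U.S.:      ${us_cost:,.0f} per server per year")
print(f"Singapore: ${sg_cost:,.0f} per server per year")
```

At these assumed figures, the same server costs roughly two and a half times as much to power in Singapore as in the U.S. – which is why efficiency gains (lower draw, higher operating temperatures, lower cooling load) compound quickly at scale in high-cost energy markets.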
5. Optimization – As IO CEO and Product Architect George Slessman explained in his presentation at the Gartner Data Center conference in December, “With open and standardized infrastructure across the entire system, we can generate tremendous efficiency by taking advantage of intelligence gleaned, of predictive modeling, and infusing lessons learned into the next iteration. For most cloud providers, that’s simply not possible because their data center is filled with custom architecture. It’s hard to optimize efficiency, reliability, agility, and sustainability when everything is different.”
As Henry Ford espoused in 1908, when you don’t have to figure out how to build something every time, you can instead improve the process.
At the end of the day, all of those benefits confer upon the enterprise the agility necessary to scale… based on the specific needs of applications… and without the time and expense of building-IT-yourself. And that frees people to collaborate on business needs, rather than worry about infrastructure.