This blog post is drawn from a roundtable conversation I conducted with Geoff McGrath, Managing Director of McLaren Applied Technologies, at a recent CIO forum in San Francisco. The purpose of the conversation was to introduce IO.Applied Intelligence, bringing into focus why McLaren, a manufacturer of race cars, and IO have partnered to make data centers more efficient.
McLaren Applied Technologies is about optimizing performance
McLaren has been making race cars since 1989. Over time, they’ve become very good specifically at designing the fastest cars in the world, and more generally at gathering data, turning it into information, building models, and simulating how different variables will affect the outcome. Through McLaren Applied Technologies (MAT), they’ve taken that understanding of how to use data to optimize performance, and are applying it in a range of other industries.
For example, in the United Kingdom, MAT is working with Olympic athletes to simulate and then optimize their performance. Geoff explained: “We’ve started instrumenting athletes to build models, for example, of how a cyclist, his bicycle, and the road surface interact. We use what we learn from the models to fine-tune design of the bicycle and the athlete’s strategy – to optimize the system. We will be the first to use predictive analytics to anticipate performance of athletes before the race.”
Companies that understand how to leverage data to optimize performance, and can apply that understanding to their own data, have a significant competitive advantage.
IO is uniquely equipped to optimize performance in the data center
IO has partnered with McLaren Applied Technologies to share best practices in measurement, analysis, modeling, and testing to optimize performance. What is it about Data Center 2.0 that makes it ripe for this sort of analytics? It’s partly about the standardized factory production of modular data centers, and partly about the instrumentation that gathers the data in the first place.
Almost everything we consume today has been mass-produced, in a factory, according to tested, proven specifications. Why? Because standardization allows manufacturers to make things better, faster, and more cost-effectively. It’s a concept that Henry Ford understood in 1908, a concept that made the automobile (in any color you choose, as long as it’s black) accessible to millions.
Traditionally, that concept has not been applied to the data center, which is typically built using custom building-centric design and construction practices that are hugely inefficient. At IO, we’ve taken the data center from a custom construction project to a standardized factory-built product. That yields huge process gains. Instead of having to reinvent the wheel every time we build a data center, we can spend our efforts on new ways to make the data center better (continuous innovation).
And because turning data into information into insight requires gathering as much data as possible, we’ve outfitted those modular data centers with nearly 100,000 sensors (to date). Because the modules are standardized, we can benchmark them against each other to test how different environments affect the same infrastructure. That will allow us to produce better modules and help our clients improve their data center performance. Across those 100,000 sensors and 1.4 million operating hours, we have 30 billion rows of data center operations data – just waiting for us to tap it. (That’s where the fun really begins!)
Tapping that treasure trove of data is exactly what we’re doing now, taking lessons learned and best practices from McLaren Applied Technologies. Our first challenge: data center capacity. Boosting utilization in the data center is a great opportunity for efficiency. Energy overhead is fairly fixed, so if I can produce more useful work, that overhead is spread across more output – in other words, higher utilization equals greater energy efficiency.
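To make the arithmetic concrete, here is a simplified sketch of the fixed-overhead effect. The figures are hypothetical, chosen for illustration, not IO measurements:

```python
# Illustrative only: a fixed energy overhead spread over useful IT work.
# The capacity and overhead numbers below are hypothetical.

FIXED_OVERHEAD_KW = 200.0   # cooling, power conversion, etc. (assumed constant)
IT_CAPACITY_KW = 1000.0     # maximum useful IT load the module supports

def overhead_per_useful_kw(utilization: float) -> float:
    """Overhead power attributed to each kW of useful IT work."""
    useful_kw = IT_CAPACITY_KW * utilization
    return FIXED_OVERHEAD_KW / useful_kw

for u in (0.2, 0.5, 0.9):
    print(f"{u:.0%} utilization -> "
          f"{overhead_per_useful_kw(u):.2f} kW overhead per useful kW")
# At 20% utilization every useful kW carries 1.00 kW of overhead;
# at 90% utilization it carries only about 0.22 kW.
```

The overhead term stays constant while the denominator grows, which is exactly why pushing utilization up improves energy efficiency.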
Many customers today don’t push the limits of data center capacity – in part because their data centers are designed for worst-case scenarios, and in part because of a lack of visibility into their data center operations. IO.OS offers that visibility, and IO.Applied Intelligence will create the analytic models to provide decision-making support.
For example, we’re modeling power consumption at individual branches, using machine learning algorithms to understand that behavior. Then we can provide decision-making support, telling the customer, for example, “You can continue growing into this module” or “In three months there’s a 10% chance that this branch will experience a breach of capacity. But you have an adjacent branch with idle servers where you can reallocate your IT gear.”
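A minimal sketch of what such a forecast might look like – the readings, capacity limit, linear-trend model, and normal-error assumption are all hypothetical illustrations, not IO.Applied Intelligence’s actual models:

```python
import math

# Hypothetical monthly peak power draw (kW) for one branch circuit.
readings = [42.0, 44.5, 46.0, 48.5, 50.0, 52.5]
BRANCH_CAPACITY_KW = 60.0

def linear_fit(ys):
    """Ordinary least-squares fit y = a + b*x over x = 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

a, b = linear_fit(readings)

# Residual standard deviation around the trend line.
resid = [y - (a + b * x) for x, y in enumerate(readings)]
sigma = math.sqrt(sum(r * r for r in resid) / (len(resid) - 2))

# Extrapolate three months past the last observation.
forecast = a + b * (len(readings) - 1 + 3)

# P(load > capacity), assuming normally distributed forecast error.
z = (BRANCH_CAPACITY_KW - forecast) / sigma
p_breach = 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

print(f"forecast: {forecast:.1f} kW, P(breach) = {p_breach:.1%}")
```

A production model would use far richer features and more data than a straight-line fit, but the shape of the output is the same: a forecast plus a probability of breaching capacity that a customer can act on.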
In many industries, competitive edge relies on getting information faster than the competition and on doing more with less. For the company whose data center is continually innovating and learning from itself, that data-driven innovation cycle becomes a strategic advantage. So for the CIO, using the wealth of data within your data center to develop insight about your operations (and your customers) and turning that insight into action is an ideal way to take IT from a liability to an invaluable asset.