Patrick Flynn, IO’s head of Applied Intelligence, talks about using performance data to improve the data center’s ability to support software, application-centric infrastructure, and the software-defined data center.
Interviewer: Let’s talk about discovery. What are you discovering?

Flynn: We’ve got an amazing amount of reliable data to explore. I’m reminded of a conversation I had recently with one of my former MIT professors. I was describing to him what we’re up to here, and he said that sociologists have studied the Medici family because there are extensive records of how the family evolved and of the personality types within it. When you examine those records, you find deep truths about how families interact, change, and evolve, and how power shifts within the family.
We’ve got a similar situation here. We’ve got a good, stable, standardized set of data around data centers. We’ve already got the equivalent of over two centuries of machine run-time data. So we’ve got a chance to discover deep truths that are then applicable universally across data centers.
What we’re discovering is how to streamline the information system, the system that transforms energy into information delivered through software applications to users and customers, anywhere you find it.
How we support those software applications with digital infrastructure is critically important. We are discovering how to more elegantly, more efficiently support software.
Interviewer: So what?

Flynn: The “so what” is most easily translated to dollars. The way software is supported with digital infrastructure is inefficient almost everywhere you look. Part of that is miscommunication, misaligned incentives, and a lack of visibility that leads to intense risk aversion. As we introduce visibility through the IO.OS data center operating system, and as we standardize and then control performance through IO.Anywhere modules, we can mine the data that comes from the two of those and mitigate risk. We can start to understand end-to-end performance of the information system and really strip out all of the inefficiency you see when multiple people with multiple incentives are trying to solve a problem.
Interviewer: One of the things we talk about at IO is application-centric infrastructure. Talk about that.

Flynn: In a manufacturing sense, this is like switching from a push model to a pull model. The application is the demand. The application is what triggers the need for data services or data center capacity upstream, and as application demand grows, capacity needs to grow to meet it. That’s fairly easy, right? But as application demands change during the work day and the work week, the infrastructure needs to change as well. I think that’s where things get really interesting.
The ability to change infrastructure to meet changing demands, and then to connect demand to supply in an elegant way, is the idea behind application-centric infrastructure, or applications driving the data center.
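The pull model Flynn describes can be sketched as a simple capacity-follows-demand loop. This is an illustrative sketch only; the function names, per-unit capacity, and headroom figures are invented for the example and are not part of any IO product API.

```python
import math

# Hypothetical pull-model capacity calculation: supply is derived from
# observed application demand rather than pre-provisioned ("pushed").
def required_capacity(demand_rps: float, capacity_per_unit: float = 100.0,
                      headroom: float = 0.2) -> int:
    """Units of infrastructure needed for a given application demand
    (requests per second), with a safety headroom, rounded up."""
    target = demand_rps * (1 + headroom) / capacity_per_unit
    return max(1, math.ceil(target))

# As application demand changes through the work day, supply follows it:
for hour, demand in [(9, 250.0), (13, 900.0), (20, 120.0)]:
    units = required_capacity(demand)
    print(f"hour {hour:02d}: demand={demand} rps -> {units} unit(s)")
```

The point of the sketch is the direction of causality: the application’s demand signal drives the infrastructure allocation, and the allocation is recomputed as demand changes during the day.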
Interviewer: Aren’t lots of people doing that, though?

Flynn: Well, the industry is moving towards a software-defined data center, but nobody else is coming at that challenge from the data center upwards.
What supports the upper levels of the IT stack needs to be capable of providing real-time information, of being abstracted, and then of being optimized. For those trying to create it from the top down, it’s almost like building castles on sand.
We have the bedrock. We’ve got the data center. Not only do we have a large footprint, but we’ve got a standardized footprint, a footprint that’s equipped with sensors. Sensors providing the right information to the layers above to optimize the data center.
Interviewer: What do you mean by abstraction?

Flynn: Take a car-sharing program. Not everybody needs to own a car, as we see with Uber, Lyft, and Zipcar. I don’t own a car. I rent one, or I take a taxi, all arranged through my iPhone. This is an abstraction of cars into an end value called “transportation,” and software enables the pooling of car infrastructure and the delivery of transportation to me. When people can all leverage the same infrastructure for their transportation needs, you get immense compression. We probably need only one one-thousandth or less of the vehicles actually produced in a given year.
Abstraction comes when you think less about physical goods and more about the value output, so in the car example you’re not thinking about a car. You’re thinking about transportation. When you think about transportation as the goal, then cars become less important. You start to think about the end state, what we’re really trying to do and how to elegantly meet that need.
Similarly, when you’re not thinking about “I need a data center,” but instead “I need data infrastructure,” or “I need data services, I need somebody to store, compute, provide the necessary switching or transportation for my data,” when that’s your end goal, then all of a sudden you’ve got the same level of opportunity to provide an elegant solution to the customer.
What we can do through the cloud, for example, is have many users leveraging the same physical infrastructure. Because they won’t all be using it at the same time, and because we can allocate demand across the infrastructure intelligently, we have the flexibility to really create a better service, a platform for data needs, in a more elegant way.
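The pooling effect Flynn is describing can be shown with a toy calculation: tenants whose demand peaks at different times can share less total capacity than the sum of their individual peaks. The tenant names and demand numbers below are invented purely for illustration.

```python
# Hypothetical hourly demand series for three tenants (arbitrary units).
tenants = {
    "batch_jobs":   [2, 1, 1, 8, 9, 3],   # peaks overnight
    "web_frontend": [9, 8, 7, 2, 1, 6],   # peaks during business hours
    "analytics":    [1, 2, 6, 5, 2, 1],
}

# Dedicated model: provision each tenant for its own peak demand.
dedicated = sum(max(series) for series in tenants.values())

# Pooled model: provision the shared pool for its combined peak,
# which is lower because the individual peaks don't coincide.
pooled = max(sum(hour) for hour in zip(*tenants.values()))

print(f"dedicated capacity: {dedicated}")  # -> dedicated capacity: 24
print(f"pooled capacity:    {pooled}")     # -> pooled capacity:    15
```

The gap between the two numbers is the “compression” from the car-sharing analogy: non-coincident peaks let shared infrastructure serve the same demand with substantially less capacity.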
IO.Applied Intelligence discussion panel