In digital business, speed is critical to gaining competitive advantage. That’s why rapid software development concepts such as DevOps and Continuous Delivery are gaining popularity. This requires a shift in mindset by IT departments – so-called “cloud speed” must also be possible in the enterprise data center.
According to a recent study, increasing competition and the accelerating pace of change are the top issues affecting the way companies do business today. Correspondingly, acceleration of product development and quick execution of new ideas are among the highest priorities of the 300 business and IT leaders surveyed.
One surprising result of this survey was that the most frequently mentioned barrier to speed was a “rigid and inflexible technology infrastructure”. More than half (54%) of respondents said their systems are only somewhat capable of supporting rapid innovation.
Yet, with agile software development and flexible IT infrastructure it is now possible to react quickly to changing demands – public cloud providers demonstrate this every day. For example, it can take days or weeks to provision a new server in a traditional corporate data center, while a new AWS virtual server is available in minutes.
Some Internet companies are able to implement as many as 50 updates to their platform per day. This speed is enabled by DevOps practices with continuous updating of code (Continuous Delivery) and automated deployment.
The need for this type of business speed is driving the growth of public cloud services, with the market for public cloud Infrastructure as a Service expected to grow by more than 38 percent this year, from 16.2 billion to 22.4 billion US dollars.
But internal IT remains indispensable due to data security concerns, regulations, and internal policies – or simply because the IT organization is not ready to rationalize its own role away. So how can in-house enterprise IT achieve so-called “cloud speed” agility?
Building in flexibility
As the Oxford Economics survey pointed out, corporate infrastructure is often not designed for the continuous provisioning of new IT services. Rather, it has grown over time and in many cases is technically and organizationally divided into server, storage and network silos.
The IT industry has long since developed alternatives to these silos, with converged systems that bundle compute, storage and network resources into integrated systems. This was supposed to make provisioning and scaling infrastructure as simple as building with a Lego kit.
Indeed, today's IT infrastructures are like construction kits containing predefined bricks that can be assembled. But the flexibility needed in the era of agile software development requires a pool of resources that are stateless – and immediately ready for the operation of new workloads. Such a resource pool must allow capacity to be combined, configured and scaled automatically and in real time. This concept, which HPE pioneered, is known as “composable infrastructure”: an infrastructure which can be composed, de-composed and re-composed according to the specific and current needs of any application.
To make the operation of this type of infrastructure less complex, a high level of automation is required. In a traditional environment, an IT administrator would have to be an expert in all areas to achieve comprehensive automation: servers, storage, network, operating system, virtualization and containers – otherwise, the risk of failure would be too high. That’s why, with composable infrastructure solutions such as HPE Synergy, provisioning and scaling can be left to automation built into the infrastructure, which can choose the appropriate resources and optimal configurations, and detect whether a requested configuration is invalid or technically impossible.
Administrators can also reduce complexity by addressing resources through a unified API. Conventional servers, storage and network devices are all controlled via different APIs, with different data and error code formats. For a composable infrastructure, managing all of these would be too time-consuming and error-prone. What is needed is a high-level API, as available with HPE Synergy, that hides the interaction of the lower-level APIs from the user. This means it is possible to set up new environments with just a few lines of code.
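To make this concrete, here is a minimal Python sketch of what such high-level provisioning can look like. The appliance address, API version and field names are illustrative assumptions modeled loosely on OneView-style REST endpoints, not a definitive HPE Synergy client:

```python
import json
import urllib.request

# Hypothetical appliance address and API version -- illustrative placeholders.
ONEVIEW = "https://oneview.example.com"
API_VERSION = "800"

def profile_payload(name, template_uri):
    """JSON body for composing a new server from a template (fields illustrative)."""
    return {"name": name, "serverProfileTemplateUri": template_uri}

def post(path, body, token=None):
    """Send one POST to the unified API and return the parsed JSON response."""
    req = urllib.request.Request(
        ONEVIEW + path,
        data=json.dumps(body).encode(),
        headers={"X-API-Version": API_VERSION,
                 "Content-Type": "application/json",
                 **({"Auth": token} if token else {})},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def compose_server(user, password, name, template_uri):
    # 1. Authenticate once against the management appliance.
    token = post("/rest/login-sessions",
                 {"userName": user, "password": password})["sessionID"]
    # 2. A single further call composes compute, storage and network
    #    resources from the chosen template -- no per-device APIs needed.
    return post("/rest/server-profiles", profile_payload(name, template_uri), token)
```

The point is the shape, not the specific endpoints: one authenticated entry point and one template-driven call replace separate interactions with server, storage and network management tools.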
Cloud speed instead of two-speed IT
Public cloud providers achieve tremendous efficiency through the fully automated operation of millions of servers. However, they must invest enormous upfront effort to achieve this state and, unlike the major cloud service providers, corporate IT organizations have the added challenge of ensuring the reliable operation of their traditional IT. They live in a world of two separate modes of IT delivery, known as “bimodal IT” – running existing infrastructure while incorporating new innovations, all at speed. With the interoperability made possible by composable infrastructure and HPE Synergy, enterprise IT can now span both of these worlds, moving at cloud speed rather than two-speed.
Source: Oxford Economics, “Business at the Speed of Thought: Accelerating Value Creation”