The data center outsourcing (DCO) industry now faces direct competition from the public cloud giants: Amazon, Rackspace and Google. While not everything can go to the cloud, whatever can go to the public cloud will go to the public cloud. The tide clearly favors public cloud providers, who can offer price points and efficiencies that traditional players cannot match.
The question now becomes: what proportion of workloads currently served by the traditional asset-heavy infrastructure model and the remote infrastructure management (RIM) model is at risk of migrating to the cloud? Application development and testing are good candidates, as are production workloads that experience significant temporal variance in demand and can be augmented by the cloud. Databases are a hard sell, and latency-sensitive workloads face a higher burden of proof. Workloads that require configurations not offered by the cloud majors’ standardized services are not suitable. Software licensing in virtualized environments is sometimes complicated. Older enterprise applications are often not suited to deployment in a multi-tenant environment. And, in some cases, the enterprise wants a high level of control over a mission-critical production application. Despite these exceptions, an increasing proportion of workloads can be deployed on the public cloud.
In a like-for-like situation, the public cloud majors’ cost structure is hard to match. They enjoy tremendous economies of scale in three areas. First, fixed costs account for more than 50 percent of the cost of delivering compute and storage services, so higher utilization leads directly to a lower cost of operations. The servers and the physical infrastructure are already paid for, so the provider that achieves higher utilization can deliver a lower price point. Workloads of individual applications, and indeed of whole companies and industries, go through peaks and troughs of various periodicities, so high utilization requires workloads drawn from numerous clients, industries and geographies. Utilization is essential to building a sustainable cost advantage, and scale is key to utilization because scale brings in that diversity of demand.
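A stylized calculation makes the point. The cost figures below are assumptions chosen only to reflect a cost base in which fixed costs exceed half the total at full load; the shape of the curve, not the numbers, is what matters.

# Illustrative sketch only: assumed cost structure, not any provider's actual economics.
FIXED_COST_PER_SERVER_HOUR = 0.60     # assumed: servers, facility, depreciation
VARIABLE_COST_PER_SERVER_HOUR = 0.40  # assumed: power, bandwidth, etc. at full load

def cost_per_sold_hour(utilization: float) -> float:
    """Total cost of running the server, spread over the capacity actually sold."""
    total = FIXED_COST_PER_SERVER_HOUR + VARIABLE_COST_PER_SERVER_HOUR * utilization
    return total / utilization

for u in (0.15, 0.40, 0.80):
    print(f"utilization {u:.0%}: ${cost_per_sold_hour(u):.2f} per sold server-hour")
# With these assumed numbers, the provider running at 80% utilization delivers a
# server-hour at roughly a quarter of the unit cost of one running at 15%.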
Even variable costs such as the electricity bill have an interesting relationship with utilization. Power consumption does not scale linearly with server utilization: a server at less than 10 percent utilization still draws over 50 percent of the power it would draw at 100 percent utilization, and a server at 30 percent utilization draws about 80 percent. It therefore makes business sense to keep servers running as close to 100 percent utilization as possible, since the power cost per unit of computing decreases as utilization rises.
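A back-of-the-envelope model illustrates the effect. It interpolates between the figures cited above; the interpolation itself is an assumption, not measured data.

# Rough model of power draw versus utilization, using the data points from the text.
import numpy as np

points_u = [0.10, 0.30, 1.00]   # utilization levels cited above (plus full load)
points_p = [0.50, 0.80, 1.00]   # corresponding fraction of peak power

def power_fraction(utilization: float) -> float:
    """Fraction of peak power drawn at a given utilization (piecewise-linear guess)."""
    return float(np.interp(utilization, points_u, points_p))

for u in (0.10, 0.30, 0.60, 1.00):
    energy_per_work = power_fraction(u) / u   # relative energy per unit of computing
    print(f"utilization {u:.0%}: {power_fraction(u):.0%} of peak power, "
          f"{energy_per_work:.2f}x energy per unit of work")
# Under these assumptions, a unit of work at 10% utilization costs about five times
# the energy it costs at 100%, which is why packed servers drive the power bill down.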
Second, cloud providers with the highest scale of operations can invest in efficiencies that smaller operators cannot.
Amazon claims it has automated data center operations to the point where human resource costs account for a negligible percentage of total costs. Even if one were to discount the claim sharply, the starting point is still low: rebuttals to Amazon’s claim put labor costs at around 10 percent of total costs for cloud providers. There has also been speculation about the server-to-administrator ratio. A ratio of 100 to 1 is an industry average, and some estimates put the figure at between 1,000 and 10,000 servers per administrator for cloud providers. The majors’ cloud data centers are far more automated than the standard corporate data center.
Third, large data centers pay less for all kinds of things, including servers, bandwidth and (particularly) networking gear. The cost advantage of procuring hardware at lower prices would be banal if not for the highly specialized nature of such procurement. By most accounts, Amazon and Google acquire networking gear straight from Chinese and Taiwanese manufacturers at prices much lower than those of traditional heavyweights such as Dell, Juniper and Cisco. The cloud heavyweights are also designing their own gear, squeezing out efficiencies that are not possible with off-the-shelf equipment.
In this context, matching the cost of capital of companies with US$75 billion (Amazon) or US$60 billion (Google) in revenues presents a challenge. The large-scale cloud provider’s cost base is clearly and significantly superior to that of an enterprise private cloud data center, whether the latter is managed by the enterprise itself or by an ITO service provider. However, whether that cost base necessarily translates into lower prices is up for debate.
One could argue that the cloud is not cheap in the long run, especially if the workloads are predictable. But current prices may not be indicative of how low prices can get; the cost base discussed above could be a better indicator of the price floor. Amazon has slashed prices 42 times in the last eight years. The company’s financials also clearly demonstrate that it can live with little or no profit across its entire portfolio, from books to EC2. And AWS is estimated to account for only about US$4 billion of the firm’s US$75 billion in revenues.
Also, the unit price is not the only lever for price reduction. Cloud providers have started pricing by the minute rather than by the hour. Google’s ‘sustained use’ pricing offers automatic discounts that grow with how long a workload runs, with a sharp discount kicking in once an instance has run for, say, a full month. Sustained-use discounts blunt the argument that workloads with predictable demand profiles are more expensive on the cloud, and they also remove the need to plan and book capacity in advance.
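A minimal sketch shows how tiered, usage-based discounting of this kind works. The tier breakpoints and rates below are illustrative assumptions, not any provider’s published price list; the point is that the effective hourly rate falls automatically the longer a workload runs, with no upfront reservation.

# Sketch of sustained-use style billing. Tiers and rates are illustrative assumptions.
ILLUSTRATIVE_TIERS = [
    (0.25, 1.00),  # first 25% of the month: full list price
    (0.50, 0.80),  # 25-50% of the month: 20% off the incremental usage
    (0.75, 0.60),  # 50-75%: 40% off
    (1.00, 0.40),  # 75-100%: 60% off
]

def effective_bill(fraction_of_month_used: float, list_price_per_hour: float,
                   hours_in_month: float = 730.0) -> float:
    """Charge each incremental block of usage at its tier's (assumed) rate."""
    billed = 0.0
    prev_cap = 0.0
    for cap, rate in ILLUSTRATIVE_TIERS:
        block = max(0.0, min(fraction_of_month_used, cap) - prev_cap)
        billed += block * hours_in_month * list_price_per_hour * rate
        prev_cap = cap
    return billed

full_month = effective_bill(1.0, list_price_per_hour=0.10)
print(f"Full-month bill: ${full_month:.2f} "
      f"({full_month / (730 * 0.10):.0%} of list price)")  # 70% of list with these tiers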
Only the largest of enterprise software vendors, such as IBM, have the war chest to take on these giants. And even for behemoths of the DCO business it might be easier to buy than to build. Case in point: IBM retired its SmartCloud offering in favor of the acquired, cloud-native SoftLayer.
Service providers have to co-opt the public cloud; few can compete with it. Numerous niches remain in the technology layers above the hypervisor, and among clients that need a more managed solution than self-service. But the easy wins will go, and the status quo will not endure. A paradigm shift is certainly underway.