5 cloud computing predictions for 2016
The salty tang in the air is the scent of a sea change in IT – a tidal shift that will transform the role of IT and its practitioners over the next five years. I believe 2016 will go down as the year in which the future of IT emerges from a murky, blinding fog into the clear sunshine of the shape of things to come. And it’s safe to say that no traditional IT player – vendor and enterprise IT organization alike – will emerge unscathed from the transition.
1. Death of the enterprise public cloud
For years incumbent vendors pooh-poohed the rise of AWS, dismissing it as the sandbox of immature users – SMBs and startups. When AWS became too large to dismiss so blithely, incumbent vendors (both on-premise giants like HP and hosting providers like Terremark) proclaimed that what “real” users needed was an “enterprise cloud” provided by an organization that “understands enterprise needs.”
This led to several years of conference booths splashed with taglines like “XXX delivers the enterprise cloud” and “YYY enterprise cloud: delivering the best cloud infrastructure.” The clear implication of these taglines was that AWS does not provide a cloud robust enough for enterprise needs, but that these companies understand what enterprises require and have built infrastructure to deliver it. Part of the value proposition was commonly the use of “enterprise” gear from Cisco and EMC, in contrast to the commodity kit used by AWS and its cousins Google and Microsoft; most of the “enterprise” CSPs also used a software infrastructure based on VMware.
There’s only one problem with this: Enterprise IT organizations showed little sign of preferring these “enterprise” offerings, and the proof came in this year’s actions:
HP threw in the towel on its public cloud offering.
Verizon appeared to tiptoe toward announcing it would sell off its enterprise assets (including Terremark, its cloud offering), then tiptoed back from the precipice by stating it remained committed to its offering and was continuing to invest hundreds of millions of dollars per year. And then word came that it was moving forward with the sale.
CenturyLink announced it was interested in selling off its data center assets. That is not, by itself, a sure sign the company wants out of cloud computing, and it insisted it remains committed to the business … but then two of its top cloud execs left the company.
The ongoing Dell/EMC/VMware/Virtustream soap opera indicates, if nothing else, that VMware still has no defined go-forward strategy for public cloud computing.
No matter how these play out, it’s obvious that the bright future envisioned by these companies when they started down their “enterprise-grade” cloud path has not developed as forecast. Bluntly stated, enterprises were just fine with “commodity cloud,” thank you very much. There are now three major cloud providers – AWS, Microsoft and Google – all of which follow the commodity cloud approach: whitebox, self-designed hardware and a self-developed software control plane. This is the winning formula. Any other provider that deviates from this approach may achieve some success, but only by addressing a narrow market niche; it may look successful, but it will pale in comparison to the numbers the main cloud providers will put on the board.
I believe 2016 will deliver the coup de grace to the “enterprise cloud.” Going forward, every cloud provider with ambitions to be a major market presence will have to fight on the territory of the big players, which will pose its own set of challenges. Among them will be:
Recognizing that the commodity providers’ prices are the market expectation and customers will not pay a premium for putatively superior “enterprise” offerings.
Understanding that a provider’s cost structure has to support these lower prices, and the corollary that use of commodity gear and a “free” (i.e., non-licensed) control plane are therefore required.
Investing the capital required to compete with the large providers. To play in this league requires billions, not millions – even millions with a couple of extra zeroes tacked on.
In short, 2016 will see even more public cloud offerings shuttered, with additional growth being funneled to the dominant providers.
2. Hybrid cloud and cloud brokerage have their day in the sun
The default response of users and vendors ceding ground to the large public cloud providers is to retreat to what seems a more defensible position.
On the part of enterprises, it is to recognize that large public cloud providers will be the location of external workloads, but that many workloads will remain on-premise. This is labelled hybrid cloud, and is assumed to represent the future of enterprise workload management.
On the part of legacy vendors, it is to position themselves as cloud brokers – entities that can assist enterprises in selecting appropriate application deployment locations, help design applications so they can operate properly in a given deployment location, provide software products to aid in managing multiple deployment locations (this is an obvious counterpart to the hybrid approach described above), and/or manage cloud-based applications on behalf of the user.
All of this is true enough – as far as it goes. The biggest problem is that both hybrid cloud and cloud brokerage are vague, ambiguous terms that can be used to describe almost any possible arrangement of application deployment and infrastructure choice.
It’s obvious that enterprises will retain on-premises infrastructure for the foreseeable future. Given any use of public cloud computing, there is, ipso facto, hybrid computing. The key question: How much infrastructure will remain on-premises, and in what form? One vision proffered for hybrid cloud is an on-premises cloud environment based on, say, OpenStack, which interoperates with a similar public environment. Another is an on-premises cloud environment interoperating with a dissimilar public environment. Yet a third is an unchanged on-premises environment, say a vSphere cluster, used alongside some public environment.
Depending upon which form of hybrid cloud one envisions, the appropriate solution varies widely. And which form emerges as dominant will dictate the fortunes of both users and vendors in the future.
For what it’s worth, I think it’s unlikely that most enterprises will end up implementing an on-premises cloud environment, both for the cost reasons that Matt Asay outlines in this recent blog post, and because of the ongoing reality that implementing a “true” cloud (i.e., one that implements the NIST definition of cloud computing) is an extremely complex technical undertaking. Beginning in 2016, enterprises will increasingly recognize that they: (1) can’t afford the infrastructure and staff costs to implement a cloud; and (2) more important, want and need to direct their investment and talent toward value-delivering efforts, i.e., applications. The poor prospects for on-premises cloud products were underlined by Citrix’s announcement that it is divesting its CloudStack offering.
This year will see endless discussion of hybrid cloud computing. At the end of the year, things will remain pretty much as they stand at the beginning: increasing use of public cloud computing for most new applications, and an unchanged on-premises environment mostly used for existing applications and new applications that look a lot like legacy applications.
As to cloud brokerage, vendors will confront the reality of all distributors – there is a limited amount of margin available for “value-added” services. Moreover, vendors will come to realize that cloud brokerage is more like consulting than a standardized offering, which means it is complex and requires highly skilled talent. Anyone who has spent any time around a large consultancy realizes that it operates quite differently from a product vendor – lower margins and a constant hunt for staff utilization opportunities.
I expect that by the end of 2016, most presumptive cloud brokers will come to recognize it’s a difficult offering to bring off successfully – and one that does not afford large overhead and wasteful spending. Look to increased layoffs at the end of 2016 as incumbent vendors recognize (or have the public markets and/or private equity players force them to recognize) that their future looks quite different than their glorious past.
3. The battle of infrastructure is over, the battle of applications is about to begin
Over the holidays I had the opportunity to visit the Churchill War Rooms in London. It contains a museum about the polymath politician, but naturally focuses on the central role he played during the Second World War.
Barely a month after he became Prime Minister, he was forced to announce the humiliating withdrawal of British troops from Dunkirk, but rallied his nation when he noted that “the Battle of France is over. I expect that the Battle of Britain is about to begin” and called upon British citizens to brace themselves to their duties and persevere in maintaining their independence. His stirring speech had the effect of moving beyond recriminations for what had happened and focusing the nation on the challenges ahead.
This year will witness a similar shift in the focus for IT groups. By the end of the year, many, if not most, will recognize that their struggle to meet the speed and functionality of the public providers is fruitless. More important, they will recognize that their major task is to deliver applications, and spending time agonizing over what infrastructure to use is useless, because that debate is over. IT organizations will adopt the fastest infrastructure available now, and then turn to how to deliver a new class of applications.
One way this focus is going to play out will be a ruthless war for talent. There is a limited pool of people who know how to build so-called cloud-native applications, and the struggle to hire them will be enormous.
It will be interesting to see how the HR organizations within companies seeking to bolster their technical staff respond to this urgent need – will they push salaries up, or will they attempt to apply the historic rates paid to run-of-the-mill talent? I’m betting we’ll see lots of sticker-shock discussion in the industry as the implications of this new application focus become clear.
4. Reengineering enterprise IT
The now-cliched “software is eating the world” mantra trivializes something quite profound: the role of IT is shifting from “support the business” to “be the business.” The ongoing digitization of products and services means enterprise IT is the new factory (i.e., the manufacturing capability that turns out what the company markets and sells). And, just as many US manufacturing firms in the 1970s and 80s came under relentless pressure from new competitors – think Detroit car firms facing the onslaught of Japanese brands – so too will enterprise IT organizations find themselves encountering highly efficient technology organizations that threaten the established firms.
It’s important to remember that Japanese manufacturers arrived with more than a different form factor – small cars that used less gas than the standard American gas guzzlers. The Japanese implemented an entirely different manufacturing approach (pioneered, in a kind of bitter irony, by an American quality guru, W. Edwards Deming).
The net result was that Japanese cars were higher-quality, lower-cost and made their manufacturers boatloads of money. The entry of the Japanese car manufacturers changed the playing field for carmakers forever; from the mid-’70s on, if you wanted to be a major car company, you had to meet raised consumer expectations – or go out of business. Detroit struggled for years and finally emerged capable of building cars that matched the Japanese on quality, but by the time that occurred over 50% of their market share had vaporized, never to be recovered.
Enterprise IT groups are now being assailed by the same sort of challenge. Companies like Airbnb and Netflix have developed new kinds of organizations, processes, and tooling, and have transformed the world of IT. Instead of once- or twice-a-year releases, these new model companies release once or twice an hour.
CEOs of mainstream companies are looking at their IT organizations and presenting an ultimatum: get good or get out. This year will see IT organizations confronted by the need to transform their performance by reengineering themselves.
Expect to see organizational anguish as those CEO ultimatums get passed down to individuals and groups within IT. The pressure to improve will be relentless, and the penalty for failure will be extreme.
This year will see enterprise IT groups desperately trying to figure out how to break down organizational silos and implement DevOps. There will be a search for new tools and processes capable of accelerating application delivery. IT organizations will seek to emulate cloud-native companies and come to recognize how much work it takes to achieve continuous integration and delivery – but they’ll redouble their efforts to move toward those capabilities, driven by the awareness that there is no acceptable alternative, as the sketch below suggests.
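To make that shift concrete, here is a minimal, purely illustrative sketch of the kind of automated delivery pipeline these groups will be chasing. The stage commands and file names (pytest, docker build, kubectl apply against a hypothetical staging.yaml, the image tag myapp:candidate) are placeholders of my own choosing, not a tooling recommendation – the point is the fail-fast automation, not the specific products.

```python
"""A minimal, illustrative continuous-delivery pipeline runner (hypothetical stages)."""
import subprocess
import sys

# Each stage is a (name, command) pair; the commands shown are examples only.
STAGES = [
    ("unit tests", ["pytest", "-q"]),
    ("build container image", ["docker", "build", "-t", "myapp:candidate", "."]),
    ("deploy to staging", ["kubectl", "apply", "-f", "staging.yaml"]),
]


def run_pipeline() -> None:
    """Run every stage in order, stopping at the first failure (fail fast)."""
    for name, cmd in STAGES:
        print(f"==> {name}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"Stage '{name}' failed; halting the pipeline.")
    print("All stages passed; release candidate is ready to promote.")


if __name__ == "__main__":
    run_pipeline()
```

Trivial as it looks, getting every team to trust a pipeline like this – instead of hand-offs, tickets and manual rebuilds – is exactly the reengineering work described above.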
5. Containers emerge as the future of applications
One of the biggest roadblocks for enterprise IT groups to achieve cloud-native capabilities is the primary execution environment they use – virtual machines. That’s not to say you can’t realize a streamlined application lifecycle using virtual machines; after all, companies like Amazon, Uber and Pinterest accomplish this, thank you very much.
It’s just that most enterprise IT groups treat virtual machines like physical machines, and manage them with legacy processes that repeatedly rebuild them at every step in the application lifecycle.
The new standard for application execution environments is containers, and they will revolutionize the practices of enterprise IT. Containers are much more easily migrated among groups and facilitate continuous delivery. I expect to see containers emerge as the foundation for enterprise IT reengineering efforts, once it becomes clear that attempting to accelerate application delivery timeframes is impossible if every group begins by rebuilding virtual machines and installing application components.
There are other advantages to containers, too. For one, they are much lighter weight than virtual machines, which raises hardware utilization rates and improves economics. For another, they start much more quickly than virtual machines. Rapid instantiation supports the elastic characteristics of cloud-native applications, which commonly experience highly erratic workloads.
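To illustrate the startup-speed point, here is a rough sketch using the Docker SDK for Python – an assumption on my part, since any container tooling would do – that times a container’s full create/run/remove cycle. It assumes the `docker` package is installed, a Docker daemon is running, and the alpine image has already been pulled; on typical hardware the whole cycle finishes in a second or two, versus the minutes a freshly provisioned virtual machine often needs to boot and configure.

```python
"""Rough sketch: time a full container lifecycle with the Docker SDK for Python.

Assumes `pip install docker`, a running Docker daemon, and the alpine image already pulled.
"""
import time

import docker

client = docker.from_env()

start = time.time()
# Create, start, run a trivial command, wait for exit, and remove the container.
client.containers.run("alpine", ["true"], remove=True)
elapsed = time.time() - start

print(f"Full container lifecycle took {elapsed:.2f} seconds")
```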
This year will see not the replacement of virtual machines by containers, but the growing awareness that any future plans for enterprise applications must assume containers as the default execution environment. Attempting to achieve the kind of enormous change necessary to become cloud-native is simply impossible with typical virtual machine use.
In conclusion, 2016 is the year I expect the implications of what has been happening over the past few years to become starkly obvious. Attempts to incrementally improve enterprise IT with cloud-washed offerings from vendors who collude with staff to continue traditional practices will be exposed as the failures they are. This year will witness a recognition on the part of companies – if not their IT organizations – that incremental improvement just won’t cut it. Far more serious measures are required, and when IT groups begin grappling with how to implement them … well, expect monumental disruption. Over the course of the year, new solutions that take a root-and-branch approach to change will become preferred, and IT leaders or personnel who attempt to evade embracing them should beware.
Author: Bernard Golden
Named by Wired.com as one of the 10 most influential people in cloud computing, Bernard Golden serves as vice president of strategy for ActiveState Software, an independent provider of Cloud Foundry. He is the author of four books on virtualization and cloud computing, his most recent being Amazon Web Services for Dummies. Learn more about him at www.bernardgolden.com.
Article from: CIO.com