RHCA: RHCA + Renewals (Part Two)

The IT industry continues to grow and evolve as new innovations are developed. The Red Hat® Certified Architect (RHCA) program, the pinnacle of our certification portfolio, is continuously improving to meet these demands and make sure that our certified professionals exceed the standards of today’s market.

This article is the second part of a two-part series on Red Hat Certified Architects:


Part One: RHCA: What does it mean to be an RHCA?
Part Two: RHCA: RHCA + Renewals

A Short History of RHCA

Two years ago in 2014, we made some changes to the program in order to accommodate the numerous additions to our product and certification portfolios. We moved from a prescriptive approach — earn these specific credentials to earn RHCA — to a more flexible one in which individuals could choose the specializations that could provide the most benefit to their organizations and their own careers. Additionally, we introduced RHCA Level II, which we discussed in the first post of this series.

Last year, in 2015, we expanded the reach of RHCA further to include enterprise application developers skilled in the technologies of our expanding middleware portfolio. We made some other changes as well, including to our renewal policies, which are covered below.

This year, we introduced a new concentration: DevOps. To earn this concentration, you must have skills in container and configuration management and automation, covering OpenShift by Red Hat, Ansible, and Puppet.

Overview of certification renewals

All Red Hat certifications are considered current for three years from the date they are earned. Earning certifications beyond Red Hat Certified System Administrator (RHCSA), Red Hat Certified Engineer (RHCE®), and Red Hat Certified JBoss Developer (RHCJD) extends the period those certifications are considered current.

Changes to RHCA renewals

When we made changes to the RHCA program in 2015, we altered what it takes to renew an RHCA certification. Now, we require an RHCA to maintain at least five current credentials beyond RHCE or RHCJD, and they do not need to be the same credentials you used when you first earned RHCA.

As this was a major change in the certification program, we extended the period for which all certifications earned prior to October 1, 2014 would be considered current. We made them current until October 1, 2017 — even if they had been earned in 2005.

What that means

To maintain RHCA status, you must have at least five eligible credentials beyond RHCE or RHCJD that are current. If you have earned more than five eligible credentials beyond RHCE or RHCJD, you are considered a higher-level RHCA. For example, someone who has earned six beyond RHCE or RHCJD is an RHCA Level II.

To remain an RHCA, you do not need to renew your RHCE or RHCJD certifications. In fact, the period for which these are considered current gets extended with each additional eligible credential you earn.

Example Scenario

Perhaps you are an RHCA Level II and the oldest of the six credentials you earned past RHCE or RHCJD becomes non-current. You would then be an RHCA Level I. If another credential became non-current, you would no longer have RHCA status. You do not need to start completely over to regain your RHCA status, however. Earning a new eligible credential, or renewing the one that became non-current in our example, would restore your RHCA status until the next oldest credential was up for renewal.
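
To make the arithmetic concrete, here is a minimal, purely illustrative Python sketch of how RHCA level tracks the number of current eligible credentials held beyond RHCE or RHCJD. The function name and output format are assumptions for illustration only; this is not an official Red Hat tool.

    # Illustrative only: five current credentials = RHCA (Level I),
    # six = Level II, seven = Level III, and so on.
    def rhca_status(current_credentials: int) -> str:
        if current_credentials < 5:
            return "No RHCA status"
        return "RHCA Level {}".format(current_credentials - 4)

    # The scenario above: start at Level II with six current credentials,
    # then let two of them lapse one after the other.
    for count in (6, 5, 4):
        print(count, "current credentials ->", rhca_status(count))
    # 6 current credentials -> RHCA Level 2
    # 5 current credentials -> RHCA Level 1
    # 4 current credentials -> No RHCA status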

I hope I have shed some light on the renewal policies for RHCA and what you need to do to maintain RHCA status. Earning RHCA is a steep hill to climb. Keeping it means continuing to climb and committing to the climb in a world that can sometimes seem like a race to the bottom.

Becoming an RHCA is a significant investment of time and effort, as is maintaining it. If you are pursuing an RHCA certification, Red Hat Learning Subscription is a convenient, cost-effective way to get the training you need. If you have already earned RHCA, you are eligible for a 50 percent discount.

_____

Connect with Red Hat Consulting, Training, Certification

Learn more about Red Hat Consulting
Learn more about Red Hat Training
Learn more about Red Hat Certification
Subscribe to the Training Newsletter
Follow Red Hat Training on Twitter
Like Red Hat Training on Facebook
Watch Red Hat Training videos on YouTube
Follow Red Hat Certified Professionals on LinkedIn

RHCA: What does it mean to be an RHCA? (Part One)

Red Hat Certification has worked to make sure that IT professionals are certified to show they are proficient in their chosen field and to ensure employers hire the best candidates to solve problems and complete IT projects effectively and efficiently.

This article is the first part of a two-part series on Red Hat Certified Architects:


Part One: RHCA: What does it mean to be an RHCA?
Part Two: RHCA: RHCA + Renewals

What is an RHCA?

A Red Hat® Certified Architect (RHCA) is a Red Hat Certified Engineer (RHCE®) or Red Hat Certified JBoss Developer (RHCJD) who attained Red Hat’s highest level of certification by passing and keeping current five additional hands-on exams on Red Hat technologies.

IT professionals are able to select the certifications they wish to focus on, or choose from a suggested concentration in DevOps, Cloud, Datacenter, Enterprise application development, or Application platform.

What are RHCA Levels?

Once upon a time, there were just 5 specific exams one could take beyond RHCE in order to earn RHCA. Over the years, we have added new exams and certifications to our catalog. RHCAs were among those earning these new certifications but earning them provided no incremental benefit or recognition to their RHCA status.

Two years ago, we changed the policy so that IT professionals no longer had to take five specific exams. Additionally, we decided to recognize the RHCAs that went above and beyond the five-exam requirement, and we introduced RHCA levels.

RHCA Levels show that an IT professional has passed more than five exams within the RHCA path, which demonstrates that the individual has an even broader understanding of the Red Hat product portfolio. For example, an IT professional who has passed seven exams would be an RHCA Level III.

An accomplishment that recognizes the most elite IT professionals, becoming an RHCA is the highest level of certification Red Hat offers.

What it takes to be an RHCA

As we discussed in part one of this series, a Red Hat® Certified Architect (RHCA) is a Red Hat Certified Engineer (RHCE®) or Red Hat Certified JBoss Developer (RHCJD) who attained Red Hat’s highest level of certification by passing and keeping current five additional hands-on exams on Red Hat technologies.

What that means

RHCE and RHCJD cover significantly different job roles and skill sets, which means that the certifications an IT professional can apply toward earning an RHCA depend on whether you are an RHCE or an RHCJD. That said, there are a handful of certifications and exams that can be earned by both.

In some cases, especially for the DevOps concentration, certifications may be applied toward RHCA by either an RHCE or an RHCJD.

Eligible certifications for RHCE and RHCJD to apply towards RHCA

RHCE

  • Red Hat Certified System Administrator in Red Hat OpenStack
  • Red Hat Certificate of Expertise in Hybrid Cloud Management
  • Red Hat Certificate of Expertise in Application Server Management
  • Red Hat Certificate of Expertise in Container Management
  • Red Hat Certified JBoss Administrator (RHCJA)
  • Red Hat Certificate of Expertise in Containerized Application Development
  • Red Hat Certificate of Expertise in Platform-as-a-Service
  • Red Hat Certified Engineer in Red Hat OpenStack
  • Red Hat Certified Virtualization Administrator
  • Red Hat Certificate of Expertise in Red Hat Enterprise Linux Diagnostics and Troubleshooting
  • Red Hat Certificate of Expertise in Deployment and Systems Management
  • Red Hat Certificate of Expertise in Configuration Management with Puppet
  • Red Hat Certificate of Expertise in Server Hardening
  • Red Hat Certificate of Expertise in High Availability Clustering
  • Red Hat Certificate of Expertise in Performance Tuning
  • Red Hat Certificate of Expertise in Data Virtualization
  • Red Hat Certificate of Expertise in Security: Network Services (Retired)
  • Red Hat Certificate of Expertise in Directory Services and Authentication (Retired)
  • Red Hat Certificate of Expertise in SELinux Policy Administration

RHCJD

  • Red Hat Certificate of Expertise in Application Server Management
  • Red Hat Certified JBoss Administrator (RHCJA)
  • Red Hat Certificate of Expertise in Container Management
  • Red Hat Certificate of Expertise in Containerized Application Development
  • Red Hat Certificate of Expertise in Platform-as-a-Service
  • Red Hat Certificate of Expertise in Persistence
  • Red Hat Certificate of Expertise in Configuration Management with Puppet
  • Red Hat Certificate of Expertise in Camel Development
  • Red Hat Certificate of Expertise in Data Virtualization
  • Red Hat Certificate of Expertise in Fast-Cache Application Development
  • Red Hat Certificate of Expertise in Business Rules
  • Red Hat Certificate of Expertise in Business Process Design

Both RHCE and RHCJD

  • Red Hat Certificate of Expertise in Application Server Management
  • Red Hat Certified JBoss Administrator (RHCJA)
  • Red Hat Certificate of Expertise in Container Management
  • Red Hat Certificate of Expertise in Containerized Application Development
  • Red Hat Certificate of Expertise in Platform-as-a-Service
  • Red Hat Certificate of Expertise in Persistence
  • Red Hat Certificate of Expertise in Camel Development
  • Red Hat Certificate of Expertise in Configuration Management with Puppet
  • Red Hat Certificate of Expertise in Data Virtualization

 

Visit the RHCA page for more details.

5 Private Cloud Providers Compared

Choosing the right private cloud solution is not an easy task. Here’s how private cloud options from Microsoft, VMware, OpenStack, CloudStack and Platform9 compare in terms of management, compatibility, complexity and security.

Getting the Most From Cloud Computing

Cloud Computing is one of the most hyped and publicized trends in IT. That’s because, done effectively, a cloud-based, ‘virtualized’ infrastructure can offer advantages over traditional datacenter buildouts in the areas of performance, scalability, and even security. As they develop their strategies for implementing cloud computing, many organizations are facing a choice: to deploy a private cloud or leverage a public cloud. So what are the differences between the two, and which is right for you?

Public Cloud vs Private Cloud

Generally speaking, a public cloud consists of a service or set of services that are purchased by a business or organization and delivered via the Internet by a third-party provider. These services use storage capacity and processor power that is not owned by the business itself. Instead, this capacity (in the form of servers and datacenters) can be owned either by the primary vendor (e.g. an online storage/backup company) or by a cloud infrastructure vendor.

A private cloud is essentially an extension of an enterprise’s traditional datacenter that is optimized to provide storage capacity and processor power for a variety of functions. “Private” refers more to the fact that this type of platform is a non-shared resource than to any security advantage.

Private cloud software has to work with the virtualization layer that provides the resources, exposing a self-service management interface for users alongside a management interface for the IT administrator. All of this has to be accomplished at a reasonable price and with adequate support if you plan on making your private cloud part of your core business strategy. As with any software investment today, and especially with virtualization and cloud software, the question of going open source, with the pros and cons that come with it, also has to be considered. Below, you’ll find a mix of proprietary as well as open source private cloud options and what each has to offer.

Microsoft Private Cloud

While Microsoft was a bit late to the virtualization and cloud arena, the software giant has spent considerable resources catching up and leveraging experience to avoid some of the mistakes other vendors in this space have made. Microsoft’s private cloud software is part of the System Center 2012 R2 offering. System Center incorporates several products under one umbrella including Virtual Machine Manager, Data Protection Manager, Endpoint protection and Operations Manager.

  • Microsoft’s System Center will support and centrally manage Windows 2012 Hyper-V hosts along with third-party hypervisors from Citrix and VMware; KVM is a notable exception that is not included at this time. While System Center is hypervisor-agnostic, the current lack of active third-party network providers, including Cisco, is worth noting and may be limiting for some customers.
  • Microsoft’s private cloud offering focuses on the application life cycle combined with automation and monitoring in one package. This, coupled with a straightforward ability to create self-service portals based on mature IIS features, helps with the installation process. Leveraging the .NET framework does allow for additional extensions and troubleshooting.
  • Security can be leveraged off of existing Active Directory resources without the complexity of setting up single sign-on (SSO). However, this can open up additional security risks based on existing Windows Server vulnerabilities.
  • While the pieces in the package may not go feature for feature when compared to other offerings, the single SKU does make licensing and purchasing easier.

VMware vCloud Suite Private Cloud

VMware is one of the oldest players in the virtualization market and has an established record of performance and reliability. VMware’s products have the ability to scale to some of the most demanding workloads. VMware has incorporated several products into its private cloud offering, allowing customers to choose à la carte which features they would like to use. VMware’s vCloud Suite comes in three different versions (Standard, Advanced and Enterprise), with each incorporating additional products and features.

  • VMware vCloud Suite does support other hypervisors, including Hyper-V and KVM; however, extensive single-pane-of-glass management is not advertised, as the favored hypervisor is ESXi.
  • The Advanced edition of vCloud Suite adds vRealize Business for vSphere, and the Enterprise version includes vCenter Site Recovery Manager on top of the vRealize Business Suite. vRealize Business offers cost, usage and metering capabilities, while vCenter Site Recovery Manager provides policy-based disaster recovery.
  • The vCloud Suite is packaged, and requires license upgrades, as a single entity; however, it is a collection of separate products, which can lead to confusion around support and installation when compared to other offerings.
  • Frequent product name changes lead to confusion on what the products do, which ones are needed, upgradable or even owned in some cases.
  • vCloud has the ability to integrate with VMware’s software defined networking offering NSX, however this is an additional licensing fee with all versions of vCloud.
  • Enterprise-class security is established at the hypervisor and network layers, but integrating users with Active Directory requires single sign-on (SSO) and the complexity that comes with it.
  • Support for VMware Virtual SAN and OpenStack allow for flexibility in both storage and third party cloud integration tools.

OpenStack Private Cloud

OpenStack is one of the most popular open source cloud operating options today. It has the ability to manage compute, storage and networking and to deploy them through an easy-to-use, but somewhat feature-limited, dashboard. Unlike VMware and Microsoft, OpenStack does not have its own hypervisor.

  • OpenStack can be used with VMware’s ESXi, Microsoft’s Hyper-V, or Citrix Xen; however, it is most often used with KVM, which is also open source. As more vendors include OpenStack APIs, adoption should continue to grow.
  • OpenStack supports a wide range of software and hardware due to its open source nature. This allows for flexible architecture that can support both legacy and new hardware platforms.
  • While the capital cost of the software is zero (since it’s open source), the soft cost of training your IT staff will have to be accounted for.
  • Similar to other open source offerings, a community support model is in place rather than paid maintenance; this may require trained staff for immediate support needs.
  • With so many community developers and quality feedback from users, installation and operation are simpler than might be expected. OpenStack is focused directly on the cloud platform and does not carry additional pieces, which reduces complexity.
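
To give a concrete sense of the compute workflow that the dashboard exposes, here is a rough sketch using the openstacksdk Python library. It assumes a cloud named "my-cloud" is defined in clouds.yaml, and the image, flavor and network names are placeholders rather than defaults.

    # Sketch only: boot a small instance with openstacksdk.
    # Cloud, image, flavor and network names are assumed placeholders.
    import openstack

    conn = openstack.connect(cloud="my-cloud")

    image = conn.compute.find_image("cirros")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    server = conn.compute.create_server(
        name="demo-instance",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)  # "ACTIVE" once the instance is up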

Platform9 OpenStack Private Cloud

Platform9 is a private cloud provider based on OpenStack that does not reside onsite. Platform9 uses an OpenStack-as-a-service model to allow organizations to manage their private clouds from an external cloud. While this might seem a bit unusual for a private cloud solution, the key point to remember is that your data still resides inside your data center; it is only the management piece that is external to your company.

  • Platform9 currently supports the KVM hypervisor with VMware ESXi in beta testing. Currently there are no plans for Hyper-V or Citrix Xen listed.
  • Being a hosted solution there is no software to install or upgrade. Platform9 handles all patches and upgrades to the OpenStack core.
  • Simplified user portal, image management and infrastructure discovery allow for reduced administrative overhead while leveraging the scalability and reliability of OpenStack.
  • Policy-based deployments with infrastructure monitoring bring many of the popular public cloud features into the private cloud space with limited complexity.
  • No capital costs to get started; the solution is priced as a monthly service fee.

Apache CloudStack Private Cloud

Another contender in the open source private cloud space is Apache’s CloudStack. The CloudStack solution supports a wide range of hypervisors from VMware, Microsoft, Citrix and KVM. CloudStack is offered as a complete solution minus the hypervisor, allowing companies to have a robust management interface, usage metering and image deployment. Storage tiering and Active Directory are also included with limited software defined networking.

  • Unlike OpenStack, which focuses on the core cloud aspect, CloudStack is looking to provide everything under the single open source umbrella.
  • CloudStack is an economical approach to the private cloud with many popular features included. One concern is the quality of those features, along with the support that exists with open source software.
  • It includes a Java-based management agent, which may cause some concerns regarding performance, security and version splintering.

Private Cloud Feature Comparison

Microsoft Private Cloud

  • Compatibility: Single pane of glass for Hyper-V, VMware ESXi and Citrix Xen servers, although the depth of management support is not clear.
  • Complexity: A single product makes it easy (but expensive) to purchase. Because System Center contains so many related products, installation and upgrades can be complex.
  • Security: Relies on the existing Active Directory / IIS frameworks, allowing for ease of operations.
  • Summary: Microsoft’s private cloud with System Center is a good option for businesses looking to get started with a private cloud. Costs are packaged, but System Center itself is a very complex product and a bit heavy in overhead when compared to some of the other offerings.

VMware vCloud Suite Private Cloud

  • Compatibility: Ability to manage the top three hypervisors, but limited information on to what level of detail.
  • Complexity: Suite variants with add-ons can make purchasing complex. The products are separate features, making installation and upgrades more complex.
  • Security: Mature security in the core products; integration with Active Directory requires single sign-on.
  • Summary: VMware’s vCloud Suite is a collection of best-of-breed technologies that allow for enterprise-class performance, reliability and scalability. The à la carte approach combined with frequent name changes does increase the complexity of licensing, purchasing, installing and maintaining, even as the majority of products are bundled in a loose collection.

OpenStack Private Cloud

  • Compatibility: Primary focus is on KVM, but VMware’s hypervisor is gaining support. Limited attention on Citrix Xen and Microsoft Hyper-V.
  • Complexity: Fairly easy to get started with, thanks to community-created documentation.
  • Security: Security is based on security domains and trusts. A third-party LDAP server can handle authentication; support for multi-factor authentication is available as well.
  • Summary: OpenStack is a private cloud offering that can scale from SMBs to larger enterprises. While there is concern over support with open source software, a robust community does exist. Given OpenStack’s open source nature, several vendors have used it as their cloud layer, adding additional features and functions.

Platform9 OpenStack Private Cloud

  • Compatibility: Official support for KVM; VMware hypervisor support is in beta. No published plans for Citrix Xen or Microsoft Hyper-V at this time.
  • Complexity: Quick and easy to get started with, as there is no software to install.
  • Security: Management is done offsite, which could present security concerns for some organizations.
  • Summary: Platform9 is the cloud platform for those who want many of the features of a cloud without the management complexity. While you won’t find some of the advanced features that other vendors include, such as software-defined networking, Platform9 will appeal to those looking to get started without the lead time and costs of many of the other solutions. The biggest hurdle with Platform9 today is the lack of full VMware and Hyper-V support.

Apache CloudStack Private Cloud

  • Compatibility: Supports VMware ESXi, Citrix Xen, Microsoft Hyper-V and KVM. No single pane of glass for detailed management.
  • Complexity: Multiple products under a single open source umbrella may present challenges for installation and configuration; this, coupled with primary support being forum-based, is cause for concern.
  • Security: While support for Active Directory exists, along with security zones, the inclusion of Java-based management agents may present additional security concerns.
  • Summary: CloudStack is a complete open source private cloud offering that can scale up to enterprise needs. However, with limited customer references, it may be more suited to smaller or classroom deployments until it receives more exposure in the enterprise space.

5 cloud computing predictions for 2016

The salty tang in the air is the fragrance of a sea change in IT – a tidal shift that will change the role of IT and IT participants over the next five years. I believe that 2016 will go down as the year in which the future of IT appears out of a murky, dank, blinding fog into the clear sunshine of the shape of things to come. And it’s safe to say that no traditional IT practitioners – vendor and enterprise IT organization alike – will emerge unscathed from this transition.

1. Death of the enterprise public cloud

For years incumbent vendors pooh-poohed the rise of AWS, dismissing it as the sandbox of immature users – SMBs and startups. When AWS became too large to dismiss so blithely, incumbent vendors (both on-premise giants like HP and hosting providers like Terremark) proclaimed that what “real” users needed was an “enterprise cloud” provided by an organization that “understands enterprise needs.”

This led to several years of conference booths splashed with taglines like “XXX delivers the enterprise cloud” and “YYY enterprise cloud: delivering the best cloud infrastructure.” The clear implication of these taglines was that AWS does not provide a really robust cloud sufficient for enterprise needs, but that these companies understand what enterprises require and have built infrastructure to deliver it. Part of the value proposition was commonly the use of “enterprise” gear from Cisco and EMC, in contrast to the use of commodity kit by AWS and its cousins like Google and Microsoft; most of the “enterprise” CSPs also used a software infrastructure based on VMware.

There’s only one problem with this: Enterprise IT organizations showed little sign of preferring these “enterprise” offerings, and the proof is in this year’s actions:

  • HP threw in the towel on its public cloud offering.
  • Verizon appeared to tiptoe toward announcing it would sell off its enterprise assets (including Terremark, its cloud offering), then tiptoed back from the precipice by stating it remained committed to its offering and was continuing to invest hundreds of millions of dollars per year. And then word came that it was moving forward with the sale.
  • CenturyLink announced it was interested in selling off its data center assets. This is not a sure sign that the company was looking to get out of cloud computing, and it insisted that it remains committed to cloud computing … then two of its top cloud execs left the company.
  • The ongoing Dell/EMC/VMware/Virtustream soap opera indicates, if nothing else, that VMware still has no defined go-forward strategy for public cloud computing.

No matter how these play out, it’s obvious that the bright future envisioned by these companies when they started their “enterprise-grade” cloud path has not developed as forecast. Bluntly stated, it’s clear that enterprises were just fine with “commodity cloud,” thank you very much. It’s become obvious that there are three major cloud providers – AWS, Microsoft and Google – all of which follow the commodity cloud approach – whitebox, self-designed hardware and a self-developed software control plane. This is the winning formula. Any other provider that attempts to deviate from this approach may achieve some success, but it will be by addressing a narrow market niche; it may look successful, but it will pale in comparison to the numbers the main cloud providers will put up on the board.

I believe 2016 will deliver the coup de grace to the “enterprise cloud.” Going forward, every cloud provider with ambitions to be a major market presence will have to fight on the territory of the big players, which will pose its own set of challenges. Among them will be:

  • Recognizing that the commodity providers’ prices are the market expectation and customers will not pay a premium for putatively superior “enterprise” offerings.
  • Understanding that a provider’s cost structure has to support these lower prices, and the corollary that use of commodity gear and a “free” (i.e., non-licensed) control plane are therefore required.
  • Investing the capital required to compete with the large providers. To play in this league requires billions, not millions, even if there are a couple of zeroes in front of the millions.

In short, 2016 will see even more public cloud offerings shuttered, with additional growth being funneled to the dominant providers.

2. Hybrid cloud and cloud brokerage have their day in the sun

The default response for users and vendors giving way to the large public cloud providers is to retreat to what seems to be a more defensible position.

On the part of enterprises, it is to recognize that large public cloud providers will be the location of external workloads, but that many workloads will remain on-premise. This is labelled hybrid cloud, and is assumed to represent the future of enterprise workload management.

On the part of legacy vendors, it is to position themselves as cloud brokers – entities that can assist enterprises in selecting appropriate application deployment locations, help design applications so they can operate properly in a given deployment location, provide software products to aid in managing multiple deployment locations (this is an obvious counterpart to the hybrid approach described above), and/or manage cloud-based applications on behalf of the user.

All of this is true enough – as far as it goes. The biggest problem is that both hybrid cloud and cloud brokerage are vague, ambiguous terms that can be used to describe almost any possible arrangement of application deployment and infrastructure choice.

It’s obvious that enterprises will retain on-premises infrastructure for the foreseeable future. Given the use of public cloud computing there is, ipso facto, hybrid computing. The key question: How much infrastructure will remain on-premises and in what form? One vision proffered for hybrid cloud is an on-premises cloud environment based on, say, OpenStack, which interoperates with a similar public environment. Another vision proffered is an on-premises cloud environment interoperating with a dissimilar public environment. Yet a third is an unchanged on-premises environment, say a vSphere cluster, along with use of some public environment.

Depending upon what form of hybrid cloud one envisions, the appropriate solution varies widely. And which form of hybrid cloud ultimately emerges as dominant will dictate the fortunes of both users and vendors in the future.

For what it’s worth, I think it’s unlikely that most enterprises will end up implementing an on-premises cloud environment, both for the cost reasons that Matt Asay outlines in this recent blog post, and because of the ongoing reality that implementing a “true” cloud (i.e., one that implements the NIST definition of cloud computing) is an extremely complex technical undertaking. Beginning in 2016, enterprises will increasingly recognize that they: (1) can’t afford the infrastructure and staff costs to implement a cloud; and (2) more important, want and need to direct their investment and talent toward value-delivering efforts, i.e., applications. The poor prospects of on-premises cloud products were underlined by Citrix’s announcement that it is divesting its CloudStack offering.

This year will see endless discussion of hybrid cloud computing. At the end of the year, things will remain pretty much as they stand at the beginning: increasing use of public cloud computing for most new applications, and an unchanged on-premises environment mostly used for existing applications and new applications that look a lot like legacy applications.

As to cloud brokerage, vendors will confront the reality of all distributors – there is a limited amount of margin available for “value added” services. Moreover, vendors will also come to realize that cloud brokerage is more like consulting and less like standardized offerings, which means it is complex and requires highly skilled talent. Anyone who has spent any time around a large consultancy realizes that they operate quite differently than a product vendor – lower margins and a constant hunt for staff utilization opportunities.

I expect that by the end of 2016, most presumptive cloud brokers will come to recognize it’s a difficult offering to bring off successfully – and one that does not afford large overhead and wasteful spending. Look to increased layoffs at the end of 2016 as incumbent vendors recognize (or have the public markets and/or private equity players force them to recognize) that their future looks quite different than their glorious past.

3. The battle of infrastructure is over, the battle of applications is about to begin

Over the holidays I had the opportunity to visit the Churchill War Rooms in London. It contains a museum about the polymath politician, but naturally focuses on the central role he played during the Second World War.

Barely a month after he became Prime Minister, he was forced to announce the humiliating withdrawal of British troops from Dunkirk, but rallied his nation when he noted that “the Battle of France is over. I expect that the Battle of Britain is about to begin” and called upon British citizens to brace themselves to their duties and persevere in maintaining their independence. His stirring speech had the effect of moving beyond recriminations for what had happened and focusing the nation on the challenges ahead.

This year will witness a similar shift in the focus for IT groups. By the end of the year, many, if not most, will recognize that their struggle to meet the speed and functionality of the public providers is fruitless. More important, they will recognize that their major task is to deliver applications, and spending time agonizing over what infrastructure to use is useless, because that debate is over. IT organizations will adopt the fastest infrastructure available now, and then turn to how to deliver a new class of applications.

One way this focus is going to play out will be a ruthless war for talent. There is a limited pool of people who know how to build so-called cloud-native applications, and the struggle to hire them will be enormous.

It will be interesting to see how the HR organizations within companies seeking to bolster their technical staff will respond to this urgent need – will they push salaries up, or will they attempt to apply historic rates applicable to run-of-the-mill talent? I’m betting we’ll see lots of sticker-shock discussion in the industry as the implications of this new application focus become clear.

4. Reengineering enterprise IT

The now-cliched “software is eating the world” mantra trivializes something quite profound: the role of IT is shifting from “support the business” to “be the business.” The ongoing digitization of products and services means enterprise IT is the new factory (i.e., the manufacturing capability that turns out what the company markets and sells). And, just as many US manufacturing firms in the 1970s and 80s came under relentless pressure from new competitors – think Detroit car firms facing the onslaught of Japanese brands – so too will enterprise IT organizations find themselves encountering highly efficient technology organizations that threaten the established firms.

It’s important to remember that Japanese manufacturers arrived with more than a different form factor – small cars that used less gas than the standard American gas guzzlers. The Japanese implemented an entirely different manufacturing approach (pioneered, in a kind of bitter irony, by an American quality guru, W. Edwards Deming).

The net result of this was that Japanese cars were higher-quality, lower-cost and made their manufacturers boatloads of money. The entry of the Japanese car manufacturers changed the playing field for carmakers forever; from the mid-70’s on, if you wanted to be a major car company, you had to meet raised consumer expectations – or go out of business. Detroit struggled for years and finally emerged capable of building cars that matched the Japanese on a quality level, but by the time that occurred over 50% of their market share had vaporized, never to be recovered.

Enterprise IT groups are now being assailed by the same sort of challenge. Companies like Airbnb and Netflix have developed new kinds of organizations, processes, and tooling, and have transformed the world of IT. Instead of once- or twice-a-year releases, these new model companies release once or twice an hour.

CEOs of mainstream companies are looking at their IT organizations and presenting an ultimatum: get good or get out. This year will see IT organizations confronted by the need to transform their performance by reengineering themselves.

Expect to see organizational anguish as those CEO ultimatums get passed down to individuals and groups within IT. The pressure to improve will be relentless, and the penalty for failure will be extreme.

This year will see enterprise IT groups desperately try to figure out how to break down organizational silos and implement DevOps. There will be a search for new tools and processes capable of accelerating application delivery. IT organizations will seek to emulate cloud-native companies and will come to recognize how much work it takes to achieve continuous integration and delivery – but they’ll redouble their efforts to move toward those capabilities, driven by the awareness that there is no acceptable alternative.

5. Containers emerge as the future of applications

One of the biggest roadblocks for enterprise IT groups to achieve cloud-native capabilities is the primary execution environment they use – virtual machines. That’s not to say you can’t realize a streamlined application lifecycle using virtual machines; after all, companies like Amazon, Uber and Pinterest accomplish this, thank you very much.

It’s just that most enterprise IT groups treat virtual machines like physical machines, and manage them with legacy processes that repeatedly rebuild them at every step in the application lifecycle.

The new standard for application execution environments is containers, and they will revolutionize the practices of enterprise IT. Containers are much more easily migrated among groups and facilitate continuous delivery. I expect to see containers emerge as the foundation for enterprise IT reengineering efforts, once it becomes clear that attempting to accelerate application delivery timeframes is impossible if every group begins by rebuilding virtual machines and installing application components.

There are other advantages to containers, too. For one, they are much lighter weight than virtual machines, which raises hardware utilization rates and improves economics. For another, they start much more quickly than virtual machines. Rapid instantiation supports the elastic characteristics of cloud-native applications, which commonly experience highly erratic workloads.
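
As a rough illustration of that startup-speed difference, here is a minimal sketch using the Docker SDK for Python (the "docker" package). It assumes the SDK is installed and a local Docker daemon is running; the image tag and printed timing are illustrative, not a benchmark.

    # Sketch only: start a throwaway container and time it. Requires the
    # "docker" Python package and a running Docker daemon (assumptions).
    import time
    import docker

    client = docker.from_env()

    start = time.time()
    container = client.containers.run(
        "alpine", "echo hello from a container", detach=True
    )
    container.wait()                      # container starts and exits in seconds
    print(container.logs().decode().strip())
    print("elapsed: {:.2f}s".format(time.time() - start))
    container.remove()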

This year will see not the replacement of virtual machines by containers, but the growing awareness that any future plans for enterprise applications must assume containers as the default execution environment. Attempting to achieve the kinds of enormous changes necessary to become cloud-native is simply impossible with typical virtual machine use.

In conclusion, 2016 is the year that I expect to see the implications of what has been happening over the past few years to become starkly obvious. The attempts to incrementally improve enterprise IT with cloud-washed offerings from vendors who collude with staff to continue traditional practices will be exposed as the failure they are. This year will witness a recognition on the part of companies – if not their IT organizations – that incremental improvement just won’t cut it. Far more serious measures are required, and when IT groups begin grappling with how to implement those … well, expect monumental disruption. Over the course of the year, new solutions that take a root-and-branch approach to change will become preferred, and beware IT leaders or personnel who attempt to evade embracing those solutions.

Author: Bernard Golden

Named by Wired.com as one of the 10 most influential people in cloud computing, Bernard Golden serves as vice president of strategy for ActiveState Software, an independent provider of CloudFoundry. He is the author of four books on virtualization and cloud computing, his most recent book being Amazon Web Services for Dummies. Learn more about him at www.bernardgolden.com

 

Article from: CIO.com

911 for Red Hat Linux Training in Kochi India

Sappio Consultancy Services, the 911 for Red Hat Linux Training in Kochi India:

#SanjuRaj, an adept #innovator who has taken #Sappio ahead with his #commitment, deeply held values, and years of #practice and #perfection.


Having cleared the rigorous and hands-on #RHCA #Level IX, #SanjuRaj has now propelled himself forward in his potential-rich field and added a fine feather to his #success cap.
#Learn from his #expertise
Nurture your #opensource talent
#Boost your skills
Keep pace with the #ubiquitous and evolving #technology

 

#RHCI #RHCE #RHCSS #RHCDS #RHCVA #EX210 #EX310 #EX442 #EX436 #EX401 #EX236 #EX318 #EX413 #EX333 #EX423 #EX429 #EX248 #EX200 #EX300 #Linuxtraining #RedHat

Red Hat Certification Result Mar 2016

Congrats for Achieving Red Hat Certified Architect – RHCA

Exams are not just a test of brilliance, but the perseverance to be brilliant constantly. Congratulations.

Congrats Kiran KH for Achieving your Red Hat Certified Architect (RHCA) Certification.


sappio.in

Red Hat Certification Result Feb 2016

“Happiness does not come from doing easy work but from the afterglow of satisfaction that comes after the achievement of a difficult task that demanded our best.”

Congratulations to James Johnson and Gaurav Kumar Gupta for the achievement.

 

Sappio Red Hat Schedules for Jan 2016 at Kochi India

We are glad to announce our upcoming instructor-led training on Cloud Platform/OpenStack (CL210), i.e., RHCSA in Red Hat OpenStack.

Our course covers all the fundamentals of Infrastructure as a Service (IaaS) to give you the edge that makes you an OpenStack Administrator.

Course Dates: Jan 21(Thu) – Jan 24(Sun), 2016

 

 

Sappio Red Hat Schedules for Dec 2015 at Kochi India

CL210: RED HAT OPENSTACK ADMINISTRATION BATCH @ KOCHI

Conducted by Red Hat Certified Instructor (RHCI)

Get the best cloud certification from Red Hat @ Sappio Consultancy Services.

The best Linux resource in India, with all instructors holding the Red Hat Certified Architect (RHCA) certification.

 

Red Hat Certification Result Nov 2015

Sharing Joy Of Success!!

Our first Red Hat certification exam resulted in 100% success.

Congrats to the candidates and the training team, driven by all Red Hat Certified Architects.