
Container-native virtualisation: Killing off the competition

During Red Hat’s annual technology conference in San Francisco, most of the discussion among customers, the reseller ecosystem and even the media centred on container technology, which proposes a different way of developing software.

Red Hat’s container platform is called OpenShift, but it consists of layer upon layer of Red Hat’s technology.

During a technical briefing panel, Red Hat’s product management head for Linux and virtualisation, Gunnar Hellekson, described the container platform as being built on top of the company’s enterprise-grade Linux operating system, Red Hat Enterprise Linux (RHEL).

In essence, OpenShift is a large-scale container infrastructure that is built on top of Linux and relies on significant Linux subsystems to be successful.

“The OS now is more than just the kernel,” Hellekson said, adding that people have become accustomed to treating the operating system as a portfolio of libraries, tools, applications, and dependencies.

“And more and more are looking to containerise these tools and applications and dependencies so that they can be consumed in containerised environments.”


A lot of container-first initiatives are popping up, according to Joe Fernandez, VP of Product for the Cloud Platform Business. He described containerisation as a way to prepare an application to move to the cloud. “If an application isn’t ready to go to the cloud, when you put it in a container and enable it with Kubernetes, you can just swap out the infrastructure: from a virtualised environment, to bare metal, to public or private clouds.”

More importantly, it expands the types of applications you can think of running in the cloud, according to Fernandez; pertinently, it addresses legacy and traditional apps.

Fernandez also pointed out that Red Hat’s container platform OpenShift is not just Kubernetes. “(The platform) needs a solid foundation to run on, and you see a lot of innovation like the operating system, Red Hat CoreOS, Atomic, storage, networking and virtualisation.”

Kubernetes – conducting the orchestra

So, there are many compelling reasons for using containers, the first being low resource consumption compared to virtual machines (VMs): a minimal VM can be 1GB in size, and each VM carries the overhead of its own operating system. Containers, on the other hand, are stacked on a shared operating system layer; the OS is shared, and each application above it runs in its own sandbox.

Red Hat’s container platform, OpenShift, also enables agile movement of resources and even workloads across multiple environments, be they virtualised, bare metal, or public or private cloud.

VM-based applications are fairly monolithic, and Glenn West, Red Hat’s Cloud Strategist and Principal Engineer, explained that when these apps are decomposed, most of them turn out to contain multiple components and services. “In a container, you only see what is ‘yours’. This isolation is foundational at the container level.”

The purpose of decomposing VM-based apps is to containerise each component and service, so that they each can be upgraded independently.

“Now there may be more moving pieces when the app is decomposed, so how do you orchestrate and put these pieces together? This is where Kubernetes comes in. It is the way to describe all these pieces and put them together, on private networks, for example,” West said.
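To make that concrete, here is a minimal sketch of how one decomposed piece might be described to Kubernetes: a Deployment built as a Python dict and serialised to JSON (Kubernetes accepts JSON as well as YAML manifests). The service name "orders" and the image path are hypothetical, not from the briefing.

```python
import json

# Hypothetical Deployment describing one decomposed service ("orders").
# Kubernetes keeps three replicas of this container running wherever
# capacity exists: bare metal, on-premise VMs, or cloud VMs.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "orders"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "orders"}},
        "template": {
            "metadata": {"labels": {"app": "orders"}},
            "spec": {
                "containers": [
                    {
                        "name": "orders",
                        # Hypothetical image path, for illustration only.
                        "image": "registry.example.com/orders:1.0",
                    }
                ]
            },
        },
    },
}

manifest = json.dumps(deployment, indent=2)
print(manifest)
```

Each component of a decomposed app would get a manifest like this, which is what lets every piece be upgraded independently.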

Software-defined networking (SDN) starts to become crucial here: it can ensure, for example, that databases are as isolated as possible from the app, and hence secured.

This is powerful stuff, but when there are hundreds of applications in the system and you want to control which app gets which component, provisioning that on physical networks can take up to six months. So SDN is the way to go, and Kubernetes orchestrates all of this across bare metal, VMs on premises, or VMs in the cloud.
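The database-isolation idea above maps naturally onto a Kubernetes NetworkPolicy. Below is a hedged sketch, again as a Python dict serialised to JSON: it allows ingress to pods labelled `role: db` only from pods labelled `role: app`, on the database port. The labels and port (5432, PostgreSQL's default) are assumptions for illustration.

```python
import json

# Hypothetical NetworkPolicy: only the app's pods may reach the database
# pods, and only on TCP 5432. Everything else is denied by the policy.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "db-isolation"},
    "spec": {
        # The pods this policy protects.
        "podSelector": {"matchLabels": {"role": "db"}},
        "policyTypes": ["Ingress"],
        "ingress": [
            {
                "from": [{"podSelector": {"matchLabels": {"role": "app"}}}],
                "ports": [{"protocol": "TCP", "port": 5432}],
            }
        ],
    },
}

print(json.dumps(policy, indent=2))
```

This is the kind of rule that would take months to replicate with physical network changes, but is a single declarative object under SDN.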

Kubevirt – more pieces to play with

West said the more different pieces of technology there are, the more complex they become. Kubevirt comes in when enterprises want Kubernetes to specify VMs as well as containers: you can spin up virtual machines and manage them as containerised workloads.

“This is something I’ve been advocating to be implemented for years! To have an integrated container and virtual-based infrastructure, so you can mix legacy and modern apps together.”

Kubevirt is an upstream community project for container-native virtualisation, slated to be productised in under a year, to address the apps that have not been able to leverage the benefits of containerisation.
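What "specifying a VM to Kubernetes" looks like in practice is a KubeVirt `VirtualMachine` resource, which the same scheduler and tooling handle alongside container workloads. A hedged sketch follows, using the v1alpha3 API that was current around this time; the VM name, memory size and disk image are hypothetical.

```python
import json

# Hypothetical KubeVirt VirtualMachine: a legacy app packaged as a VM
# disk image, but declared and scheduled through Kubernetes like any
# other workload.
vm = {
    "apiVersion": "kubevirt.io/v1alpha3",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app"},
    "spec": {
        "running": True,  # start the VM as soon as it is created
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [
                            {"name": "rootdisk", "disk": {"bus": "virtio"}}
                        ]
                    },
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # Hypothetical image: the VM disk shipped inside
                        # a container image.
                        "containerDisk": {
                            "image": "registry.example.com/legacy-vm:latest"
                        },
                    }
                ],
            }
        },
    },
}

print(json.dumps(vm, indent=2))
```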

West also pointed out: “The interest in containers has been skyrocketing for years, but we still have the reality of the legacy part that can’t run in them. Virtual machines and containers have had to run in their own separate silos, which incurs a large overhead, but now I can have one integrated system with unified scheduling, orchestration, networking and administration, at large scale.”

The cloud strategist also spoke of a tagging feature for physical and virtual machine hosts that enables placement decisions. For example, if app A and app B hold key-value stores for each other, you would want all communication between the two apps to stay on the same host. Using tagging and Kubernetes, you can specify that they are ‘close’ together.

“But if one app is to audit the other, it has to be on a different host, and you can specify that they should be ‘far’ apart in Kubernetes.”
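The ‘close’ and ‘far’ placement described above corresponds to Kubernetes pod affinity and anti-affinity. The sketch below builds both stanzas with a small helper; the app label "app-a" and the helper name are hypothetical, and `kubernetes.io/hostname` is the standard topology key meaning "same host".

```python
import json

def placement_relative_to(app_label, anti=False):
    """Hypothetical helper: build an affinity stanza that pins a pod to
    (anti=False) or away from (anti=True) hosts running pods labelled
    app=<app_label>."""
    term = {
        "labelSelector": {"matchLabels": {"app": app_label}},
        # Host-level granularity: 'same host' vs 'different host'.
        "topologyKey": "kubernetes.io/hostname",
    }
    key = "podAntiAffinity" if anti else "podAffinity"
    return {key: {"requiredDuringSchedulingIgnoredDuringExecution": [term]}}

# 'close': schedule on the same host as app-a (shared key-value stores).
close = placement_relative_to("app-a")
# 'far': schedule away from app-a's host (e.g. an auditing app).
far = placement_relative_to("app-a", anti=True)

print(json.dumps({"close": close, "far": far}, indent=2))
```

Either stanza would go under a pod template's `spec.affinity` field.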

He also drew attention to customers at Red Hat’s 2017 summit who had adopted OpenShift heavily but still had significant applications that could not be containerised. With Kubevirt, he explained, “the parts I can’t put into OpenShift, I now can, with the powerful and consistent orchestration of Kubernetes.”

At this rate, it would also seem that legacy virtualisation vendors are slowly being edged out of relevance.
