Containers and container-based services, such as private Platform-as-a-Service solutions and Serverless platforms, are quickly becoming technologies of choice for new projects across the industry. These Cloud Native applications set new requirements for IT infrastructure and operations. Even if the technologies are familiar and well understood, the processes will in many cases need attention to meet the requirements of developers and businesses demanding faster time to market at lower cost.
The Datalounges Container Barometer 2020 study showed interesting results regarding the containerization of business applications and the adoption of container-based Serverless and private PaaS. Most surveyed organizations had only just started working with containers and were still defining the necessary processes for how to build, run, protect and understand containerized applications.
Some conclusions about what such IT services should look like can nevertheless be drawn. Analysis of the data shows that:
- Agility of IT is the most important driver, and managing containers comes second. Most surveyed companies said increasing the agility of IT was a key benefit and the reason to choose containers as the form factor for new applications, but very few actually manage containers systematically.
- Even though containers are often built by in-house developers, it is also quite common that partners and third-party software vendors provide enterprise applications in containerized form. In all of these scenarios IT operations has limited ability to influence how containers are constructed, and even fewer options to have changes made to containerized applications.
- Creating safe and secure applications is one thing, but having processes in place to build and run secure microservices on compliant infrastructure is another. Container orchestration capacity in Public Clouds in particular can blur the security landscape and complicate compliance management. In many cases there appears to be a significant gap between managing traditional virtual machine capacity and container orchestration capacity: even if the same rules apply, the same policies are not implemented.
If we set aside developing secure code and maintaining those applications as the responsibility of the developer, IT operations still plays an important role in infrastructure management, regardless of what that infrastructure is and where it runs. So what should IT operations be looking to develop in order to support working with containers?
A good starting point is that even if containers are in principle a well understood technology, they are not virtual machines and should not be treated as such. Containerized applications require IT organizations to adopt new practices to support container adoption.
Based on the study data, Datalounges recommends starting by looking at how containers find their way into the infrastructure, or, if the organization is not yet working with containers, at how containers might be used in the organization. Starting from adoption routes makes decisions easier down the road when selecting how to handle containers coming from different sources.
Typical examples would be:
- In-house developers working for the organization, on company-owned equipment inside the network, build containers on their own laptops, and the containers are then delivered to container capacity somewhere directly from those laptops.
- An application project is bought from a third-party developer who packages the applications as containers on their own laptop and provides container images as the result of the consulting engagement.
- An automation kicks off a workflow when changes are made to versioned code; the result is packaged as a container built from a base container image and provisioned via a private registry to production.
- A third-party software vendor provides a business application, which comes as a container and is handed to IT as an appliance to run.
- Developers use a private PaaS or Serverless technology that hides containerization and its management processes inside the technology. Containers are kept inside the service and are not visible to the rest of the IT fabric.
There are numerous other possibilities, but these few already highlight the challenges containers pose for IT operations.
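The automation route above can be sketched as a simple CI pipeline. The following is a minimal, hypothetical GitHub Actions-style workflow; the registry address, image name and secret names are placeholders for illustration, not anything prescribed by the study:

```yaml
# Hypothetical CI workflow: build a container image on every push
# to the main branch and push it to a private registry.
name: build-and-publish
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Log in to the (placeholder) private registry
      - uses: docker/login-action@v3
        with:
          registry: registry.example.com
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      # Build from a maintained base image and tag with the commit SHA
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: registry.example.com/myapp:${{ github.sha }}
```

The value of a route like this is that every production container comes from versioned code through one known, auditable path, instead of from someone's laptop.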
According to the study, most companies will have to support many such sources.
Security will be discussed in more detail in another post, but it deserves a mention here: how does security play with containers, given the different sources they come from? Certainly the requirements for security do not really change whether the application is deployed in a containerized form factor, as a virtual machine or as something else. Logs need to be collected, services must be monitored, users managed and vulnerabilities fixed.
What does this have to do with the source of the container, then? It is vital to consider where containers come from, because a decision has to be made about whether that source can be trusted. A policy for containers you did not package yourself is also different from a policy for containers you know little about. As an example, if you are responsible for an environment with compliance requirements, such as a web shop handling payment data or a service managing personal data, you need to consider how to manage known vulnerabilities affecting the packages that were used for the container image and for the application.
A policy here should allow containers into production only when such controls are in place, so that a decision can be made about what to do with containers that carry such vulnerabilities.
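One common way to implement such a control is a scan gate in the delivery pipeline: a scanner such as Trivy inspects the image's packages for known vulnerabilities and fails the job before anything is promoted to production. A minimal sketch of such a pipeline step, with the image name and severity threshold as illustrative choices rather than recommendations from the study:

```yaml
# Hypothetical pipeline step: scan the built image and fail the job
# (non-zero exit code) if HIGH or CRITICAL vulnerabilities are found.
- name: scan-image
  run: |
    trivy image \
      --exit-code 1 \
      --severity HIGH,CRITICAL \
      registry.example.com/myapp:${{ github.sha }}
```

Where to set the threshold, and whether a finding blocks or merely flags a container, is exactly the kind of policy decision each organization has to make per adoption route.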
For this reason it is valuable and important to consider how each of the container routes is managed, and to decide:
- where containers coming from each source are placed for production
- what will happen to them if, for some reason, they can no longer be trusted.
Certainly the security challenge has several dimensions, and depending on your environment you can control only some of them. For example, the orchestration platform in Public Clouds or regional service provider environments is something you may simply have to trust, much like the virtualization layer provided for virtual machines.
If you are looking to build your own capacity for containers, it is important to pay attention to networking and to maintenance of the Kubernetes platform, so that the components stay up to date and possible vulnerabilities are fixed. Kubernetes also provides plenty of opportunity to control how containerized applications interact, so with planning and preparation many of the challenges can be countered.
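As one example of the control Kubernetes provides, a NetworkPolicy can deny all traffic to and from the pods in a namespace by default, so that each containerized application can only talk to what has been explicitly allowed. A minimal sketch, with the namespace name as a placeholder:

```yaml
# Deny all ingress and egress for every pod in the namespace;
# further NetworkPolicies then allow only the intended traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: business-apps
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

A default-deny baseline like this is particularly useful when containers from less trusted sources share a cluster with business-critical workloads.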
Many compliance and security related questions for containers and containerized applications are very interesting. As an example, if you do not know or trust the source of a container, and specifically where it was built, can you run it in the same Kubernetes cluster as your critical business applications? We already discussed tracking the package levels of containers, but how are you planning to manage the lifecycle of containerized applications, especially if they are built by someone other than the IT ops traditionally responsible for platforms? The actions are different if you built the container yourself and the vulnerability is in a component you can update, or if the container came from a software vendor or developer partner.
Datalounges has created a model that helps manage container adoption and control how containers are consumed in different infrastructures.
Datalounges proposes the use of technologies and automations that help manage container adoption while letting developers keep the freedom to deliver applications quickly. Kiubit DevSecOps is a solution that allows developers to version software and provides a powerful pipeline that builds containers based on supported and regularly updated base images. These are then placed in a private registry that scans the packages for vulnerabilities and stops dangerous containers from being provisioned to managed container orchestration production. Datalounges strongly recommends using container images that come from a trusted source and are regularly updated; this forms the spine of container package maintenance.
To maintain the much sought-after agility, developers are allowed to deliver application code automatically to test clusters, but when containers are marked with production flags, Ops teams handle the provisioning, providing the Dev-to-Ops handover mechanic. Datalounges uses a powerful Ops tool to easily provision containers to any Kubernetes environment. Together these components create a DevMesh called Kiubit DevSecOps that can be easily deployed to any Kubernetes environment.
In addition to managing the adoption and then the delivery to different container orchestration infrastructures, containers also need to be monitored and managed. For this purpose Datalounges proposes R4DAR, which allows users to monitor containers and infrastructure, collect logs and metrics, and visualize all that data into a single-pane-of-glass view of the cloud.
In addition to technology, Datalounges has set up methodologies to support container adoption efforts.
Strategy development starts with planning the adoption routes and deciding on policies. Then, based on the risks related to containers, decisions can be made about how containers are eventually consumed and where each type of container is sent for production.
Containers are going mainstream and will likely be a significant part of most Hybrid Cloud infrastructures. Planning and preparing for containers is a significant but valuable effort. Datalounges would be privileged to share the insights from such projects and to discuss with your organization how to build a containerization strategy and set up capacity at Datalounges, at your service provider partner or in the Public Cloud.
Kim Aaltonen | kim.aaltonen@datalounges.com
Kim looks after the business at Datalounges. Does not touch technology much, but more than 20 years of looking over other people’s shoulders has taught a lot about the infrastructure business. The other sales guy.