
Cloud-native app delivery, where we’re heading in 2021


Alois Reitbauer

March 11, 2021

The application delivery tooling space has been maturing over the last couple of years. Traditional pipelines served well for many years because they fit monolithic software, but with the advent of microservices came the DevOps movement, which introduced new practices requiring a different set of tools with new capabilities. Continuous Integration and Continuous Delivery took a central role in how modern software was built and shipped. But early-stage continuous delivery pipelines and approaches are starting to reach their limits with significantly larger microservice-based applications delivered at a much faster pace.

Containers, Kubernetes, and cloud-native application development have led to the establishment of new practices, with continuous delivery moving towards progressive delivery. GitOps also introduced a new operations model driven by declarative software delivery.

To keep pace with technology, you need to work on modernization whilst at the same time selecting new solutions in a fast-paced market.

This post is not intended to provide recommendations, but rather to show where the market is moving and to offer food for thought for the selection of delivery technologies for your project.

First things first: what does cloud-native mean?

Everybody is talking about cloud-native these days and while it seems to have become a ubiquitous term in the industry, it means slightly different things to different people.

Luckily the Cloud Native Computing Foundation (CNCF) provides a very good and concise description of the concept:

… Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil….

Source: https://github.com/cncf/toc/blob/master/DEFINITION.md

Now that we have a common understanding of what cloud-native means, we can go on and look into what trends are currently shaping this space.

The decoupling of Continuous Integration and Continuous Delivery

Most people are talking about CI/CD pipelines as if the two need to be tightly linked. In a pre-container world, this was often true and the “d” was also more related to deployment than actual delivery.

This, however, massively changed with containers. Before containers became the primary means of shipping modern software, specific knowledge about how to run an artifact was required. This led to a tight coupling of the build and run phases of software delivery.

Containers now provide standardized artifacts, and Kubernetes offers a standardized way of running them. An operator can run a container without understanding all the details of the technologies inside it. Helm charts are a great example: they automatically install and configure entire software stacks without requiring a detailed understanding of their internals.
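
To make this concrete, here is a minimal sketch of what operating a packaged stack looks like with Helm. The chart and values below are hypothetical; the point is that the operator supplies a handful of high-level settings while the chart renders all the underlying Kubernetes resources.

```yaml
# values.yaml: hypothetical overrides for a packaged web stack.
# The chart itself renders the Deployments, Services, Ingress, etc.
replicaCount: 3
image:
  repository: registry.example.com/my-service   # placeholder registry/image
  tag: "1.0.0"
ingress:
  enabled: true
  host: my-service.example.com                  # placeholder hostname
```

The operator never needs to know which Deployments, ConfigMaps, or Services the chart creates under the hood.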

A key indicator of this trend of separating CI from CD is that we are starting to look at who is responsible for these tasks, as they each require a different set of skills.

Whilst Continuous Integration requires a lot of technology-specific know-how about how to create artifacts (somebody who knows how to build a container for a Java application most likely cannot do the same for a Go application), this knowledge is not required for deployment in a cloud-native world where container images are the standardized delivery unit. Deployment instead requires a different kind of knowledge: how to manage blue/green deployments, test new releases, and run operational procedures.

Declarative models are on the rise

Early-stage automation was mostly built following a procedural approach, with tools like Ansible, Puppet, or Chef as well as traditional CI scripts for Jenkins and other tools. Kubernetes and the operator pattern have given rise to a more declarative model: people pass manifests defining what the environment should look like via the API, and controllers then take care of creating and managing the proper resources.

The operator model also includes a reconcile loop which continuously checks whether the desired state — the state which was defined — matches the actual state. If the two states do not match, corrective measures are taken automatically. This makes deployments significantly easier and even allows the automation of simple operational tasks.
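
A standard Kubernetes Deployment illustrates this pattern. The manifest below (with a placeholder image) only declares the desired state; the Deployment controller's reconcile loop then creates or replaces Pods until the actual state matches it.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                    # desired state: three running Pods
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0.0   # placeholder image
```

If a Pod crashes or a node disappears, the controller detects the drift and recreates the Pod; nobody has to script the corrective steps.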

Declarative models can also be found outside of Kubernetes. AWS CloudFormation or ARM Templates for Azure provide a similar concept without the reconcile part.

Higher-level abstractions based on templates

Declarative definitions are great, but still require a lot of work. It’s common to hear complaints about the number of YAML files required for the configuration of cloud-native applications.

With additional platform capabilities like service meshes, security policies, and even domain-specific custom resources, the amount of configuration becomes a heavy burden and quite complex. Additionally, each component requires specific knowledge to configure properly, which first has to be learned.

Kubernetes is designed as a platform on top of which domain-specific platforms can be built. Its great extensibility and flexibility provide almost infinite possibilities for customization. The resulting complexity of all the individual configuration items and their dependencies needs to be hidden by higher-level platforms that are built on top.

These platforms are slowly emerging. Last year at KubeCon, Bryan Liles spoke about the Rails moment for Kubernetes. This year, Lei Zhang talked about the Heroku moment for Kubernetes. The basic narrative is very similar: introducing higher-level abstractions that make Kubernetes easily accessible without first having to build up a lot of prior knowledge.

Companies using Kubernetes at scale or at a certain maturity have started to provide internal templates for developers, abstracting away deployments, service mesh configuration, ingress, policies, and other primitives. In some cases, developers need to provide little more than a container image.

I am confident that we will reach a level of abstraction allowing us to deploy services with a single line like

create myService as APIService using containerimage

while still being flexible enough to fine-tune configuration when needed.

This is an area that is still emerging but there are already a number of interesting projects in this space:

  • Sophisticated and powerful Helm templates can make configuration significantly easier. A good example is OneChart, part of gimlet.io, which among other things provides a nice way to simplify the manifest writing process.
  • A more advanced approach is the Open Application Model (OAM), with KubeVela as its reference implementation (see the sketch after this list).
  • Ketch is another example of an open-source project providing an opinionated approach to configuring Kubernetes resources.
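
To give a feel for such an abstraction, here is a sketch of an OAM/KubeVela Application (field names vary slightly between KubeVela versions, and the image and port are illustrative). The developer states little more than a component type and a container image, and the platform expands this into the underlying Kubernetes resources.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-service
spec:
  components:
    - name: my-service
      type: webservice                                   # platform-provided component type
      properties:
        image: registry.example.com/my-service:1.0.0     # illustrative image
        port: 8080
```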

Standardized building blocks

As the industry moves towards higher-level abstractions and the ecosystem matures, standardized building blocks are starting to emerge that aim to find common solutions to shared problems.

The Helm best practices are a great example of an emerging good practice. Another example is the ecosystem around the Open Policy Agent (OPA), which provides a lot of ready-made policies for different technologies.
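
The Gatekeeper project around OPA illustrates the building-block idea well: policy templates are shared and reused, and individual teams only instantiate them. The sketch below assumes the community K8sRequiredLabels template from the Gatekeeper policy library is already installed in the cluster.

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels              # type provided by the shared policy template
metadata:
  name: require-owner-label
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]                # every Namespace must carry an owner label
```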

In the delivery space, we have not yet reached this level of agreement. Kelsey Hightower once quipped that he cannot watch any more demos of a canary deployment on Kubernetes, but at the same time, different tools handle the definition of canary deployments quite differently.

While there are conceptual differences, many concepts are very similar, which makes building blocks for application delivery a good candidate for standardization.
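
As one data point, this is roughly how Argo Rollouts expresses a canary today (abridged; the traffic weights and pause durations are arbitrary examples). Flagger, by contrast, attaches a separate Canary resource to an existing Deployment, so even this common pattern is declared in structurally different ways.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-service
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-service
  strategy:
    canary:
      steps:
        - setWeight: 20            # route 20% of traffic to the new version
        - pause: {duration: 10m}   # observe metrics before continuing
        - setWeight: 50
        - pause: {duration: 10m}
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0.0   # placeholder image
```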

GitOps growing up and becoming more than kubectl apply of a git repo

The GitOps concept, initially coined by Alexis Richardson from Weaveworks, is becoming the de-facto standard for shipping cloud-native applications on Kubernetes.

As a concept, it borrows a lot from the operator and declarative patterns in Kubernetes, which align the desired state of a system with the actual state using reconciliation. Recently, the concept got a bit diluted by being reduced to a kubectl apply on the contents of a Git repo. This might be caused, amongst other reasons, by the simplicity of early tools in the space, which have limited reconciliation capabilities.
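
A full GitOps implementation goes further than a repeated apply. As a sketch, an Argo CD Application with automated sync, pruning, and self-healing enabled keeps reconciling the cluster against the repository and reverts manual drift (the repository URL and paths below are placeholders).

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-service-config   # placeholder repo
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert changes made directly on the cluster
```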

The new GitOps working group within the CNCF will work on leveling up the concept to its full potential and on improving collaboration across the industry.

There are already approaches to taking GitOps to the next level by not only providing manifests and templates as part of the process, but also extending the artifacts with SLI and SLO definitions, which are then validated as part of the remediation process. The Keptn project uses this as a central concept of its GitOps implementation.
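
For illustration, a Keptn SLO file looks roughly like the sketch below (the thresholds are made up for the example): the objectives travel through the pipeline alongside the manifests and are evaluated as a quality gate.

```yaml
# slo.yaml: evaluated by Keptn as part of the delivery process
spec_version: "1.0"
comparison:
  compare_with: "single_result"
  include_result_with_score: "pass"
  aggregate_function: avg
objectives:
  - sli: response_time_p95
    pass:
      - criteria:
          - "<=+10%"   # at most 10% slower than the last good evaluation
    warning:
      - criteria:
          - "<=800"    # absolute limit in milliseconds
total_score:
  pass: "90%"
  warning: "75%"
```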

Deeper tie-in of operational models

The majority of application delivery practices treat delivery as finished once the application runs successfully in production (or has been rolled back due to problems). With progressive delivery, as coined by James Governor, developers and SRE teams have powerful and flexible tools at hand to ship software safely and at high velocity.

Operational support via automated runbooks that are shipped alongside the application is not yet common practice. The operator capability model has some initial thoughts on this topic, however, these concepts are not widely adopted.

Delivering operational instructions more frequently along with releases is a challenge that needs more attention as I described in an earlier blog post. Operational problems are also becoming more complex in a microservice environment compared to a simple(r) multi-tiered application.

More building blocks, fewer solutions

There is already a trend in which projects increasingly try to cover all aspects of application delivery by adding additional capabilities, rather than doing one thing — and only one thing — very well. Just look at Argo, which comes as a whole set of tools forming a solution, or Flux, which added Flagger to the main project.

Many of the components or tools can be used independently as well. However, the goal is to create bigger solutions that have dependencies on each other.

At the same time, user adoption patterns look different. People often start with a specific capability and then gradually add new capabilities. Often different teams also try different solutions and then the best solution gets adopted more widely.

People want to mix and match the specific tools they prefer. This requires more interoperable building blocks which can easily be combined across different tooling stacks. The Tekton project also has an interesting approach with the Tekton Hub to provide standardized tasks for delivery.
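
The idea is that instead of every team scripting its own image build, a pipeline simply references a shared, versioned task. A minimal sketch, assuming the community kaniko task has been installed from the Tekton Hub:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-push
spec:
  params:
    - name: image
      type: string
  workspaces:
    - name: source                 # holds the checked-out sources
  tasks:
    - name: build
      taskRef:
        name: kaniko               # shared task installed from the Tekton Hub
      workspaces:
        - name: source
          workspace: source
      params:
        - name: IMAGE              # parameter defined by the shared task
          value: $(params.image)
```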

End-to-end integration of tooling

Practitioners like to mix and match solutions rather than using every single tool from a suite. In a modern polyglot environment, the tooling requirements of different teams also demand more flexibility, as it becomes hard to find a single tool that fits everybody’s needs.

Even with tooling suites, one project or provider will never offer everything needed to build an end-to-end toolchain. Alongside a continuous delivery tool, you will still need a load testing tool, a chaos engineering tool, ChatOps tooling, issue tracking, and so on.

Integrating these tools today is often a lot of work as you have to deal with a lot of proprietary APIs and integrations are hard to test. This massively increases the time to value and makes it incredibly hard to exchange tools when needs or requirements change.

The CDF interoperability SIG is already working on getting some of these interoperability standards off the ground. The Keptn project has started to define a first version of a much wider set of interoperable definitions, covering:

  • The definition of delivery and operation automation flow
  • Standardized integrations for tools
  • Lifecycle events/commands to be consumed across tools (see the sketch after this list)
  • Monitoring-tool-independent definitions of SLIs and SLOs
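
To make the lifecycle-event idea from the list above concrete, such an event could look roughly like the following YAML rendering of a CloudEvent. The event type and payload fields here are purely hypothetical and not a published schema.

```yaml
specversion: "1.0"
type: com.example.deployment.finished       # hypothetical event type
source: /delivery/my-service
id: 0b4f2b1c-55e7-4f11-9d4e-1a2b3c4d5e6f    # hypothetical event id
time: "2021-03-11T10:00:00Z"
data:
  service: my-service
  stage: staging
  result: pass                              # consumed by testing or observability tools
```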

Security as a core building block

Recent news has shown that security management and control is a key part of software delivery practices. Container scanning and automated vulnerability detection will become widely adopted and a standard practice. Tools like Trivy are open source and can easily be integrated into modern CI systems. Additionally, container images need to meet higher standards, such as being built from proper tags and being rootless and distroless.
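
As a sketch of how lightweight this integration can be, a GitHub Actions job using the Trivy action might look like the following (the image name is a placeholder):

```yaml
# .github/workflows/scan.yml: fail the build on serious vulnerabilities
name: container-scan
on: [push]
jobs:
  trivy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Scan container image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: registry.example.com/my-service:1.0.0   # placeholder image
          severity: CRITICAL,HIGH
          exit-code: "1"     # non-zero exit code fails the pipeline on findings
```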

Projects like Cloud Native Buildpacks can also help to build more secure applications. While not a security tool per se, they standardize and simplify the creation of container images, encapsulating a lot of this knowledge in a way that is transparent to the individual developer.

Conclusion

The adoption of cloud-native is driving a lot of innovation in the continuous delivery space.

Different developments are at different stages of maturity. What is obvious, though, is that it is time to level up the early-stage CI/CD approach to a more modern mindset.

Based on those aforementioned trends, these are the key initiatives that should be on your modernization agenda:

  • Decouple continuous delivery from continuous integration
  • Move towards a declarative model and avoid custom scripts where possible
  • Standardize and provide templates for scalability and faster adoption
  • Extend automation beyond deployments to cover operations
  • Integrate security

For further information on this topic, I can only recommend this article by the Continuous Delivery Foundation: 2021 Technology Trends and Predictions. You will see quite a bit of overlap with the topics discussed in this blog post, which can be seen as an indicator that the industry as a whole is working on a common set of challenges and heading in the same direction.

Last but not least, I want to thank Tracy Miranda and Lei Zhang for their valuable feedback.

