DevOps Blog

In-depth content about the most relevant Cloud industry topics, DevOps culture, tools and tutorials.

The Complete Overview of DevOps Cloud Native Tools Landscape

Written by Mantas Levinas
on September 25, 2019

Cloud native technologies are revolutionizing the way applications are delivered. They serve as an excellent complement to DevOps by providing the tools and platforms to enable automation and scalability. However, even for those in the industry, understanding what cloud native is (and isn’t) and navigating the entirety of the cloud native landscape can be a challenge.

Fortunately, the Cloud Native Computing Foundation (CNCF) is helping to standardize the cloud native space and making cloud native more accessible. Their Cloud Native Landscape interactive map provides us with a list of tools and services that enable cloud native computing. Here, we’ll take that a step further and break down each section of the current cloud native landscape. By the end of this piece, you should have a firm grasp of the different aspects of cloud native computing, as well as an understanding of many of the most popular tools to implement them.

 

What is cloud native and what does its landscape look like?

One of the more admirable things the CNCF has done is give the term “cloud native” an authoritative definition. It should help us avoid much of the ambiguity surrounding other terms commonly used in the industry, such as the confusion around the differences between DevOps and Agile. To paraphrase the CNCF, cloud native tech enables organizations to build and run apps in a scalable and dynamic way. Technologies like containers, microservices, immutable infrastructure, and declarative APIs are important parts of cloud native.

The cloud native landscape is the list of tools, services, and platforms that make up the current cloud native ecosystem. Many popular DevOps tools can be found in the current list. They are broken up into categories based on the functionality they provide and the layer of the cloud native stack they reside in. As opposed to enumerating every tool in the list, in the sections that follow, we’ll explain each category and review some of the most popular tools available.

 

App definition and development

The app definition and development category of the cloud native landscape focuses on components directly involved in enabling application functionality, communication between microservices, application data storage, and image creation.

Databases

Databases have long been a fundamental aspect of application development. In a traditional LAMP/LEMP stack, the database needs to reliably, securely, and quickly enable CRUD operations for the app. Typically, some sort of DBMS (Database Management System) like MySQL, Postgres, or MariaDB was used.

With traditional databases, all the files and resources had to reside on the same host for proper functionality. It's this requirement that creates a challenge for cloud native computing. Databases for cloud native apps cannot have a single point of failure and need data to be spread across multiple servers. That is something distributed databases do very well.

Distributed databases enable more resilient and modular application development in the cloud. Data is stored by replicating and duplicating it across multiple discrete servers. By doing so, cloud native developers can increase performance and durability of their systems.
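To make the replication idea concrete, here is a toy Python sketch of replicated writes with quorum reads. The `ToyDistributedStore` class and its behavior are invented for illustration only; real distributed databases add consensus protocols, partitioning, and network transport on top of this basic idea.

```python
# Toy sketch: replicate every write across nodes, read with a quorum.
class ToyDistributedStore:
    def __init__(self, replicas=3):
        # Each "node" is just a dict standing in for a separate server.
        self.nodes = [{} for _ in range(replicas)]

    def write(self, key, value):
        # Replicate the write to every node so no single node is a
        # point of failure.
        for node in self.nodes:
            node[key] = value

    def read(self, key, quorum=2):
        # Read from all nodes and require a majority to agree.
        values = [node.get(key) for node in self.nodes]
        for v in values:
            if values.count(v) >= quorum:
                return v
        raise RuntimeError("no quorum reached for key: %r" % key)

store = ToyDistributedStore(replicas=3)
store.write("user:42", {"name": "Ada"})
store.nodes[0].clear()           # simulate one node failing and losing data
result = store.read("user:42")   # the remaining replicas still agree
```

Even with one node wiped out, the read succeeds because a majority of replicas still hold the data, which is exactly the durability benefit described above.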

Some of the most popular tools in this category are:

  • Apache Hadoop- Technically, Hadoop is Apache's framework for distributed processing, while Cassandra is Apache's distributed NoSQL database management system.
  • MySQL- You may be familiar with MySQL if you come from a background of spinning up LAMP/LEMP stacks for web applications. MySQL Cluster CGE is the organization's distributed database offering focused on enabling scalability and high availability (HA).
  • TiKV- This open-source distributed NoSQL database provides two key benefits: transactional APIs and ACID compliance. TiKV is incubated by the CNCF as well.

Streaming & Messaging

Microservices architecture is a cornerstone of both DevOps and cloud native computing. For it to work, distinct services must be able to communicate with each other while maintaining data consistency and avoiding data corruption.

Streaming services and message brokers are the middleware of cloud native. They formalize the way distinct nodes communicate with each other in the microservices-enabled system. AMQP is one of the most commonly used protocols here, but there are others such as MQTT and STOMP.

Each service and protocol works slightly differently, but the underlying principle for messaging is the same: message or event “producers” send information to an intermediary (the broker) that delivers it to “consumers” or event receivers to act on the information. There’s a bit of debate on what does and does not constitute streaming, but in a nutshell streaming is simply the ability to do messaging at scale. For a deep dive into the topic of streaming and messaging, check out Roger Rea’s Streaming Analytics: To Stream or Not to Stream? article.
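The producer → broker → consumer flow can be sketched in a few lines of Python. The `ToyBroker` class below is purely illustrative; real brokers like RabbitMQ or Kafka add persistence, acknowledgements, consumer groups, and network transport.

```python
# Toy sketch of the producer -> broker -> consumer pattern.
from collections import defaultdict, deque

class ToyBroker:
    def __init__(self):
        self.queues = defaultdict(deque)       # one queue per topic
        self.subscribers = defaultdict(list)   # consumer callbacks per topic

    def publish(self, topic, message):
        # A producer hands the message to the broker, never to consumers directly.
        self.queues[topic].append(message)
        self._deliver(topic)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
        self._deliver(topic)

    def _deliver(self, topic):
        # The broker forwards queued messages to whoever is subscribed.
        while self.queues[topic] and self.subscribers[topic]:
            message = self.queues[topic].popleft()
            for callback in self.subscribers[topic]:
                callback(message)

broker = ToyBroker()
received = []
broker.subscribe("orders", received.append)        # a consumer
broker.publish("orders", {"id": 1, "sku": "ABC"})  # a producer
```

Note that the producer and consumer never reference each other, only the broker and a topic name; that decoupling is what lets either side scale or fail independently.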

Popular cloud native streaming and messaging applications include:

  • RabbitMQ- A popular open-source message broker that supports a variety of messaging protocols.
  • Strimzi- Allows Kafka (Apache’s distributed message streaming platform) to run on a Kubernetes cluster.
  • Beam- A portable and extensible programming model that allows processing batches or streams of data and running them on a variety of execution engines. It supports popular programming languages like Java and Python.

Application Definition & Image Build

Creating application definitions (great explanation by Puppet here) and building images are a vital part of automating the development pipeline (which is a key component of DevOps). Reliable images ensure deployment processes can be scaled and reproduced, enabling automation and HA.

Services in this category make it easier to manage containers and clusters at scale. If you think about how containers and microservices work, the use case for these services begins to become clear. Microservices and containerization are all about breaking down large monolithic apps into smaller chunks. This is great for resilience and scalability, but it leaves you with many images that need to be configured, maintained, and run. Application definition and image building tools make it simple to define multi-container applications. Using Docker Compose, as a specific example, you can spin up multiple Docker containers as a single service.

Here are some of the most popular cloud native tools in the application definition & image build subcategory:

  • Docker Compose- Containerization is one of the most important aspects of cloud native and Docker is the leader in the containerization market. Docker Compose is used to define multi-container Docker apps. Compose uses YAML files for configuration and allows instantiation with one command.
  • Helm- Kubernetes can get complex at scale. Helm allows you to install, upgrade, and define (using YAML files) Kubernetes apps. While Helm may not be a household name yet, it is maintained by the CNCF and has the support of industry giants like Google and Microsoft. It is also quite popular on GitHub with 13,000+ stars as of this writing.
  • Packer- Packer makes it easy to automate the creation of machine images. It also comes with out of the box support for images compatible with a variety of platforms and is extensible. New platforms can be added to Packer by way of plugins.

Continuous Integration & Delivery

CI/CD is at the heart of DevOps processes. Building quality software quicker is what most DevOps teams that leverage cloud native tools strive for. We’ve covered CI/CD in-depth in our What is DevOps post, so I’ll only give a crash course here.

Continuous integration (CI) is all about making sure developers are working with the latest code build. The code in development cannot drift too far from the main branch, otherwise all hell will break loose. CI helps to ensure bugs are addressed quickly and keeps the main branch stable, ready to be deployed at any time.

Continuous delivery (CD) is the next logical step after implementing continuous integration. CD automates the code delivery pipeline by running different suites of automated tests and deploying your code to QA or staging environments. Some companies go further by implementing continuous deployment. The only difference is that continuous deployment automatically pushes your code to production after all the tests have passed.
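The core mechanic of any CI/CD pipeline is simple: run stages in order and stop at the first failure, so broken code never reaches the deploy step. Here is a toy Python sketch; the stage names and pass/fail lambdas are invented, and a real pipeline would shell out to actual build, test, and deploy tools.

```python
# Toy sketch of a pipeline runner: halt on the first failed stage.
def run_pipeline(stages):
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # a failed stage halts the pipeline
    return results

# Pretend stages standing in for real build/test/deploy commands.
stages = [
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("deploy-staging", lambda: False),  # simulate a failed deployment
    ("deploy-production", lambda: True),
]

results = run_pipeline(stages)
# deploy-production never runs because deploy-staging failed.
```

This fail-fast behavior is what keeps the main branch deployable: a red stage blocks everything downstream until it is fixed.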

Here are the most popular CI/CD tools listed in the cloud native landscape:

  • GitLab- GitLab is a GitHub alternative that is growing in popularity. The idea here is that you can minimize complexity by baking source code management, CI/CD, application monitoring and security into a single holistic solution.
  • Jenkins- The Jenkins automation server application is probably one of the first tools that come to mind when you think about CI/CD. Trusted by many DevOps teams, as well as being highly extensible and scalable, Jenkins is a popular choice for CI/CD in the world of cloud native.
  • Drone- Drone is a popular open-source continuous delivery system built on top of Docker. It uses YAML configuration files and is compatible with any environment that can run within a Docker container.

 

Orchestration & Management

A cloud native application built in a DevOps-driven organization is inherently scalable, automated, and resilient. This means the tools used to orchestrate and manage cloud native apps must be able to work at scale and trigger automated workflows.

In a nutshell, this category of tools is all about making microservices work together in harmony.

Scheduling & Orchestration

Deploying and managing a cluster of containers is one of the major operational tasks required to create a resilient, loosely coupled, and easily scalable cloud native application. Operating containers in a cluster also helps you enhance operational agility and mitigate the single point of failure problem: you no longer try to avoid failure, but rather prepare your application to handle it with minimal consequences.

If your application consists of just a few containers, tools in this category may be unnecessary. Scheduling and orchestration tools become valuable as cloud native applications scale. The benefit of these tools is that they abstract away the complexities of managing a large number of containers. As opposed to managing your application at the container level, they provide you with a framework to deploy, roll back, load balance, and configure self-healing functionality at scale. For a more in-depth intro to the topic, check out our Why you need Kubernetes and what can it do piece. While that piece is specific to Kubernetes, many of the principles can be generally applied.
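The self-healing and scaling behavior boils down to a reconciliation loop: compare the desired state with what is actually running and converge. The sketch below is a deliberately simplified Python illustration of that idea, not any real orchestrator's API.

```python
# Toy reconciliation loop: converge actual containers toward desired count.
def reconcile(desired_replicas, running):
    """Return the container list after one reconciliation pass."""
    running = list(running)
    while len(running) < desired_replicas:
        # Self-healing / scale-up: schedule a replacement container.
        running.append("web-%d" % (len(running) + 1))
    while len(running) > desired_replicas:
        # Scale-down: remove the excess.
        running.pop()
    return running

# Two containers crashed, but the desired state says we need four.
state = reconcile(desired_replicas=4, running=["web-1", "web-2"])
```

An orchestrator like Kubernetes runs loops like this continuously, which is why you declare *what* you want rather than scripting *how* to recover from each failure.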

There are a variety of options for cloud native container scheduling and orchestration, but the three major players are:

  • Kubernetes- Kubernetes, or K8s, is the most popular container orchestration solution. K8s started at Google and was released as an open-source tool in 2014.
  • Docker Swarm- Docker is the leading container runtime, so it makes sense that their orchestration tool would be a popular and trusted option. Swarms consist of multiple Docker hosts that operate in a cluster as managers or workers.
  • Mesos- Mesos is Apache’s orchestration tool that abstracts away infrastructure resources and enables the creation of highly scalable distributed systems. Architecturally, Mesos is an abstraction layer that orchestrates CPU, memory and storage resources of physical or virtual machines.

Coordination & Service Discovery

A cloud native application can consist of thousands of stateless microservices. Their lifecycle is unpredictable, since microservices applications are built with auto-scaling and self-healing in mind. Coordination and service discovery tools enable the dynamic configuration and discovery of such microservices that are otherwise hard to handle. Using these tools can reduce redeployments and help with the implementation of stateless services.

The reason these services are useful for cloud native apps is that each discrete microservice may have a different impact on application performance and resource consumption. This requires services that are “aware” of one another. With coordination in place, failover, load balancing, and performance issues can also be addressed. For a practical example of the need for coordination and service discovery, check out the work Andy Redko did with Apache Zookeeper.
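The register/discover cycle at the heart of these tools can be sketched in a few lines. The `ToyRegistry` class and addresses below are invented for illustration; real systems like ZooKeeper, Eureka, and etcd add health checks, leases, watches, and replication.

```python
# Toy service registry: instances register, clients discover live instances.
class ToyRegistry:
    def __init__(self):
        self.services = {}  # service name -> set of "host:port" addresses

    def register(self, name, address):
        # Called by an instance when it starts up.
        self.services.setdefault(name, set()).add(address)

    def deregister(self, name, address):
        # Called when an instance shuts down or fails a health check.
        self.services.get(name, set()).discard(address)

    def discover(self, name):
        instances = self.services.get(name, set())
        if not instances:
            raise LookupError("no healthy instances of %s" % name)
        return sorted(instances)

registry = ToyRegistry()
registry.register("payments", "10.0.0.5:8080")
registry.register("payments", "10.0.0.6:8080")
registry.deregister("payments", "10.0.0.5:8080")  # one instance died
addresses = registry.discover("payments")
```

Because callers always ask the registry instead of hardcoding addresses, instances can appear and disappear without any redeployment of their consumers.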

Some of the most popular coordination & service discovery tools are:

  • ZooKeeper- ZooKeeper is a tool from Apache. It is meant to maintain centralized configuration, naming, and grouping of your services. As a distributed system management tool, it is easily scalable and provides redundancy to your system.
  • Eureka- Netflix is a big contributor to the world of DevOps tools and cloud native computing. Eureka is an open-source service registry they developed to handle large-scale load balancing and failover.
  • etcd- etcd is an open-source key-value store that aims to provide cloud native services with configuration and service information in a simple, secure, fast, and reliable fashion.

Remote Procedure Call

Remote Procedure Calls (RPCs) allow applications to call a procedure that is not in the same address space. RPCs are a core part of cloud native computing because they allow discrete microservices to communicate without residing on the same system.

RPCs are by no means new. Sun RPC, a.k.a. Open Network Computing Remote Procedure Call (ONC RPC), was originally developed in the 1980s. Similarly, Windows systems have been using RPC for years. The reason RPCs are so attractive in the cloud native world is that they enable language-independent and low-latency client/server communication. For more on the benefits of RPC, and gRPC in particular, check out The New Stack’s Google’s gRPC: A Lean and Mean Communication Protocol for Microservices.
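Stripped to its essence, an RPC framework serializes a method name and arguments, ships them to the remote side, dispatches to the real function, and serializes the result back. The Python sketch below shows that round trip in-process; frameworks like gRPC add schemas, HTTP/2 transport, and code generation on top. All procedure names here are invented.

```python
# Bare-bones sketch of the RPC round trip: serialize, dispatch, respond.
import json

# The "server" side: procedures available to remote callers.
PROCEDURES = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def handle_request(raw):
    # Deserialize the call, dispatch to the procedure, serialize the result.
    request = json.loads(raw)
    result = PROCEDURES[request["method"]](*request["params"])
    return json.dumps({"result": result})

def rpc_call(method, *params):
    # The "client" side: in a real system this payload crosses the network.
    payload = json.dumps({"method": method, "params": list(params)})
    return json.loads(handle_request(payload))["result"]

total = rpc_call("add", 2, 3)
```

Because the wire format (JSON here, Protocol Buffers in gRPC) is language-neutral, the client and server could be written in entirely different languages, which is the property that makes RPC a natural fit for polyglot microservices.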

Popular RPC tools include:

  • gRPC- gRPC is an RPC framework designed to run in almost any environment. It is one of the most popular development tools for communication between microservices. gRPC is used by major organizations like Netflix, Cisco, and Square.
  • Dubbo- This Apache offering is a high-performance Java RPC framework. In addition to RPC functionality, it offers intelligent load balancing, an extensible plugin design, automatic service registration, and runtime traffic routing.
  • Tars- This RPC framework is based on the name service and the Tars protocol. Today, it supports the Go, C++, NodeJs, PHP, and Java programming languages.

Service proxy

If you’ve ever had to troubleshoot an application, you’re likely familiar with the difficulty of drilling down to the core of the issue. Is it the app? Is it the network? Such ambiguity may even lead to finger-pointing and make solving problems take longer. One of the core benefits of using a service proxy is being able to decouple network and application problems easily.

Drilling down a bit further, service proxies handle communication between instances in service mesh topologies. Not only do service proxies facilitate network communication, they also improve observability and make performance tweaks easier.
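A toy round-robin proxy makes the decoupling tangible: callers talk to the proxy, the proxy picks a backend and counts traffic, and the application code never knows which instance answered. Everything below (the `ToyProxy` class, the pretend backends) is illustrative only; real proxies like Envoy operate at the network layer.

```python
# Toy round-robin service proxy with a built-in request counter.
import itertools

class ToyProxy:
    def __init__(self, backends):
        self.backends = backends
        self._cycle = itertools.cycle(range(len(backends)))
        self.requests_seen = 0  # observability: the proxy sees all traffic

    def forward(self, request):
        self.requests_seen += 1
        backend = self.backends[next(self._cycle)]
        return backend(request)  # hand the request to the chosen instance

# Two pretend backend instances of the same service.
backends = [
    lambda req: ("instance-1", req),
    lambda req: ("instance-2", req),
]
proxy = ToyProxy(backends)
first = proxy.forward("GET /health")
second = proxy.forward("GET /health")
```

If a request fails, you can now check the proxy's counters and routing decisions first, separating "the network misrouted it" from "the app mishandled it."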

Some of the most popular cloud native service proxies are:

  • Envoy- Envoy was created by the Lyft team to serve as a distributed proxy, communication bus, and data plane. The project took inspiration from popular open-source projects like NGINX and HAProxy. It is now used by many respected names in the world of cloud native, including Salesforce, Pinterest, Airbnb, and Uber. (Were you a little surprised to see Uber using Lyft technology? Score one for the benefits of open source!)
  • Skipper- Open-source HTTP router and reverse proxy Skipper is designed to handle large amounts of HTTP routes dynamically. It can also serve as a Kubernetes Ingress controller.
  • NGINX- Much more than just a web server, NGINX provides robust proxy functionality and load balancing. It can also serve as an API gateway.

API gateway

APIs are an important part of enabling the scalability of cloud native apps. API gateways make API implementation easier by providing a single point of entry for API calls. This makes load balancing, security, and monitoring more effective and simpler to manage.

Tyk does a good job of enumerating the benefits of API gateways to microservices in this piece. The short version is that API gateways streamline interservice communication, minimize attack surface, and enable service mocking to make testing easier.
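The single-entry-point idea can be sketched as a function that authenticates once and then routes by path prefix to the right microservice. The routes, token, and services below are invented for illustration; a real gateway would also handle rate limiting, TLS termination, and transformations.

```python
# Toy API gateway: one entry point, centralized auth, prefix-based routing.
VALID_TOKENS = {"secret-token"}

# Pretend downstream microservices, keyed by path prefix.
services = {
    "/users": lambda path: {"service": "users", "path": path},
    "/orders": lambda path: {"service": "orders", "path": path},
}

def gateway(path, token):
    # Security is enforced in one place instead of in every service.
    if token not in VALID_TOKENS:
        return {"status": 401}
    for prefix, service in services.items():
        if path.startswith(prefix):
            return {"status": 200, "body": service(path)}
    return {"status": 404}

response = gateway("/orders/17", "secret-token")
```

Because authentication lives only in the gateway, the downstream services never see unauthenticated traffic, which is the attack-surface reduction mentioned above.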

There is a bit of overlap between this category and the service proxy category. Apart from NGINX and Envoy, two of the most popular API gateways for cloud native apps are:

  • Kong- Boasting latencies of less than a millisecond in a world where speed is of the utmost importance, Kong is a very popular API gateway for connecting microservices.
  • Tyk- Trusted by giants like CapitalOne, Cisco, and Starbucks, Tyk is a popular API Gateway that boasts impressive performance stats. According to their site, on a simple virtual machine with 2 CPU cores and 2 GB of RAM, their API gateway can handle approximately 2,000 requests per second while keeping latency under 85 milliseconds.

Service Mesh

As cloud native services scale and the architecture becomes complex, a service mesh can help you handle network-based communication of your system more easily. This Red Hat article provides a good crash course on the topic, but in a nutshell: a service mesh is a network of proxies that takes the burden of communication off of microservices. These proxies are often referred to as “sidecars” because they are effectively attached to, but still discrete from, the microservices themselves.
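The "sidecar takes the communication burden off the microservice" idea can be sketched as a wrapper that handles retries and collects call metrics so the service code stays free of network concerns. The `ToySidecar` class and the flaky service below are invented for illustration; real sidecars like Envoy do this at the network level, transparently to the application.

```python
# Toy sidecar: transparent retries and call counting around a service call.
class ToySidecar:
    def __init__(self, call, max_retries=3):
        self.call = call          # the actual call to another service
        self.max_retries = max_retries
        self.attempts = 0         # observability data the mesh could export

    def request(self, payload):
        last_error = None
        for _ in range(self.max_retries):
            self.attempts += 1
            try:
                return self.call(payload)
            except ConnectionError as err:
                last_error = err  # transient failure: retry transparently
        raise last_error

# A flaky downstream service that fails twice, then succeeds.
failures = {"left": 2}
def flaky_service(payload):
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("temporarily unreachable")
    return "ok: " + payload

sidecar = ToySidecar(flaky_service)
result = sidecar.request("ping")
```

The caller sees one successful request; the sidecar quietly absorbed two transient failures and recorded three attempts, exactly the kind of telemetry a mesh control plane like Istio aggregates.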

Some of the most popular service mesh tools are:

  • Zuul- Another Netflix contribution to the cloud native ecosystem, Zuul is a gateway that provides functionality including monitoring, dynamic routing, and security.
  • Istio- One of the most popular service mesh projects on GitHub, Istio enables secure communication between services and provides the ability to apply policies to traffic. It also includes robust tracing and logging capabilities for all services.

 

Runtime

Even in the cloud, there’s always someone else’s computer behind the scenes. That means hardware resources like storage, compute, and networking must be somehow provided to the cloud native application. The basics are still the same, but in a cloud native world, you need to abstract away from the underlying hardware. Manual processes and tools that are tied to a given server or physical location become impractical with cloud native.

These tools and platforms are geared towards the distributed, hyper-scale, high-performance demands of cloud native.

Cloud native storage

There are several big players in the cloud native storage space, including Amazon EBS (Elastic Block Store), Azure Disk Storage, and Dell EMC. However, we’d like to focus on open-source solutions, since those are preferable in many cases and the CNCF (rightfully) tends to have an open-source bias.

Cloud native architecture places unique demands on the storage of data. Solutions must be scalable, resilient, and distributed. The ability to dynamically scale and automatically recover storage is a must at scale. Similarly, cloud native storage solutions should avoid platform lock-in and enable storage across multiple cloud service providers.

Here are a few of the top open-source cloud native storage options:

  • Rook- Rook is a storage solution for K8s. It adds a layer of resilience, automation, and scalability to cloud native storage by focusing on self-healing, self-scaling, and self-management.
  • OpenEBS- OpenEBS is one of the biggest CAS (Container Attached Storage) solutions available. It is K8s native and runs completely in userspace. Two of the biggest benefits of OpenEBS are avoiding cloud platform lock-in and its quick & easy install process.
  • StorageOS- StorageOS is a platform that delivers persistent storage built to run on any infrastructure. With a clear emphasis on extensibility and usability, StorageOS boasts compatibility with “any orchestration”, “any application”, and “any infrastructure”.

Container runtime

“Container runtime” is an oft-misunderstood term. A container runtime is the software that executes containers and manages container images on a node. As Daniel Walsh put it in his post on Opensource.com, a container runtime manages a container’s lifecycle by utilizing kernel cgroups and namespaces. It’s an overloaded term that is often confused with Docker itself.

Given that, you may be expecting this section to be just about Docker. After all, Docker was once a monolithic application that, among many other things, also managed the container runtime. At first glance, Docker abstracts away kernel resources and builds logically separated containers on top of them. What else do we need? Alena Varkockova did a good job answering this question in her Medium piece. In short, as the container market matured, the Docker application was separated into distinct components that now provide specific functionality (e.g. containerd).

So, beyond Docker and containerd, here are a few more container runtime projects worth noting:

  • Firecracker- Traditionally, the tradeoff between containers and virtual machines has been viewed as one between being lightweight & fast (containers) or logically isolated & theoretically more secure (virtual machines). Firecracker aims to eliminate that tradeoff by offering the workload isolation of virtual machines and the speed of containers in a single solution.
  • gVisor- gVisor helps layer security into Docker, containerd, and K8s. It helps serve as a lightweight container security tool for cloud native apps.

Cloud native network

Automation, scalability, and resilience are key aspects of cloud native networking. One of the big potential “gotchas” of cloud native networking is ensuring network resources are released and reallocated when a given container is destroyed. By planning your cloud native network strategy properly, you can avoid such problems and create a robust and scalable service.

Cloud native networking tools enable scalable and automatic allocation of network resources. This, in turn, enables orchestration and management of your network resources at scale.
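The release-and-reallocate "gotcha" is easy to see with a toy IP address pool. The `ToyIPPool` class and the subnet below are invented for illustration; real plugins (Flannel, Cilium, and other CNI implementations) manage this per node across whole clusters.

```python
# Toy IP pool: addresses must be released when a container dies,
# or restarting containers will eventually exhaust the subnet.
class ToyIPPool:
    def __init__(self):
        # A tiny pretend subnet with four usable addresses.
        self.free = ["10.1.0.%d" % n for n in range(1, 5)]
        self.assigned = {}  # container id -> address

    def allocate(self, container_id):
        if not self.free:
            raise RuntimeError("subnet exhausted: leaked addresses?")
        address = self.free.pop(0)
        self.assigned[container_id] = address
        return address

    def release(self, container_id):
        # Skipping this step on container teardown leaks the subnet dry.
        self.free.append(self.assigned.pop(container_id))

pool = ToyIPPool()
addr = pool.allocate("web-1")
pool.release("web-1")                # container destroyed, address recycled
addr_again = pool.allocate("web-2")  # the pool stays healthy
```

In an environment where containers churn constantly, automating this allocate/release cycle is exactly what cloud native networking tools take off your hands.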

A few popular cloud native network projects are:

  • Container Network Interface (CNI)- CNI is a project focused on plugins for network interfaces on Linux containers and the specifications and libraries for writing them.
  • Flannel- Flannel effectively enables subnet allocation and layer 3 networking for K8s. While this may seem simple, getting layer 3 right at scale can become complex. Flannel helps automate the process.
  • Cilium- Think about the myriad of challenges you could run into looking to use a traditional firewall like iptables with microservices. The visibility simply isn’t there. Cilium is an API-aware network security tool for Docker, K8s, and other container frameworks that helps address this issue.

 

Provisioning

Manual infrastructure provisioning doesn’t scale well. To effectively scale out your application, the underlying infrastructure must be defined in configuration files that reside in a version control system. Doing so enables you to treat your infrastructure as code and automate its provisioning. As a result, you get an auto-scaling cloud native application.

Each of these subcategories dives deep into enabling provisioning of containers and microservices in an automated and scalable fashion.

Automation & configuration management

Tribal knowledge and configuration details that only a few chosen team members know do not scale. For this reason, configuration management is a key DevOps process. By implementing IT operations practices like infrastructure as code, DevOps teams benefit from reproducibility, uniformity, and increased visibility in change management.

Cloud native automation and configuration tools enable configuration management at scale. Reproducible configuration files make it easy to deploy new servers with a predefined ruleset or to modify existing infrastructure at scale. Additionally, infrastructure as code helps to minimize differences between development and production environments, making your code run as expected once in production.
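At the heart of these tools is a diff between the desired configuration (stored in version control) and the actual state of the environment, similar in spirit to a Terraform dry-run plan. The sketch below is a toy Python illustration with invented keys and values, not any real tool's format.

```python
# Toy "plan" step: diff desired config against actual state.
def plan(desired, actual):
    """Compute the changes needed to make actual match desired."""
    changes = []
    for key, value in desired.items():
        if key not in actual:
            changes.append(("create", key, value))
        elif actual[key] != value:
            changes.append(("update", key, value))
    for key in actual:
        if key not in desired:
            # Anything not declared in code gets removed: no drift.
            changes.append(("delete", key))
    return changes

desired = {"web_servers": 3, "region": "eu-west"}   # lives in git
actual = {"web_servers": 2, "debug_box": 1}         # what's really running
changes = plan(desired, actual)
```

Because the desired state is the single source of truth, a manually added `debug_box` shows up as a deletion, which is how infrastructure as code keeps environments from drifting apart.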

Some of the most popular configuration management tools include:

  • Ansible- Ansible greatly simplifies configuration management and significantly reduces infrastructure complexity. Ansible Playbooks make otherwise challenging administrative tasks easier to roll out at scale.
  • Puppet- Puppet is a holistic infrastructure automation and delivery tool. It enables automation of server configuration tasks, and Project Nebula is specifically targeted to cloud native apps.
  • Terraform- Terraform is an infrastructure as code tool that allows you to create declarative configuration files mapping services from multiple cloud vendors, and to automate multi-cloud infrastructure provisioning.

Container registry

Container registries are simply repositories for container images. A centralized and authoritative repository of container images helps build cloud native applications more rapidly. It allows you to quickly pull a known “good” image and use it as a building block in your application development. For many, a public registry like Docker Hub is sufficient. However, there are times when a more robust solution is required.

Companies use a private container registry when they want more granular control over where their images are hosted. In simple terms, think of container registries as an on-prem or self-hosted version of Docker Hub.

Two popular container registries include:

  • Docker Registry- Docker recommends using Docker Registry if you want tight control over where your images are, you want control over your image pipeline, and you want to integrate image storage into in-house workflows.
  • Dragonfly- Dragonfly is an open-source file distribution system originally developed by Alibaba. It is built to be used with containers and K8s.

Security and compliance

“Security is everyone’s responsibility” is a common refrain in the world of IT, and this is particularly true when it comes to cloud native application development. Since cloud native often means “everything” resides in the public cloud, there is plenty of room for error that could prove costly. With the ever-present threat of data breaches and the increasing importance of compliance to regulations like GDPR (General Data Protection Regulation), maintaining a strong security posture and adhering to regulations is more important than ever.

There are a variety of apps focused on enhancing the security of cloud native apps and making the process of integrating security easier. Here are some of the most useful:

  • InSpec- InSpec is a compliance solution from Chef. InSpec enables “compliance as code” by transforming security and compliance requirements into tests that can be automated.
  • JetStack Cert-Manager- Certificate management can be complex even for smaller web services. At scale, it can become a real challenge. Cert-Manager is a certificate management controller for K8s that keeps certs up to date and attempts automatic renewal before expiration.
  • Clair- Vulnerability scanning is a big part of keeping web services secure, whether they are cloud native or not. Clair is a tool for static vulnerability analysis of Docker and appc containers.

Key management

The principles of least privilege and zero trust are popular approaches to security that many infosec professionals tout. A single compromised key can lead to a catastrophic breach, so it is no wonder that securing encryption keys is fundamental to sound information security and identity management. In a cloud native world, identity and access management tools should scale easily.

Here are some popular key and identity management solutions for cloud native projects:

  • KeyCloak- KeyCloak is an Identity and Access Management tool that allows you to secure apps with minimal code. As opposed to authenticating directly against a given app, users authenticate using KeyCloak. KeyCloak supports Kerberos, social login, user federation, SAML 2.0, and more.
  • ORY- ORY Hydra is an open-source Access Management tool that enables OAuth 2.0 and OpenID Connect. ORY Keto enables the creation of access management policies with granular rules based on specific attributes.

 

Observability and Analysis

Feedback loops are vital to continuous improvement. If you are not monitoring and analyzing each aspect of your application, you may miss out on key areas of improvement until they impact users. The right tool can help you detect problems early on and build more robust applications. Think about it this way: if there is a memory leak, how soon would you want to detect it?  

Additionally, DevOps tools allow you to monitor performance and trigger actions based on predefined conditions. This makes your system self-aware and able to react to sudden changes more rapidly. If a given container is in poor health, proper monitoring can prevent significant performance impacts by quickly detecting and replacing broken containers.
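The "trigger actions on predefined conditions" loop can be sketched in a few lines: check each container's health and fire an automated response (here, a restart) when the condition is met. Container names and health data below are invented; real monitoring tools would pull this from health-check endpoints or the orchestrator's API.

```python
# Toy monitoring pass: restart any container whose health check fails.
def monitor(containers, restart):
    """Run one monitoring pass; return the actions that were triggered."""
    actions = []
    for name, healthy in containers.items():
        if not healthy:
            restart(name)  # predefined automated response
            actions.append(("restarted", name))
    return actions

restarted = []
containers = {"api-1": True, "api-2": False, "worker-1": True}
actions = monitor(containers, restarted.append)
```

A broken container is replaced on the very pass that detects it, long before a human would have noticed the degradation.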

Some of the more popular observability and analysis (or as I like to call it: monitoring) tools for cloud native are:

  • Nagios- Nagios has long been a major player in the on-prem monitoring world and has now effectively made the shift to cloud native. Nagios allows for centralized logging, alerting, monitoring, and notifications based on the data captured from your containers, K8s, and infrastructure.
  • Zabbix- Zabbix is a highly extensible open-source monitoring solution. It can monitor cloud services, applications, network resources, servers, and more. With over 300,000 installations, it is now clear that many organizations trust Zabbix, which has successfully rolled over into the world of cloud native and DevOps.

 

Closing thoughts

That was our walk through the cloud native landscape as it relates to the cloud native stack. In addition to what we covered here, the CNCF also calls out a variety of cloud native platforms to choose from to run your cloud native apps, a list of Kubernetes Certified Service Providers and Training Partners, and serverless technologies. However, what we covered here is the focal point of cloud native: the cloud native stack. It is worth noting that you won’t necessarily need a tool from each category to deploy cloud native applications effectively, but as your business grows the right technology becomes increasingly important. If you are interested in learning more about DevOps and cloud native technology, subscribe to our blog or contact us today. We are passionate about DevOps and the democratization of high-performance computing, and we would love to help you identify the right solution for your workloads.
