
The Anatomy of DevOps Process Flow – Close the Loop to Succeed

September 11th, 2019

Business is a holistic system. Its core components are tightly coupled, and their interactions create the desired outcome and meet a specific market need. This value creation chain is a process that has to be well-defined, properly executed, and carefully measured to achieve repeatable value delivery. When your business has well-managed processes in place, you can start implementing continual improvements in service quality, delivery time, or production costs. It is clear as day that mature processes are mandatory for steady business growth: they are what gives the whole value chain clarity.

DevOps business processes are designed to deliver new features, bug fixes, and system enhancements to production as quickly as possible. Similar to Lean and Agile practices, the DevOps process flow seeks to eliminate wasteful practices and increase value-producing activities. The promise of building quality software faster is so appealing that DevOps is being adopted rapidly both in startups and in old-school corporations. According to the State of DevOps report by Puppet, companies that have adopted the DevOps methodology benefit from 46x more frequent code deployments, a 440x shorter commit-to-release cycle, and 96x faster time to recovery. Such performance is achieved by employing a set of DevOps best practices that reduce rework and remove overhead from the software development pipeline.

While a healthy DevOps culture lays the foundation for DevOps adoption, well-defined business processes ensure its success. As I've put it earlier, DevOps is an intersection of culture, processes, and tools. So let's now examine what the stages of the DevOps process flow are and how they help businesses develop breathtaking software.

Continuous planning - the smaller, the better

Forget long-term business strategies that take months to create and often miss the mark. Continuous business planning employs the best practices of Lean and Agile methodologies to make your DevOps process flow as smooth as possible. The main idea is to plan software development in short iterations to reduce waste and create a product that your customers crave.

As Eric Ries stated in his book The Lean Startup, every new business venture must start with a defined core value statement – a promise that should drive your customers crazy, make them fall in love with you, and ensure a happy ending for everyone. Of course, at first you don't know for sure what your target market wants, so you roll up your sleeves, build a minimum viable product, and preach it to the early adopters. The goal is to test your idea in production as quickly as possible and get raw user feedback, which helps you make data-driven decisions about further development. You move in short iterations, releasing new features, bug fixes, and system enhancements. Each release is closely monitored, the data gets analyzed, and your plan is modified accordingly. This way, you are less likely to go astray and can reach product-market fit more quickly.

Continuous integration - set the stage

Back in the old days, code integration was a long and tedious process. The longer your team worked on a code build, the more painful integration became. Imagine the day when, after months of coding, a dozen developers try to integrate their code into a single piece of software. Different code branches collide, bug fixing turns into guesswork, and your project timeline stretches obscenely. By now, you have probably missed your project deadline, and your team is demoralized. That's integration hell.

Continuous integration helps you reach integration heaven by making software integration a trivial task – you no longer think much of it. How does it work? The mainline of your code lives in a version control system. Before you start working, you make a local code copy from the repository. You then make changes to the production code and the automated tests – continuous integration assumes that the lion's share of your code is covered by tests. After you finish your work, you create a local code build that gets tested automatically. If the local tests pass, you are allowed to commit your code to the mainline. Then a new code build is created on the continuous integration server, where the automated tests are rerun to detect potential artifacts caused by the developer's local environment. If these tests pass, then and only then is your work done. If things go wrong, fixing the broken build becomes the highest-priority task, which, if you have implemented DevOps processes correctly, shouldn't take more than 10 minutes. This way, your team is confident that everybody is working with the latest code build.
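
To make that workflow concrete, here is a minimal sketch of the local "build and test before you commit" gate, assuming a Python project tested with pytest and kept in Git; the branch name and commands are illustrative, not a prescription.

```python
"""A minimal pre-commit gate: build and test locally before the code
reaches the shared mainline. Assumes a Python project tested with pytest
and kept in Git; the branch name and commands are illustrative."""
import subprocess
import sys


def run(cmd):
    """Run a shell command and report whether it succeeded."""
    print("$ " + " ".join(cmd))
    return subprocess.run(cmd).returncode == 0


def main():
    # 1. Bring the local copy up to date with the mainline first.
    if not run(["git", "pull", "--rebase", "origin", "main"]):
        return 1
    # 2. Build and run the automated test suite locally.
    if not run(["python", "-m", "pytest", "-q"]):
        print("Local tests failed - fix the build before committing.")
        return 1
    # 3. Only now does the commit reach the mainline; the CI server will
    #    rerun the same tests in a clean environment to catch artifacts
    #    of the local setup.
    return 0 if run(["git", "push", "origin", "main"]) else 1


if __name__ == "__main__":
    sys.exit(main())
```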

Continuous integration augments continuous business planning and extends your DevOps process flow. You start by planning small, code in tiny batches – typically no bigger than 20 lines of code – and build on every code commit. This way, you know whether your code compiles and the initial tests pass. And when they don't, you spend much less time debugging the problem, since you have made only a handful of changes to the system. The team never gets too far from the stable code base and becomes capable of employing the next step of the DevOps process flow.

Continuous delivery - spread chunks of value daily

Ever since the Agile Manifesto, there has been a notion of developing software in small batches, but the deployment pipeline still lacked efficiency. Continuous delivery is the next stage of the DevOps process flow, and its primary purpose is to optimize the throughput of the deployment pipeline. While the deployment pipeline starts with continuous integration, there is more to it.

The main idea of continuous delivery is to get fast, automated feedback on the production readiness of your software, every time you make a code commit. Continuous delivery ensures that new features, configuration changes, bug fixes, and experiments flow through your deployment pipeline safely and quickly in a sustainable way. Comprehensive tests take time to complete, yet you still want to make low-risk releases. So how do you balance this?

Continuous delivery goes further than merely compiling and unit testing your code. Typically, you put longer-running and more expensive tests further down your deployment pipeline. As a rule of thumb, these tests are also less likely to fail. The final testing suite depends on the complexity and maturity of the system, which may require additional tests. Later stages of the deployment pipeline may include integration, load, UI, and penetration tests to prevent performance, usability, and security issues. If the system requires thorough testing, additional tests can be executed on separate machines in parallel. This way, you get continuous feedback about the quality of your code as fast as possible, without lowering the pace of software delivery.
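
Here is a rough sketch of that ordering in Python. The test suite paths and commands are placeholders rather than any particular CI product's configuration: the cheap checks run first and fail fast, while the expensive suites fan out in parallel.

```python
"""Sketch of a staged deployment pipeline: cheap checks run first and fail
fast, expensive suites run later and in parallel. The test suite paths are
placeholders, not any particular CI product's configuration."""
from concurrent.futures import ThreadPoolExecutor
import subprocess

FAST_STAGES = [
    ["python", "-m", "pytest", "tests/unit", "-q"],          # compile + unit tests
]
SLOW_STAGES = [
    ["python", "-m", "pytest", "tests/integration", "-q"],   # integration tests
    ["python", "-m", "pytest", "tests/ui", "-q"],            # UI tests
    ["python", "-m", "pytest", "tests/load", "-q"],          # load tests
]


def run(cmd):
    return subprocess.run(cmd).returncode == 0


def pipeline():
    # Fail fast on the cheap stages before paying for the expensive ones.
    if not all(run(cmd) for cmd in FAST_STAGES):
        return False
    # The expensive suites can run in parallel, e.g. on separate machines.
    with ThreadPoolExecutor(max_workers=len(SLOW_STAGES)) as pool:
        return all(pool.map(run, SLOW_STAGES))


if __name__ == "__main__":
    print("production ready" if pipeline() else "pipeline failed")
```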

Depending on the maturity of your DevOps processes, continuous delivery may extend into continuous deployment. The only difference here is that with the former you keep your mainline code ready to be released at any time, while with the latter you deploy to production automatically with the condition that all the tests have passed. This way, your DevOps process flow becomes even more agile. Continuous deployment may not be suitable for financial or mission-critical applications that require extensive testing and manual intervention. If that’s not the case for you, start your DevOps journey with continuous delivery and move to continuous deployment when your DevOps processes have matured.
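
The difference between the two is easy to express in code. Below is a tiny, hypothetical sketch – finish_pipeline() and deploy_to_production() are made-up helpers, not a real API – showing the last step of the pipeline under each model.

```python
"""Hypothetical sketch of the last pipeline step. deploy_to_production() is
a made-up stand-in for your real release mechanism, not an actual API."""


def deploy_to_production(artifact):
    print(f"deploying {artifact} ...")  # placeholder for the real rollout


def finish_pipeline(artifact, tests_passed, auto_deploy):
    if not tests_passed:
        raise RuntimeError("broken build - fixing it is now the top priority")
    if auto_deploy:
        # Continuous deployment: every green build goes straight to production.
        deploy_to_production(artifact)
    else:
        # Continuous delivery: the build is releasable; a human pulls the trigger.
        print(f"{artifact} is ready to be released at any time")
```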

Continuous operations - scale out to the world

Now we step into the world of IT operations. Here, downtime is not an option. The performance of your system cannot suffer from the ongoing releases, updates, and patches that become so frequent when you adopt the DevOps process flow. Continuous operations work hand in hand with continuous monitoring to bring peace of mind to agile businesses that are building software at scale.

Continuous operations, just like everything else in DevOps, start with a centralized version control system. Everything lives in it: your code, your database schema, your server configuration files, and anything beyond that. The main idea is to have reproducible builds of your system: given a virgin machine, you should be able to recreate your system out of the box. Having all your infrastructure changes logged also helps with compliance and auditing. Using version control for IT operations is also the first step toward abstracting away from infrastructure and treating your hardware as code.

Infrastructure as code is an IT resource management approach that defines compute, storage, and network infrastructure through source code – more precisely, configuration definition files. In conjunction with cloud computing, infrastructure as code augments the DevOps process flow by an order of magnitude. Infrastructure costs shrink due to a pay-per-use pricing model, deployment speed increases because of on-demand provisioning, and infrastructure misconfiguration risks diminish thanks to automated configuration management. Even more, infrastructure as code changes the way you think about IT operations. Instead of architecting to last, you build to fail.
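
To give a flavour of the idea, here is a toy sketch in Python. The desired state lives in a declarative definition under version control, and provision() is a hypothetical stand-in for what a real tool such as Terraform, Ansible, or Pulumi does when it reconciles that definition with the running infrastructure.

```python
"""Toy illustration of infrastructure as code: the desired state of compute,
storage, and networking lives in a declarative definition under version
control. provision() is a hypothetical stand-in for what a real tool such as
Terraform, Ansible, or Pulumi does when it reconciles the definition with the
running infrastructure."""

DESIRED_STATE = {
    "web": {"instances": 3, "size": "2 vCPU / 4 GB", "ports": [80, 443]},
    "db":  {"instances": 1, "size": "4 vCPU / 16 GB", "volume_gb": 100},
}


def provision(state):
    # A real tool would diff this definition against what is actually running
    # and create, update, or destroy resources until the two match.
    for name, spec in state.items():
        print(f"ensuring {spec['instances']} x {name} ({spec['size']})")


if __name__ == "__main__":
    provision(DESIRED_STATE)
```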

DevOps fits neatly with modern cloud-native applications, making microservices architecture the primary way to run your software at scale. As soon as you outgrow the minimum viable product phase, you break that monolithic application into loosely coupled services and run them in isolation. By doing so, you create a highly available system that is prepared to fail. If one of your services breaks down, you kill it and spin up a new one. If the load on your system increases, you merely deploy new nodes for the specific service. And if you are deploying to production daily – which you should – microservices architecture and the right DevOps tools can facilitate your deployment strategy. It becomes quicker and easier to introduce system changes gradually through canary releases or blue-green deployments.
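
For instance, a canary release sends only a small share of traffic to the new version of a service while the rest stays on the stable one. In a real system the split lives in your load balancer or service mesh; the sketch below merely illustrates the routing decision, with made-up version labels.

```python
"""Minimal sketch of a canary release decision: route a small, configurable
share of requests to the new version while the rest stay on the stable one.
The version labels are made up; in practice the split lives in your load
balancer or service mesh."""
import random


def pick_version(canary_weight=0.05):
    """Route roughly canary_weight of the traffic to the canary build."""
    return "v2-canary" if random.random() < canary_weight else "v1-stable"


if __name__ == "__main__":
    sample = [pick_version() for _ in range(10_000)]
    print(f"canary share: {sample.count('v2-canary') / len(sample):.2%}")
```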

Continuous monitoring - "with a thousand eyes, and one"

Let's assume you have already implemented the DevOps processes we have discussed so far. Software flows swiftly through your deployment pipeline, with code integration, testing, and deployment fully automated. You are releasing new software and deploying it to production at a steady pace, but now you need to make sure your new releases do not cause any performance degradation. Most bugs should have been caught by now, but some artifacts are hard or impractical to test in a staging environment. Thus, testing shifts to production and extends through continuous monitoring. So what exactly is being monitored? The data collected via continuous monitoring can be divided into primary and secondary metrics.

Primary metrics evaluate application performance as experienced by the end users. First, you may want to monitor the end-user experience directly – through network port mirroring (passively) or synthetic probes (actively) – to capture latency issues and system inconsistencies as users interact with your application. Next, you want to monitor business transactions across infrastructure tiers to make sure you are meeting your SLA. Finally, you need system reports that consist of a standard set of metrics for each application. These reports allow you to evaluate the performance of the whole system despite cross-application differences. Primary metrics matter the most, since they help you understand your system as a whole and how your customers experience it.
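
An active (synthetic) probe can be as simple as the sketch below: request an endpoint on a schedule, measure the response time, and flag anything that blows the latency budget. The URL and the budget are placeholders for your own service and SLA.

```python
"""Sketch of an active (synthetic) probe: request an endpoint, measure the
response time, and flag anything that exceeds the latency budget. The URL and
the budget are placeholders for your own service and SLA."""
import time
import urllib.request

ENDPOINT = "https://example.com/health"   # placeholder endpoint
SLA_BUDGET_MS = 500                       # placeholder latency budget


def probe(url=ENDPOINT):
    """Return the observed response time in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5) as response:
        response.read()
    return (time.perf_counter() - start) * 1000


if __name__ == "__main__":
    latency = probe()
    status = "OK" if latency <= SLA_BUDGET_MS else "SLA breach"
    print(f"{ENDPOINT}: {latency:.0f} ms ({status})")
```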

Secondary metrics assess the computational resources of the system to ensure there is enough capacity to handle the load and to identify any bottlenecks. Runtime application architecture monitoring, which is based on application discovery and dependency mapping, helps you better understand your system topology, service dependencies, and the impact of your changes. In addition, you may also want to feel the pulse of your middleware through deep-dive component monitoring. Secondary metrics are essential for managing your system and improving its topology.

These two sets of performance metrics are closely monitored to collect the data, understand it, identify trends, and eventually take data-driven actions. By continuously monitoring your users, systems, and network, you detect and contain incidents. You then respond by remediating the issues, carrying out a retrospective analysis, and applying the necessary policy changes. This, in turn, allows you to predict future threats and take preventative actions to harden your system. Ultimately, automation strengthens these continuous monitoring stages and enables complex if-then rules that make your system self-aware. This way, you can manage your system more efficiently, self-scale the underlying infrastructure, and make informed business decisions.
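
One of those if-then rules might look like the sketch below: a threshold rule reads a secondary metric (average CPU utilization across a service's nodes) and scales the service out or in. The metric source and the scale_to() hook are hypothetical stand-ins for your monitoring stack and orchestrator.

```python
"""Sketch of a simple if-then scaling rule: read a secondary metric (average
CPU utilization across a service's nodes) and scale the service out or in.
The metric source and scale_to() are hypothetical stand-ins for your
monitoring stack and orchestrator."""


def scale_to(service, replicas):
    print(f"scaling {service} to {replicas} replicas")  # placeholder action


def autoscale(service, cpu_percent, replicas,
              scale_out_at=80.0, scale_in_at=30.0,
              min_replicas=2, max_replicas=10):
    """Apply a threshold rule and return the new replica count."""
    if cpu_percent > scale_out_at and replicas < max_replicas:
        replicas += 1
        scale_to(service, replicas)
    elif cpu_percent < scale_in_at and replicas > min_replicas:
        replicas -= 1
        scale_to(service, replicas)
    return replicas


if __name__ == "__main__":
    # e.g. monitoring reports 85% average CPU across three web nodes
    autoscale("web", cpu_percent=85.0, replicas=3)
```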

Continuous monitoring closes the loop of the DevOps process flow, giving your team feedback about its development efforts. Just like everything else in DevOps, this feedback has to be taken in small sips to help you maintain pace and allow continuous planning of your development tasks.

On a final note

It may take time and effort to adopt DevOps culture and implement DevOps processes in your organization, but the benefits are worth the effort. You accelerate innovation, increase efficiency, reduce failures, and enhance the job satisfaction of your IT team. There is no magic recipe to adopting DevOps – it is a journey, and, like every other journey, it starts with a small step forward.

Mantas is a hands-on growth marketer with expertise in Linux, Ansible, Python, Git, Docker, dbt, PostgreSQL, Power BI, analytics engineering, and technical writing. With more than seven years of experience in a fast-paced Cloud Computing market, Mantas is responsible for creating and implementing data-driven growth marketing strategies concerning PPC, SEO, email, and affiliate marketing initiatives in the company. In addition to business expertise, Mantas also has hands-on experience working with cloud-native and analytics engineering technologies. He is also an expert in authoring topics like Ubuntu, Ansible, Docker, GPU computing, and other DevOps-related technologies. Mantas received his B.Sc. in Psychology from Vilnius University and resides in Siauliai, Lithuania.
