Chapter 4: How Does DevOps “Work”?
Like all cultures, DevOps incorporates many variations on the theme. However, most observers would agree that the following capabilities are common to virtually all DevOps cultures: collaboration, automation, continuous integration, continuous delivery, continuous testing, continuous monitoring, and rapid remediation.
Instead of pointing fingers at each other, development and IT operations work together (no, really). Although the disconnect between these two groups was the impetus for DevOps, the need for collaboration extends far beyond the IT organization to everyone with a stake in the delivery of software, including test, product management, and executives:
“The foundation of DevOps success is how well teams and individuals collaborate across the enterprise to get things done more rapidly, efficiently and effectively.”
—Tony Bradley, “Scaling Collaboration in DevOps,” DevOps.com
DevOps relies heavily on automation—and that means you need tools. Tools you build. Tools you buy. Open source tools. Proprietary tools. And those tools are not just scattered around the lab willy-nilly: DevOps relies on toolchains to automate large parts of the end-to-end software development and deployment process.
Caveat: Because DevOps tools are so amazingly awesome, there’s a tendency to see DevOps as just a collection of tools. While it’s true that DevOps relies on tools, DevOps is much more than that.
You usually find continuous integration in DevOps cultures because DevOps emerged from Agile culture, and continuous integration is a fundamental tenet of the Agile approach:
“A cornerstone of DevOps is continuous integration (CI), a technique designed and named by Grady Booch that continually merges source code updates from all developers on a team into a shared mainline. This continual merging prevents a developer’s local copy of a software project from drifting too far afield as new code is added by others, avoiding catastrophic merge conflicts.”
—Aaron Cois, “Continuous Integration in DevOps,” DevOps blog, Software Engineering Institute, Carnegie Mellon
The continuous integration principle of agile development has a cultural implication for the development group. Forcing developers to integrate their work with other developers’ work frequently—at least daily—exposes integration issues and conflicts much earlier than is the case with waterfall development. However, to achieve this benefit, developers have to communicate with each other much more frequently—a process that runs counter to the image of the solitary genius coder working for weeks or months on a module before she is “ready” to send it out into the world. That seed of open, frequent communication blooms in DevOps.
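The merge step at the heart of continuous integration can be sketched in a few lines. The function below is an illustrative simplification of a three-way merge (all names are invented for this example, not taken from any real CI tool): a change conflicts only when both the mainline and the change have altered the same file, in different ways, since they shared a common base. Integrating small changes daily keeps each developer's base close to the mainline, which is why conflicts stay rare.

```python
def integrate(mainline, base, edits):
    """Three-way merge of one developer's change set into the shared mainline.

    mainline, base, edits: dicts mapping file name -> contents.
    `base` is the mainline snapshot the developer branched from.
    Returns (merged_mainline, conflicts).
    """
    merged = dict(mainline)
    conflicts = []
    for path, new_text in edits.items():
        old_text = base.get(path)
        main_text = mainline.get(path)
        if main_text == old_text or main_text == new_text:
            merged[path] = new_text   # no one else changed it, or both agree
        else:
            conflicts.append(path)    # both sides changed it differently
    return merged, conflicts
```

The longer a developer waits to integrate, the more `mainline` drifts from `base`, and the more files land in `conflicts`; merging daily keeps that list short.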
The testing piece of DevOps is easy to overlook—until you get burned. As Gartner puts it, “Given the rising cost and impact of software failures, you can’t afford to unleash a release that could disrupt the existing user experience or introduce new features that expose the organization to new security, reliability, or compliance risks.” While continuous integration and delivery get the lion’s share of the coverage, continuous testing is quietly finding its place as a critical piece of DevOps.
Continuous testing is not just a QA function; in fact, it starts in the development environment. The days are over when developers could simply throw the code over the wall to QA and say, “Have at it.” In a DevOps environment, quality is everyone’s job. Developers build quality into the code and provide test data sets. QA engineers configure automation test cases and the testing environment.
On the QA side, the big need is speed. After all, if the QA cycle takes days or weeks, you’re right back into a long, drawn-out, waterfall-style schedule. Test engineers meet the challenge of quick turnaround not only by automating much of the test process but also by redefining test methodologies:
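The division of labor described above—developers supplying test data alongside the code, QA engineers wiring it into automated cases—can be sketched as follows. The function under test and the data set are hypothetical stand-ins invented for this example:

```python
def normalize_email(raw):
    """The unit under test (a stand-in for real application code)."""
    return raw.strip().lower()

# Test data set provided by the developers with the change.
CASES = [
    ("  Alice@Example.COM ", "alice@example.com"),
    ("bob@example.com", "bob@example.com"),
]

def run_suite():
    """Automated case the QA team configures to run on every build.

    Returns a list of (input, expected, actual) failures; an empty
    list means the build can proceed.
    """
    return [(raw, want, normalize_email(raw))
            for raw, want in CASES
            if normalize_email(raw) != want]
```

Because the suite runs unattended in minutes rather than days, it keeps the QA cycle fast enough for continuous delivery.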
“Continuous testing creates a central system of decision that helps you assess the business risk each application presents to your organization. Applied consistently, it guides development teams to meet business expectations and provides managers visibility to make informed trade-off decisions in order to optimize the business value of a release candidate.”
—Continuous Testing for IT Leaders, Parasoft
Although it may come as a surprise, the operations function has an important role to play in testing and QA. Operations can ensure that monitoring tools are in place and test environments are properly configured. They can participate in functional, load, stress, and leak tests and offer analysis based on their experience with similar applications running in production.
The payoff from continuous testing is well worth the effort. The test function in a DevOps environment helps developers to balance quality and speed. Using automated tools reduces the cost of testing and allows test engineers to leverage their time more effectively. Most important, continuous testing shortens test cycles by allowing integration testing earlier in the process.
Continuous testing also eliminates testing bottlenecks through virtualized dependent services, and it simplifies the creation of virtualized test environments that can be easily deployed, shared, and updated as systems change. These capabilities reduce the cost of provisioning and maintaining test environments.
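A virtualized dependent service is, at its simplest, a test double: a lightweight stand-in that answers like the real downstream system so integration tests can run without waiting on it. The sketch below uses an imaginary “inventory service” (all class and function names are invented for illustration):

```python
class VirtualInventoryService:
    """Test double for a downstream dependency: canned data, recorded traffic."""

    def __init__(self, stock):
        self.stock = dict(stock)   # canned responses for this test run
        self.requests = []         # recorded calls, for later inspection

    def get_quantity(self, sku):
        self.requests.append(sku)
        return self.stock.get(sku, 0)

def can_fulfill(order, inventory):
    """Code under test: checks an order against the inventory service."""
    return all(inventory.get_quantity(sku) >= qty
               for sku, qty in order.items())
```

Because the double is just data, every test run can deploy, share, and update its own copy instantly—exactly the bottleneck removal the text describes.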
The team at Amazon Web Services defines continuous delivery as a DevOps “software development practice where code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has passed through a standardized test process.”
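The flow in that definition can be sketched as a staged pipeline: every change runs through build and test stages in order, and only a change that passes the whole standardized process becomes a deployment-ready artifact. Stage names and checks below are illustrative, not a real CD product:

```python
def pipeline(change, stages):
    """Run a change through ordered stages; stop at the first failure.

    Each stage is a (name, check) pair where check(change) returns
    True or False. Returns (deployable, log).
    """
    log = []
    for name, check in stages:
        ok = check(change)
        log.append((name, "passed" if ok else "failed"))
        if not ok:
            return False, log          # not deployment-ready
    return True, log                   # artifact ready for release

# Hypothetical stages; a real pipeline would invoke compilers,
# test runners, and deployment scripts here.
STAGES = [
    ("build", lambda c: c.get("compiles", False)),
    ("unit tests", lambda c: c.get("tests_pass", False)),
    ("deploy to test env", lambda c: True),
]
```

Stopping at the first failed stage is the design point: a change that cannot build never wastes time in the test environment, and developers always know exactly which gate a change failed.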
The actual release frequency varies greatly depending on the company’s legacy and goals. High-performing organizations using DevOps achieve multiple deployments per day, compared to medium performers, who release between once per week and once per month.
Exactly what gets released varies as well. In some organizations, QA and operations triage potential releases: many go directly to users, some go back to development, and a few simply are not deployed at all. Other companies push everything that comes from developers out to users and count on real-time monitoring and rapid remediation to minimize the impact of the rare failure. And it’s important to note that because each update is smaller, the chance of any one of them causing a failure is significantly reduced.
Given the sheer number of releases in a continuous delivery shop, there’s no way to implement the kind of rigorous pre-release testing typically required in waterfall development approaches. In a DevOps environment, failures must be found and fixed in real time. How do you do that? A big part is continuous monitoring.
With continuous monitoring, teams measure the performance and availability of software to improve stability. Continuous monitoring helps identify root causes of issues quickly to proactively prevent outages and minimize user issues. Some monitoring experts even advocate that the definition of a service must include monitoring—they see it as integral to service delivery.
Like testing, monitoring starts in development. The same tools that monitor the production environment can be employed in development to spot performance problems before they hit production.
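At its core, application performance monitoring means measuring every call and flagging the ones that blow their latency budget. The wrapper below is a deliberately minimal sketch of that idea (real APM tools instrument code via agents; the names and threshold here are invented), and the same wrapper can run in development to catch slow code before it ships:

```python
import time

def monitored(fn, samples, budget_seconds=0.5):
    """Return a wrapper that records (duration, over_budget) for each call."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        duration = time.perf_counter() - start
        samples.append((duration, duration > budget_seconds))
        return result
    return wrapper
```

A dashboard built on `samples` answers the availability and stability questions continuous monitoring exists to answer: how slow are we, how often, and is it getting worse.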
Two kinds of monitoring are required for DevOps: server monitoring and application performance monitoring. Monitoring discussions quickly get down to tools discussions, because there is no effective monitoring without the proper tools. For a list of DevOps tools (and more DevOps-related content), visit the New Relic DevOps Hub.