Shakedown Cruise

Shakeout Testing With Synthetics: A Test Environment Management Best Practice

Testing software is pretty easy, right?  You build it to do a thing.  Then you run a test to see if it does that thing.  From there, it's just an uneventful push-button deploy and the only unanswered question is where you're going to spend your performance bonus for finishing ahead of schedule.

Wait.  Is that not how it normally goes in the enterprise?  You mean testing software is actually sort of complicated?

Enterprise-Grade Software Necessarily Fragments the Testing Strategy

I've just described software testing reduced to its most basic and broad mandate.  This is how anyone first encounters software testing, perhaps in a university CS program or a coding bootcamp.  You write "Hello World" and then execute the world's most basic QA script without even realizing that's what you're doing.

  1. Run HelloWorld.
  2. If it prints "Hello World," then pass.
  3. Else, fail.
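Those three steps fit in a few lines of Python. This is a sketch, not anyone's real harness; to keep it self-contained, the "HelloWorld" program is simulated with an inline print rather than a separate file:

```python
import subprocess
import sys

# Step 1: run HelloWorld (simulated here with an inline print so the
# example runs on its own; a real script would invoke the actual program).
result = subprocess.run(
    [sys.executable, "-c", "print('Hello World')"],
    capture_output=True, text=True,
)

# Steps 2 and 3: pass if it printed the expected greeting, else fail.
verdict = "pass" if result.stdout.strip() == "Hello World" else "fail"
print(verdict)
```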

That's the simplest it ever gets, however.  Even a week later in your education, this will be harder.  By the time you're a seasoned veteran of the enterprise, your actual testing strategy looks nothing like this.

How could it?

Your application is dozens of people working on millions of lines of code for tens of thousands of users.  So if someone asked you "does it do what you programmed it to do," you'd start asking questions.  For which users?  In which timezone and on what hardware?  In what language and under which configuration?  You get the idea.

To address this complexity, the testing strategy becomes specialized.

  1. Developers write automated unit tests and integration tests.
  2. The QA department executes regression tests according to a script, and performs exploratory testing.
  3. The group has automated load tests, stress tests, and other performance evaluations.
  4. You've collaborated with the business on a series of sign-off or acceptance tests.
  5. The security folks do threat modeling and pen testing.

In the end, you have an awful lot of people doing an awful lot of stuff ahead of a release to see if the software not only does what you want it to, but also to see how it responds to adversity.

Sometimes the Most Obvious Part Gets Lost in the Shuffle

But somehow, in spite of all of this sophistication, application development organizations and programs can develop a blind spot.  Let me come back to that in a moment, though.  First, I want to talk about ships.

A massive ship is a notoriously hard thing to test.  Unlike your family car, it seems unlikely that someone is going to put it up on a lift and run it through some simulated paces.  And, while you could test all of its various components outside of the ship or while the ship is docked, that seems insufficient as well.

So you do something profound and, in retrospect, obvious.

You take it out on what's known as a shakedown cruise.  This involves taking an actual ship out into the actual sea with an actual crew, and seeing how it actually performs in an abbreviated simulation of its normal duties.  You test whether the ship is seaworthy by trying it out and seeing.  Does it do what it was built to do?

In the world of software, we have a similar style of test.  All of the other testing that I mentioned above is specialized, specific, and predictive of performance.  But a shakeout test is observational and designed to answer the question, "How will this software behave when asked to do what it's supposed to do?"

And it's amazing how often organizations overlook this technique.

Shakeout Testing Is Important

Shakeout testing serves some critical functions for any environment to which you deploy it.  First and foremost, it offers a sanity test.  You've just pushed a new version of your site.  Is the normal stuff working, or have you deployed something that breaks critical path functionality?  It answers the question, "do you need to do an emergency roll-back?"

But beyond that, it also helps you prioritize behavioral components of your system.  If your shakeout testing all passes, but users report intermittent problems or lower priority cosmetic defects, you can make more informed decisions about prioritizing remediation (or not).  The shakeout test, done right, tells you what's important and what isn't.

And, finally, it provides a baseline against which you can continuously evaluate performance.  Do things seem to be slowing down?  Is runtime behavior getting wonky?  Re-run your shakeout testing and see if the results look a lot different.

Shakeout testing is your window into an environment's current, actual behavior.
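A shakeout suite can start as nothing more than a list of critical-path checks run against an environment. The sketch below is a minimal, self-contained version: the check functions are stand-ins (their names and the endpoints in the comments are hypothetical), but the shape is the point: run every check, report failures, and decide whether a rollback is needed.

```python
# Each check answers one yes/no question about critical-path behavior.
# These are stand-ins; real checks would exercise live endpoints.
def can_log_in():
    return True  # e.g. POST /login with a dedicated test account

def can_check_balance():
    return True  # e.g. GET /accounts/<id>/balance and validate the shape

CHECKS = [can_log_in, can_check_balance]

def run_shakeout(checks):
    """Run every check; any failure signals a possible emergency rollback."""
    failures = [check.__name__ for check in checks if not check()]
    return {"passed": len(checks) - len(failures), "failures": failures}

report = run_shakeout(CHECKS)
print(report)
```

Because the report is structured data, the same run doubles as a baseline: store it per deployment and compare results over time.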

Shakeout Testing Is Labor-Intensive, Especially With a Sophisticated Deployment Pipeline

Now, all of this sounds great, but understand that it comes with a cost.  Of course it does -- as the saying goes, "there is no free lunch."

Shakeout testing is generally labor intensive, especially if you're going to be comprehensive about it.  Imagine it for even a relatively simple and straightforward scenario like managing a bank account.  Sure, you need to know if you can log in, check your balance, and such.  But you're probably going to need to check this across all different sorts of bank accounts, each of which might have different features or workflows.

It quickly goes from needing to log in and poke around with an account to needing dozens of people to log in and poke around with their accounts.  Oh, and in an environment like prod, you probably want to do as much of this in parallel as possible, so maybe that's hundreds or even thousands of man-hours.

This becomes time-consuming and expensive, with a lot of potential ROI for making it more efficient.

Low-Code, No-Code, and Synthetics: Helping Yourself

As detailed in the article above, a natural next step is to automate the shakeout testing.  In fact, that's pretty much table stakes for implementing the practice these days.  The standard way to do this would involve writing a bunch of scripts or application code to put your system through the paces.

This is certainly an improvement.  You go from the impractical situation of needing an army of data entry people for each shakeout test run to needing a platoon of scripters who can work prior to deployment.  This makes the effort more effective and more affordable.

But there's still a lot of cost associated with this approach.  As you may have noticed, it isn't cheap to pay people to write and maintain code.

This is where low-code/no-code synthetics solutions come into play.  It is actually possible to automate health checks for your system's underlying components in a way that eliminates the need for a lot of your shakeout testing's end-to-end scripting.

You can have your sanity checks and your fit-for-purpose tests in any environment without brute-force labor or brittle automation.

Shakeout Testing Has a Maturity Spectrum

If you don't yet do any shakeout testing, then you should start.  If you haven't yet automated it, then you should automate it.  And if you haven't yet moved away from code-intensive approaches to automation, you should do that.

But wherever you are on this spectrum, you should actively seek to move along it.

It is critically important to have an entire arsenal of tests that you run against your software as you develop and deploy it.  It's irresponsible not to have these be both specialized and extensive.  But as you do this, it's all too easy to lose track of the most basic kind of testing that I mentioned in the lead-in to the post.  Does the software do what we built it to do?  The more frequently and efficiently you can answer that, the better.

Contributor: Erik Dietrich

Erik Dietrich is a veteran of the software world and has occupied just about every position in it: developer, architect, manager, CIO, and, eventually, independent management and strategy consultant.  This breadth of experience has allowed him to speak to all industry personas and to write several books and countless blog posts on dozens of sites.

OMG DevOps at Scale

The Keys to DevOps at Scale

"An organization that’s operating at scale can grow to meet greater demand without too much hassle."

When it comes to DevOps, it's important to know where organizations generally fall short, but every organization is different. We have to identify where there's waste and what inefficiencies prevent you from delivering software rapidly, consistently, and securely. In this post, we'll cover some keys to DevOps at scale so that you can make your DevOps initiative work in a big organization.

Set the Foundation to Create a Great Culture

People are at the heart of every DevOps initiative. Making sure they're effectively communicating is the first key to scaled DevOps.

In big organizations, people are used to delivering software in certain ways. These people aren't known for changing process and technologies often. That's because as a company grows, coordination and communication get more complicated.

A key thing then is to change how people work together to deliver software. And I'm not just talking about developers and operations. Marketing, product owners, managers, testing, and especially senior management needs to understand what DevOps means and how their work will be impacted. These people must be engaged in the DevOps journey.

You can start by running workshops to discover waste and inefficiencies. Then you'd define the initial action items for the first sprint of the many iterations you'll need to increase efficiency. AWS developed a Cloud Adoption Framework (CAF) to help organizations get on board with the cloud. I happen to find the CAF helpful for organizations that also have to adopt DevOps.

Your team should make the effort to agree on how to better work together. They don't need to be best friends. But when they know each other and understand the needs of other teams, then they can find a balance that works for everyone.

Laying the foundation for people to collaborate is the hard part. Luckily, we're about to talk about the easiest problems to solve—technology problems.

Decouple Architecture to Deploy Frequently

When an organization is spending too much time fixing and debugging problems, and doesn't have time to reduce its technical debt, applying DevOps might add more complexity.

Architecture is the next key to DevOps at scale. That's because big companies usually have a ton of interconnected systems where if you try to change something, you might break something else. Tightly coupled architectures need the coordination of many people to release software.

To support this section, I'll refer to what I initially heard from Jez Humble in a podcast about architecture in continuous delivery (CD). Testing and deployment are the main focuses of CD architecture. More specifically, it's important to ask these two questions:

  • Is it possible to do testing without requiring an integrated environment?
  • Is it possible to do releases independently of other systems or services?

Decoupled architectures give you the independence to test software without needing to install and configure other parts of the system. Instead, testing is done using mock objects because there's already a contract.

Deployments or releases can be done without having to update other applications. You're not breaking any agreed-upon contract, like response formats. It's also going to be possible to release frequently and in small batches—the two essential features of CD or DevOps.
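The mock-against-a-contract idea is easy to sketch. In the hypothetical example below, the contract is simply "the accounts service returns a dict with `id` and `balance` keys" (a shape invented for illustration); a mock honoring that contract lets the code under test run with no integrated environment at all:

```python
from unittest.mock import Mock

# Code under test. It depends only on the agreed-upon contract:
# get_account(account_id) returns a dict with "id" and "balance" keys.
def format_statement(accounts_client, account_id):
    account = accounts_client.get_account(account_id)
    return f"Account {account['id']}: balance {account['balance']}"

# No integrated environment needed: a mock stands in for the real
# accounts service, honoring the same contract.
client = Mock()
client.get_account.return_value = {"id": "123", "balance": 50}

assert format_statement(client, "123") == "Account 123: balance 50"
print("contract test passed")
```

Swapping the real service in later changes nothing in the test's logic, because both sides agreed on the contract up front.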

Solidify Engineering Practices for Releases

The next key to DevOps at scale is to solidify your engineering practices.

The first time I heard about how Google runs production systems was when I read the Site Reliability Engineering (SRE) book. I loved how Ben Treynor defined SRE as "what happens when a software engineer is tasked with what used to be called operations."

A software engineer gets bored easily. Well, maybe that's true of everyone in tech. But the field that changes constantly is the one for programmers. DevOps actually started as a concept of agile infrastructure, where (predictably) some of the agile principles were applied to infrastructure.

As I said before, technology problems are the easiest ones to solve. That's because there are so many tools and practices that were born with the purpose of automating work. In the tools department, we have Jenkins, VSTS, Puppet, Chef, Ansible, Salt, and many more. But more importantly, there are practices like infrastructure as code, production-like environments, canary releases, blue/green deployments, and the most controversial one: trunk-based development.

Everything I mentioned there helps you have continuous integration (CI), which in turn allows CD. Deployments can then be a normal day-to-day operation. At that point, there's no difference when releasing a new feature or emergency bug fixes.

All changes should pass through the same process, forcing you to decrease deployment time by changing things in small batches. This will make developers happy. You'll improve the mean time to recover (MTTR), and the failure rate for deployments will be lowered, making operations folks happy. The outcomes will please both customers and business owners.
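One of the practices named above, the canary release, reduces to a surprisingly small routing decision: send a small slice of traffic to the new version and watch it before rolling out further. A minimal sketch (the version labels and the 5% threshold are made up for illustration):

```python
import random

# Fraction of traffic routed to the canary build (illustrative value).
CANARY_FRACTION = 0.05

def pick_version(rand=random.random):
    """Route one request: a small fraction goes to the canary."""
    return "v2-canary" if rand() < CANARY_FRACTION else "v1-stable"

# Deterministic spot checks: a draw below the threshold hits the canary,
# anything above it stays on the stable version.
assert pick_version(lambda: 0.01) == "v2-canary"
assert pick_version(lambda: 0.50) == "v1-stable"
print("canary routing ok")
```

In a real pipeline this decision usually lives in a load balancer or service mesh rather than application code, but the logic being configured is the same.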

The folks heading up DevOps initiatives in big organizations need to have proper tools and practices in place.

Developers Should Be on Call

The next key to DevOps at scale is that developers have access to the code and are responsible for its performance at all times. It's their baby, and they need to keep watch over it.

AWS CTO Werner Vogels summed up the idea behind developers being on call when he said, "you build it, you run it." Operations folks usually end up calling developers when there are unknown problems they can't solve. Who better to fix a bug than those who introduced it in the first place?

That worked for AWS and Netflix, where developers can push their changes by themselves. But this would be ludicrous at some organizations, especially the big ones. Some companies are under regulations specifying that only certain people have access to production. In this case, having a developer on call is useless because they can't do anything to fix it.

So sometimes this becomes more of a mindset than anything else.

Developers could have access to a centralized logging tool or other metrics with read-only access. They can have visibility into what's happening. Then developers will tell operations how to fix it. And because they were interrupted in the middle of the night, they'll create something to fix it automatically. Developers will make changes in code to avoid that happening again.

Too many times, I've seen operations folks get tired of reporting the same issues over and over again without any response from developers. Having developers on call makes them aware of issues.

This key is related to the first one, about creating culture in the organization. When people are brought together, even if it's forced in cases like this, it makes the team more accountable for their actions. It also fosters continuous improvement, which is essential in every DevOps initiative.

Shift Change Management to the Left

DevOps is about shifting things to the left. Change management is one of those things that can be shifted—and doing so is another key to DevOps at scale.

Change management is very common in large organizations. They usually have a change advisory board (CAB) that evaluates and approves each change going to production. Most of the time, big organizations think DevOps is not for them because of change management and compliance. They can't automate that process because they're regulated.

Jez Humble has a really good talk about the topic. In it, he said he's been involved in some projects for the government, applying continuous delivery principles to highly regulated agencies.

Cloud.gov was born out of this work: it's a platform that helps government agencies host projects in the cloud while applying continuous delivery principles. Cloud.gov assures that all regulatory controls are in place, making the case for including automation for compliance and change management purposes.

Auditors actually love this because there's always a trail of every change in the application code that goes live. They can easily see for themselves when changes were approved, who approved them, when they were released, and who released them. This is more effective than sharing screenshots.

But it's not just about automation; it's about including those verifications and sign-offs early in the process, when we can pay attention and fix things cheaply. Pair programming or peer-based review is better than having managers review the changes in a CAB meeting, because the CAB only evaluates the risk, not what the developers actually changed. What's better than having a peer change the code with you?

What Are the Keys? People, Process, and Technology

You don't need containers, orchestrators, or microservices to do DevOps. Even if you adopt those new, hot technologies but don't apply everything we just talked about, you've simply wasted time and made things more complex. You can apply DevOps with the people and technology you already have. Process and culture will definitely need to change, and that's usually the hardest part. The technology part is usually the easiest.

It's also important that you increase the quality and the amount of feedback. Waste and inefficiencies will always be caught when they're monitored constantly. DevOps is about integrating people, process, and technology. And at scale, things might seem more complicated. You need to find your own way, and your journey might not be the same as others'. But DevOps also works for big organizations—I promise.

Author: Christian Melendez

Related Reading: Pitfalls with DevOps at scale.

Enterprise Release Management Success

5 Barriers to Successfully Implementing Enterprise Release Management

The sister of IT & Test Environments Management is Release Management, and when it comes to delivering that capability at scale (that is, at an organizational level), we are talking about “Enterprise Release Management” (ERM). ERM is the overarching governance framework that brings our different stakeholders, projects and teams together so we can properly scope, approve, implement and ultimately deploy our changes effectively and in unison.

One could say, ERM bridges the gap between corporate strategy and DevOps.

However, implementing an ERM framework isn't a given (many companies still don't do it), nor is it necessarily trivial. In fact, there are many barriers to successfully implementing ERM and, ultimately, to ensuring effective and scalable delivery.

With that in mind, let's look at a few of the top barriers to ERM success.

  1. End-to-End Process Verification

One of the first steps to successfully implement ERM is to ensure that each stage in the end-to-end pipeline is complete and valid. It's critical that you don't overlook some very important processes in your pipeline like compliance or security assertions. Verifying these types of processes in the software development lifecycle (SDLC) is a barrier in ERM because it is complicated and requires a large number of revisions.

When a project starts, the workflow is simple. But as it grows, things get more and more complicated. You need more people, more dependencies, more checks, more systems, and more software changes. Then you need a standard process. Unified practices and processes help organizations ensure that every stage is completed properly.

Integrating multiple projects and different departments is challenging. Every department has different goals and objectives, so it's not surprising that some departments have conflicts with each other. C-level executives put pressure on managers, who then put pressure on developers, to deliver on time. At the same time, developers pressure IT operators to release software changes faster, but IT ops are responsible for the system's stability and have to consider how every new change might put that stability at risk. This chain creates conflict constantly, sometimes forcing personnel involved in SDLC to break the rules and bypass some processes.

As you grow, it will become more difficult to ensure that every stage of the SDLC is completed. Help your teams understand why every piece of the ERM puzzle is important and be clear why some processes can't be left incomplete. A good set of checkmarks (or milestones and gates) will definitely make the verification process less painful. But for that, you need automation, which brings us to the next barrier.

  2. Manual Procedures

The easiest way to start managing portfolio releases is with manual procedures. But having a spreadsheet where you control the priorities and release dates of projects doesn't scale well when you need to integrate more people and projects. Automate every task, process, and documentation you can. Don't think about it—just do it. Breaking the barrier of manually managing releases will let you focus on how to speed up software delivery.

Not everything can or should be automated—you still need someone to decide which releases are good to go and when. But getting rid of even some manual procedures is going to change the way you do things in your organization. The process will start by centralizing requests (or registration) for changes, which can come from several sources like marketing, management, and customers. Then you can start recording how these requests move through SDLC.

Leaving trails in the end-to-end process helps create status and compliance reports. You can easily leave trails when you integrate automatically—not only teams and projects, but also build, configuration, and deployment tools like Jenkins, Ansible, or Spinnaker. When you integrate tools like these into your ERM, you'll have information on how much time it takes for the company to deliver software change and complete critical tasks and you'll be able to prove to auditors that you're taking care of everything the company needs in order to comply with regulations.

Manually managing enterprise releases will give you headaches. People will end up spending too much time on things that don't necessarily add value to the customer. Break this barrier with automation because life is too short to keep doing boring things manually.
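Leaving that trail can start very simply: record every change request and timestamp each stage transition. The sketch below shows the idea; the field names, stages, and actors are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeRequest:
    """A change request that builds its own audit trail as it moves."""
    source: str        # where the request came from, e.g. "marketing"
    description: str
    trail: list = field(default_factory=list)

    def move_to(self, stage, actor):
        # Every transition records who moved it, to where, and when:
        # exactly the trail auditors and status reports need.
        self.trail.append({
            "stage": stage,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })

cr = ChangeRequest("marketing", "new landing page banner")
cr.move_to("approved", "release-manager")
cr.move_to("deployed", "jenkins")
print([event["stage"] for event in cr.trail])
```

Once tools like Jenkins or Spinnaker write these transitions automatically, the compliance report becomes a query instead of a scramble for screenshots.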

  3. Visibility for the Entire Workflow

It's not easy to get the big picture of the release pipeline. There are too many required processes in the enterprise's journey to deliver software changes. And leaving a trail in each stage is the key to increasing visibility. A lack of visibility is a significant barrier to improving the software release process.

Knowing where you spend the majority of your time in the release pipeline lets you focus your work efficiently. Maybe you'll find out that you're spending too much time manually configuring applications. In that case, you might need to centralize and automate configuration management.

Or maybe you're spending too much time releasing software after it has been approved in a certain environment, like dev. That could be a sign that you're doing deployments differently in each environment. You might need to focus on preparing a good test environment framework.

But having more visibility isn't just about improving the process. If management wants updated information about how the project is going, they don't need to interrupt the team. Is the project going to be ready for the agreed release date? In which stage is it currently? Did the software pass all tests? Which team has the hot potato now? All sorts of information tell the story and status of a project, and it's all critical to the success of ERM. And you need to have that information up to date and in real time. Having enough visibility helps the team to communicate and coordinate better.
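Finding out where the time actually goes is a small computation once stage transitions carry timestamps. In this hedged sketch, the stages and dates are invented sample data; the time spent in each stage is just the gap until the next transition:

```python
from datetime import datetime

# Timestamped stage transitions for one release (illustrative data).
events = [
    ("dev",      datetime(2018, 6, 1, 9, 0)),
    ("qa",       datetime(2018, 6, 3, 9, 0)),
    ("approved", datetime(2018, 6, 3, 15, 0)),
    ("prod",     datetime(2018, 6, 8, 15, 0)),
]

# Hours spent in each stage = gap until the next transition.
durations = {
    stage: (events[i + 1][1] - when).total_seconds() / 3600
    for i, (stage, when) in enumerate(events[:-1])
}

slowest = max(durations, key=durations.get)
print(slowest, durations[slowest])  # the stage to focus on first
```

Here the release sat five days between approval and production, which is precisely the kind of finding that tells you where to aim your improvement effort.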

  4. Long Delays Between Releases

One of the main purposes of the ERM framework is to spend less time on releases. At least, that should be the expectation. The amount of time you spend on releases could become a barrier to succeeding in ERM.

ERM is a good framework to plan the date of a release, and that's good to know. But knowing that a release is going to happen in two months doesn't add a ton of value to customers or to management. By that time, it might be too late to release software changes, especially if you need to fix anything after the fact. When you have enough visibility, it becomes obvious where you can reduce the time a release takes.

The amount of time between releases can also increase if a fix for one bug breaks something else at the same time. A high rate of deployment failures isn't something you should expect once you've implemented ERM.

Reasons may vary, but here's a list of things that I've seen affecting time between releases:

  • Needing to coordinate between too many teams, usually in a sequential manner, not in parallel.
  • Waiting to have enough changes to release because it's "less risky."
  • Lacking production-like environments, which help maintain consistency.
  • Building and packaging the artifacts more than once, instead of building them once and promoting the same artifact.
  • Dealing with messy and complex configuration management.

ERM should aim to break the pattern of long delays between releases by working to improve software architectures. There are practices like infrastructure as code, configuration management, self-provisioned environments, and others that ERM can orchestrate.

  5. Same Release Cadence for Everyone

Not only does waiting for someone else to finish their part of the job to release a software change cause a lot of problems, but it could also become a barrier to ERM's success. You could end up accumulating too many changes, which, as we just discussed, increases the risk of failure and the time between releases. What should the team that's waiting do? Keep waiting? I don't think so—they usually start working on something else, which means they could be interrupted constantly to help with the pending release.

Let me give you an example of a common scenario. The developers have just finished their code changes to use a new column from a table in the database. Now they need to wait for the DBAs to add the new column, and after that, the code is ready for the release. Seems like a decent system, right? Wrong. That's just a sign that the architecture is highly dependent on the database.

Developers should be able to release software without having to worry about whether every dependency—like the database—has everything the application needs. What if database changes take a while to be ready for the release? No problem; the application code is using feature flags, and it's just a matter of releasing the code with the feature turned off.

What if the database changes have problems after the release? No problem; the application is resilient enough to expect problems in the database. Now that's a way of working that doesn't need complex solutions. Code apps so that they're ready for a release at all times.
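The feature-flag pattern behind that example is small enough to sketch directly. The flag name, data shape, and "new column" here are hypothetical; the point is that the new code path ships dark and flips on without a redeploy once the database dependency is ready:

```python
# A minimal feature-flag check: code that needs the new database column
# ships with the flag off, and turns on once the dependency is ready.
FLAGS = {"use-new-column": False}  # deployed dark

def account_summary(account):
    if FLAGS.get("use-new-column"):
        # New path: relies on the column the DBAs are still adding.
        return f"{account['name']} ({account['tier']})"
    # Old path: works against today's schema.
    return account["name"]

acct = {"name": "checking-001", "tier": "gold"}
print(account_summary(acct))   # flag off: old behavior

FLAGS["use-new-column"] = True
print(account_summary(acct))   # flag on: new behavior, no redeploy
```

In production the flag store would be a config service or database rather than an in-process dict, but the release decoupling works the same way.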

If you miss the train on a release, you need to wait for the next one to come, and that should be OK. It's better to wait to release something until you are sure that it won't have a negative impact on the company's revenue. But as soon as it's ready, ship it.

Conclusion: Agility Helps Break Barriers

As you can see, not all the barriers I described are related to a specific stage of deployment. Many teams play significant roles in the success of ERM, from the moment a project is planned in management offices to the moment it's delivered to end users.

Automation; traceability; version control; decoupling architectures; releasing small, frequent changes; and other DevOps and Agile practices are going to help you successfully implement ERM. But having the proper tools in place is also key. Because if you only have a hammer, everything looks like a nail.

Barriers to successful enterprise release management will always exist. But as long as you continue to find opportunities to improve, success is practically guaranteed.

Contributor: Christian Meléndez. Christian is a technologist who started as a software developer and has more recently become a cloud architect focused on implementing continuous delivery pipelines with applications in several flavors, including .NET, Node.js, and Java, often using Docker containers.