Top 20 DevOps Tools 2018. IT and Test Environment Management Winners.

Rise of the TEM Tools (DevOps Winners 2018)

After years in obscurity, it appears that IT & Test Environment Management tools are finally starting to get noticed.

This year, several IT Environment Management products were nominated by CIOReview among its most promising DevOps tools, most likely in recognition of the importance of managing your IT environments and operations to achieve "DevOps at Scale" and, ultimately, to better manage your cloud landscape.

Congratulations to the winner, Enov8, and to the rest of the CIOReview Top 20.

--- News Source ---


SYDNEY, NSW, AUSTRALIA, July 17, 2018 /EINPresswire.com/ -- Enov8, the Enterprise IT Intelligence company from Sydney, Australia, has been named winner of this year's (2018) CIOReview "20 Most Promising DevOps Solution Providers".

A somewhat unique winner, Enov8 goes "against the grain" of providing a "simple" DevOps point solution. Instead, it provides a broader "business intelligence" platform that helps organizations run Enterprise DevOps more effectively by embracing methods that help them better understand and manage their complex IT landscape and delivery-chain operations.

In response to the news, Enov8's Executive Director, Niall Crawford, said: "CIOReview selected a great list* of companies this year. To be selected as the overall winner of this prestigious award was fantastic, and I believe it was partly in recognition that the industry is evolving and organizations now need to address the 'DevOps at Scale' problem and, of course, an endorsement of Enov8's hard work and innovative approach in this space."

CIOReview Top 20 List*: https://devops.cioreview.com/vendors/most-promising-devops-solution-providers-2018.html

Note: This award follows some of Enov8's recent achievements, which include several big wins with global enterprises (banking, telecom and retail) and significant partnership announcements with some of the world's largest technology service providers.

About Enov8
Enov8 is a leading “Enterprise IT Intelligence” and “Enterprise DevOps” innovator, with solutions that support IT & Test Environment Management, Release Management and Data Management. Enov8’s philosophy is to help organizations be “Agile at Scale” through enhanced transparency and control of their IT landscape and operations.

About CIOReview
CIOReview is a leading technology magazine that has been at the forefront of guiding organizations through a continuously disruptive landscape and providing enterprise solutions that help redefine the business goals of tomorrow's enterprises. Recognized as a principal and reliable source of information, CIOReview offers a ground-breaking platform that allows decision makers to share their insights, which in turn provides entrepreneurs with analysis of information technology trends and the broader environment.

Reference: EINnews.

 

Service Virtualization for Test Environments

Test Environment Best Practice – Service Virtualization


Service virtualization is an evolution of methods like "mocking" and "stubbing." These techniques allow you to replace dependencies in test environments, enabling teams to keep testing their software without the constraints created by services out of their control.

It's fairly common to rely on external resources that are expensive to use and/or not available to every developer. Some teams try to work around these limitations by writing their own mock code. Simulating a few interactions might get them results in the short term, but it’s unproductive to keep this effort isolated.
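To make the idea concrete, here is a minimal sketch of such a hand-rolled stub, using only the Python standard library, that stands in for an external HTTP dependency. The endpoint path and payload are hypothetical stand-ins for whatever third-party service your code actually calls:

# A hand-rolled stub for an external HTTP dependency (illustrative only).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_QUOTE = {"currency": "USD", "rate": 1.0825, "source": "stub"}

class StubQuoteService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/quotes/latest":
            body = json.dumps(CANNED_QUOTE).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Tests point their HTTP client at http://localhost:8080 instead of the real service.
    HTTPServer(("localhost", 8080), StubQuoteService).serve_forever()

A stub like this works for one team, but the value multiplies when the simulated behavior is shared rather than rewritten by every team.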

An alternative is service virtualization, which allows teams across your organization to share the same simulated dependencies effectively. In this post, I'll talk about the benefits of service virtualization and highlight why a clear process, consistent governance, and operational standards are vital to implementing a platform like this.

What Are the Benefits of Service Virtualization?

When testing in the early phases of your development cycle, developers often don't have access to all the necessary systems that will make software testing fully representative. Reasons may vary, but common examples include systems on legacy platforms, prohibitive licensing costs, and systems that generate financial transactions or cost money each time they’re queried.

This kind of constraint creates bottlenecks, forcing your teams to wait for access to resources. Service virtualization enables them to replace those dependencies with components and data that reliably resemble production.

When Should We Use Service Virtualization?

Teams that have restricted access to service dependencies tend to produce code with lower test coverage. In many cases, these teams delay most of their testing until later development phases. But fixing bugs in the later stages of the development process is more expensive, not to mention frustrating for developers.

By using service virtualization as early as possible in your development process, you enable your teams to catch and fix bugs that would otherwise show up only in a production environment. This is more efficient, and it gets you closer to achieving true DevOps.

How Do We Implement It?

Once you’ve picked a certain framework or tool for service virtualization, select a service with a high rate of interaction problems. Figure out your best candidate for "problematic service dependency." Record enough data to replicate this component, set up a virtualized service, and configure your code to interact with it.
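As a minimal sketch of that last step, the code under test can read the dependency's base URL from configuration, so the same client code talks to the real service in production and to the virtualized one in test environments. The QUOTE_SERVICE_URL variable and endpoint below are hypothetical:

# Point the code under test at the virtualized service via configuration.
import json
import os
import urllib.request

# Defaults to the real endpoint; test environments override it to point at the virtual service.
BASE_URL = os.environ.get("QUOTE_SERVICE_URL", "https://quotes.example.com")

def fetch_latest_quote():
    with urllib.request.urlopen(f"{BASE_URL}/quotes/latest") as response:
        return json.loads(response.read())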

In order to standardize the use of this new resource, there should be a single source of truth where you store (and can later find) your virtualized service specifications. One common option is a version control repository like Git. Having this source of truth will help you increase adoption among other development teams. As soon as other people start reaping the benefits, they'll be interested in virtualizing other troublesome service dependencies.
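For example, the stub mappings could live as a small file in that shared repository, and every team's virtualized service would load the same definitions. The repository layout and mapping format below are illustrative, not a standard:

# Load stub mappings from a version-controlled specification file.
import json

def load_stub_mappings(path="virtual-services/quote-service/mappings.json"):
    # Each entry maps a request path to the canned response body to replay.
    with open(path) as spec_file:
        return json.load(spec_file)

MAPPINGS = load_stub_mappings()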

You should also create guidelines to help teams follow standards and conventions. If your organization is implementing service virtualization through an open source framework, you’ll have to enforce standardization yourself; your virtualized services should follow a common set of rules so they’re reusable across teams.

How Do We Keep a Virtualized Service Close to the Real Thing?

"Recording sessions" is a common method of service virtualization. It facilitates the process of capturing data that can be later used to simulate inputs to client services. The whole purpose is to reply to requests made by your tested code with representative samples of data.

There are other non-functional requirements you could validate, like simulating network instability or unusual response times. This is valuable for developers of mobile applications who can't assume their client application is going to enjoy a stable connection at all times.
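A virtualized service can also misbehave on purpose. The sketch below wraps a stub handler with random latency and occasional failures; the probabilities and delays are arbitrary example values:

# Simulate network instability and unusual response times around a stub handler.
import random
import time

def unstable(handler):
    def wrapper(*args, **kwargs):
        time.sleep(random.uniform(0.1, 3.0))   # unusual response times
        if random.random() < 0.05:             # roughly 5% of requests fail outright
            raise ConnectionError("simulated network drop")
        return handler(*args, **kwargs)
    return wrapper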

Last but not least, persuade people to have their virtualized services automated and continuously tested. Service virtualization will be effective only when it closely represents how data is actually served in production. In order to keep up with external changes, it's critical to have automatic and reproducible processes. If you start to slack on testing, your virtualized services will become obstacles.
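One lightweight way to keep that promise is a recurring automated check that compares the stored sample against a freshly recorded production response and fails when the payload shape has drifted. A sketch, reusing the hypothetical paths from earlier:

# Recurring drift check: the virtual service should still match the real contract.
import json
import urllib.request

def test_stub_matches_production_contract():
    with open("virtual-services/quote-service/samples/latest.json") as sample_file:
        recorded = json.load(sample_file)
    with urllib.request.urlopen("https://quotes.example.com/quotes/latest") as response:
        live = json.loads(response.read())
    # Values may differ between environments, but the shape of the payload should not.
    assert set(live.keys()) == set(recorded.keys())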

Whose Work Is All This?

I've seen software shops where every possible form of testing is sent to a quality assurance (QA) team. I'm not a supporter of this method.

Testers have unique skills, but their role is assisting developers when they write tests for their own code. I see service virtualization as a cross-cutting, enterprise-wide process. It needs involvement from all of these team members:

  • Developers. Because of their familiarity with your business codebase, developers can quickly reason how each service interaction will affect program behavior.
  • Operators. They can quickly map how interactions may turn into failure scenarios under real-world conditions. Even if your organization has transitioned to DevOps, operators usually control firewall restrictions and endpoint configurations that are necessary for implementing service virtualization.
  • Testers (QA). They are deeply involved with reproducing conditions that might occur when your end users operate your software. Testers also serve as an active audience in the integration process, which gives them a holistic understanding of your service interactions.
  • Test Environment Managers. They ensure these "virtual systems" are properly advertised, provisioned, configured, and shared/coordinated across their consumers (test teams and projects).

Wait, Isn't This More Work?

A group of early adopters may absorb additional work in the initial phases of service virtualization. But they'll learn and document processes that other teams can then include in their development cycles, which will improve efficiency overall. Make sure to raise awareness of your initial scope to avoid frustration caused by unmet expectations. Transparency and clear goals are very important.

How Do We Get Started?

Any decision to implement a new tool should be made within the boundaries of your budget; that's why it's vital to know what dependencies are worth simulating. Locating bottlenecks in your development cycle is the first step. Then, evaluate how much your cycle could improve by using service virtualization and select a service you can quickly implement as a prototype.

Consult your teams. Ask them for input on how to obfuscate data to avoid security issues, and be sure to involve the necessary stakeholders to provide the basic infrastructure. Emphasize the importance of putting all these pieces together through automatic and reproducible methods.

A Word of Advice: Remember to Measure

Track metrics for lead times, time to market, or any other relevant measurements from your software development cycle in order to find spots where you can quickly improve. This will help your team create valuable impact in the short term.

Project managers would rather assign long time frames that let projects finish ahead of schedule than tight deadlines that force due dates to slip. Pay attention to these patterns; you might detect hidden delays in processes like infrastructure deployment and testing reconfiguration. Treat these situations as opportunities to automate, standardize, and promote change.

Authored by: Carlos “Kami” Maldonado

Test Environment Management Best Practices: Using Containers

Containers have been gaining popularity since their inception in 2001, particularly in the last few years. According to the official Red Hat blog post on the history of containers, they were originally created in order to run several servers on a single physical machine. There are significant advantages to using containers. You may have either the systems under test or automated tests running in containers—or both! This post describes best practices for managing a test environment that runs containers.

Define Ownership

It's important to define ownership of the container environment. If your test environment management (TEM) team is separate and has its own budget, most of the ownership will fall to them. Most of the guidance given by Docker regarding ownership of clusters in production applies equally to TEM.

Employing Containers

Use Containers for Tasks

The test environment itself may use containers for running tests on demand.

You can use containers to:

  1. Run applications/services.
  2. Perform tasks.

Tasks you can perform include smoke testing, load testing, and other types of automated tests. Since task containers are throwaways, you benefit from being able to free resources immediately after the task is run.
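As an illustration, here is a sketch of a throwaway smoke-test task, assuming the Docker SDK for Python is available; the image name, command, and target URL are hypothetical:

# Run a smoke-test task in a throwaway container; it is removed as soon as it finishes.
import docker

client = docker.from_env()
logs = client.containers.run(
    image="myorg/smoke-tests:1.4.2",               # hypothetical test-task image
    command="pytest --maxfail=1 tests/smoke",
    environment={"TARGET_URL": "http://app-under-test:8080"},
    remove=True,                                   # free resources immediately after the run
)
print(logs.decode("utf-8"))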

Use Clusters

Containers have scaled beyond the original intent of running multiple independent servers in relative isolation on a single machine. Clusters of host servers are common these days. You deploy containers to the cluster without having to directly manage servers for each application's requirements. Instead, you define the requirements for the container, and the cluster management system runs the instance appropriately. Some noteworthy cluster management systems include Docker Swarm and Kubernetes.

Cloud Hosting Services

Cloud services offer container hosting as a service, which is yet another level of abstraction from managing servers. These clusters are fully managed by the cloud provider. Results may vary based on your environment, but I've found that using cloud services to run tasks is beneficial, as it reduces the amount of scaling needed in a self-managed cluster. Also, hosting applications and services in self-managed clusters across your cloud VMs can lead to significant cost savings when running containers over longer periods of time.

Define Limits

Set container memory, CPU, and swap usage limits appropriately. If they are not set, the container instance is allowed to use unlimited resources on the host server. When memory runs low, the host server will kill processes to free it up. Even the container host process can be killed; if that happens, all containers running on the host will stop.
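For example, limits can be declared when the container is started. The sketch below uses the Docker SDK for Python; the image name and limit values are examples only, and the equivalent limits can also be set with docker run flags such as --memory and --cpus:

# Start a container with explicit memory, swap, and CPU limits.
import docker

client = docker.from_env()
container = client.containers.run(
    image="myorg/api-under-test:2.3.0",  # hypothetical application image
    detach=True,
    mem_limit="512m",                    # hard memory cap
    memswap_limit="1g",                  # memory plus swap cap
    nano_cpus=1_000_000_000,             # 1.0 CPU, expressed in units of 1e-9 CPUs
)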

Validate Configurations

Test for appropriate container runtime configurations. Use load testing in order to determine the appropriate limits for each application version and task definition. Setting the limit too low may cause application issues; setting it too high will waste resources; not setting it at all may result in catastrophic failure of the host.

Use a Source Control Management System

Container definitions are specified in a relatively straightforward text file. The host uses the text file to create and run the container instance. Often, container definitions will load and run scripts to perform more complex tasks rather than defining everything in the container definition (for Docker, that's the Dockerfile).

Use a source control management system such as Git to manage versions of container definitions. Using source control gives you a history of changes for reference and a built-in audit log. If a bug is discovered in production, the specific environment can be retrieved and rehydrated from source control. Because you can quickly recall any version of the environment, there is no need to keep a version active when it's not under test.

Create a New Container Per Version

It's best to create a new container for each version of each application. Containers are easy to run, stop, and decommission. Deploy a new container instance rather than updating in place. By deploying a new instance, you can ensure that the container has only the dependencies specific to that version of the application. This frees the running instances from bloat and conflicts.

Running instances have names for reference; use the application and version in the instance name, if possible.

If dependencies haven't changed, the container definition itself doesn't need to change. The container definition is for specifying the container itself (the OS and application dependencies). Specify versions of dependencies and base images rather than always using the latest ones. When dependencies change (including versions), create a new version of your container definition or script.

To clarify, here is a snippet from a Dockerfile for node 10.1.0 based on Linux Alpine 3.7:

FROM alpine:3.7

ENV NODE_VERSION 10.1.0

...
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \
    && curl -SLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
    && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
    && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
...

CMD [ "node" ]

If you were running node scripts, you might create your own Dockerfile starting with FROM node:10.1.0-alpine. This tells Docker to use that specific base image (node 10.1.0 running on Linux Alpine) from the public image repository, Docker Hub. You would then use the remainder of your Dockerfile to install your application-specific dependencies. This process is further described here.

Avoid Duplication

There should be a single source of truth for each container definition. All deployments in all environments should use that source of truth. Use environment variables to configure the containers per environment.
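In practice that usually means the application inside the container reads its settings from the environment, so one container definition serves every environment. A tiny sketch, with illustrative variable names:

# The containerized application reads per-environment settings at startup.
import os

DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/devdb")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")
FEATURE_FLAGS = [f for f in os.environ.get("FEATURE_FLAGS", "").split(",") if f]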

Design container definitions for reuse. If you find that only certain parts of a definition change, create a base file for the parts that stay stable and move the ever-changing parts into child files.

Monitor

Monitor your running container instances and the cluster environment. Monitoring allows you to flag events that could indicate defects in the system under test. These defects may go unnoticed without a way to measure the impact of the system on the environment.

When working with clusters, monitoring is essential for auto-scaling based on configurable thresholds. Similarly, you should set thresholds in your monitoring system to trigger alerts when a process is consuming more resources than expected. You can use these alerts to help identify defects.
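As a rough sketch of such a threshold check, the Docker SDK for Python can take a one-off stats snapshot per container and flag anything consuming more memory than expected. The field names follow the Docker stats API, and the threshold value is an arbitrary example:

# Flag containers whose memory usage crosses a threshold.
import docker

MEMORY_ALERT_THRESHOLD = 0.9  # alert at 90% of the configured limit

client = docker.from_env()
for container in client.containers.list():
    stats = container.stats(stream=False)          # one-off stats snapshot
    mem = stats.get("memory_stats", {})
    usage_ratio = mem.get("usage", 0) / max(mem.get("limit", 1), 1)
    if usage_ratio > MEMORY_ALERT_THRESHOLD:
        print(f"ALERT: {container.name} memory at {usage_ratio:.0%} of its limit")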

Log Events

Logging events from your test environment can mean the difference between resolving an issue and letting it pass as a "ghost in the machine." Parallel and asynchronous programming are essential for boosting performance and reducing load, but they can cause timing issues that lead to odd defects that are not easily reproducible. Detailed event logs can give significant clues that will help you recognize the source of an issue. This is only one of many cases where logging is important.

Logs realize their value when they are accessed and utilized. Some logs will never make it to the surface, and that's OK. Having the proper tools to analyze the logs will make the data more valuable. Good log analysis tools make short work of correlating events and pay for themselves in time.

Use Alerting/Issue Tracking Strategically

Set up alerts for significant events only. A continual flood of alerts is almost guaranteed to be less effective. Raise the alarms when there is a system failure or a blocker. Batching alerts of lower priority is more efficient, as it causes less disruption to the value stream. Only stop the line when an event disrupts the flow. Checkpoints like gates and retrospectives are in place for a reason. Use them, along with issue tracking systems, to communicate non-critical issues.

Summary

Containers are being used nearly everywhere. They're continuing to gain traction, especially as cloud hosting providers are expanding their container hosting capabilities. Understanding how to manage containers and their hosting environment is important for your test environment management capabilities. You should now have a better idea of what to expect when using containers and how you can most effectively manage your container environment.

Author: Phil Vuollet

Phil uses software to automate processes to improve efficiency and repeatability. He writes about topics relevant to technology and business, occasionally gives talks on the same topics, and is a family man who enjoys playing soccer and board games with his children.