VSM Need for Speed

Test Environments: How to Value Stream DevOps With TEM

For many organizations, DevOps is the best practice for efficient software delivery. However, this model doesn't come easily, as the organization needs to put certain things in place. For example, the firm needs the right tools to ensure its delivery pipeline and feedback loop work as expected. Many firms struggle when there's a problem in their delivery pipeline or feedback loop, and the result is lost time and an overall reduction in efficiency.

To avoid problems like these, firms need to ensure their DevOps model is efficient and adds value for customers. For these reasons, firms adopt the test environment management (TEM) model to check that their DevOps practice works as expected. Done without a plan, though, this can feel like a lot of extra work.

In this article, we will explore what test environment management is and how an organization can use it to measure and add value to a DevOps model. First, we'll define DevOps, the value stream, and test environment management. Then, we'll explain how and why you should value stream DevOps with TEM.


Defining Our Terms

To get us all on the same page, let's discuss DevOps, the value stream, and test environment management.

DevOps

A company's first priority should be satisfying its customers' needs. For software organizations, this involves shipping features to end users as quickly as possible. To do this, software engineers use the DevOps model. DevOps consists of rules, practices, and tools that let the software engineering team deliver products to end users faster than traditional methods would allow. In conventional methods, the people responsible for a project's operation and the people responsible for its development are on distinct teams. Not so with DevOps: development engineers and operations engineers work closely together throughout the application life cycle. This structure reduces handoffs, waiting time, and communication breakdowns to create a speedy development cycle.

The Value Stream

When developing or improving products, companies need to understand what their customers really want. A company might add new features to its product, but the new features won't help if they don't speak to users' needs. Some features, if shipped, might even reduce engagement with your product because they're unwanted or broken. It's discouraging to develop a feature tirelessly only to find out that users don't like it. So how do you know that your features will please your customers? This is where the value stream comes into play.

A value stream is the sequence of steps an organization takes to deliver software. Ideally, each step in the development cycle adds value to the product. By analyzing its value stream, an organization can learn which development stages provide the most return on investment and which could be improved. For example, if your value stream includes a long wait between building code and testing it, reducing that wait is an almost certain way to add value. Value streams help the firm measure, monitor, and incorporate what will bring value to customers at the end of the day.
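To make this concrete, here's a minimal sketch (in Python, with hypothetical stage names and timestamps) of how you might measure the wait between value stream stages; the long gap between "build done" and "test start" is the kind of waste value stream analysis surfaces:

```python
from datetime import datetime

# Hypothetical timestamps for one feature moving through the value stream.
stages = [
    ("commit",     datetime(2023, 5, 1, 9, 0)),
    ("build done", datetime(2023, 5, 1, 9, 20)),
    ("test start", datetime(2023, 5, 2, 14, 0)),   # a long wait before testing
    ("deployed",   datetime(2023, 5, 2, 16, 0)),
]

# Wait time between consecutive stages, in hours.
for (name_a, t_a), (name_b, t_b) in zip(stages, stages[1:]):
    hours = (t_b - t_a).total_seconds() / 3600
    print(f"{name_a} -> {name_b}: {hours:.1f} h")
```

Run against real pipeline data instead of hard-coded values, a report like this points straight at the stage transitions worth improving.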

Test Environment Management

Before shipping new features or products to users, it's good practice to test their functionality. Developers should know how responsive their application is from the perspective of a user. For example, you don't want part of your product to be broken, unresponsive, or inaccessible. Such defects deter customers from using your product and may lead to negative reviews, which deter customers even more. To test software's functionality before shipping it, engineers create a test environment. A test environment is like a simulator: it lets you imitate your application's operation and functionality. Basically, you see your product and interact with it as a user would. Test environments also have maturity levels: sets of protocols and practices you can follow when testing your application, depending on the state of your app.

TEM consists of procedures and guides that help developers create valid, stable test environments for their products. It lets you control the test environment itself through things like communication, test cases, automation, bug detection, and triage. For example, you may want to test the overall responsiveness of your product. To do this, you first test the functionality of smaller features. Next, you review product defects and implement measures for optimization.

Putting It All Together: Value Stream DevOps With TEM

Now that you know what DevOps, the value stream, and TEM are, it's time to learn how they can work together to help you innovate and delight your user base.

Focus on Time and Value

There are a lot of things to consider when shipping products to users, and they can be summed up as time and value. Imagine a firm ships a feature on time, but it's unresponsive. Time was met, but value wasn't; you end up with unsatisfied customers who aren't happy with the firm's choice of feature. In another case, the company doesn't ship features on time, and you get frustrated customers who don't understand why it's taking your team so long to release anything new. For software firms to really up their game, they have to ship features that add value to customers at the right time. Fortunately, combining DevOps, value streaming, and TEM helps prevent both failure modes. These three practices create automatic checks in your software development cycle that stop you from pursuing projects customers won't like, and guardrails that keep you on schedule to deliver products in a timely fashion. This might sound complicated, but it's easy to get started.

How to Value Stream DevOps With TEM

In this section, we'll explore ways to ship features that add value to users at the right time through a combination of DevOps, value streaming, and test environment management. These are ideas for you to start devising your own DevOps–value stream–TEM strategy.

Logging and Testing

Often, it's difficult to aggregate logs during the development stage of a product. Most developers don't find out that their logging tools don't aggregate logs properly until they're in the right test environment. For an application that depends heavily on logging and tracing, this can become a problem for users. Black-box testing alone also doesn't let developers see the product from the customer's perspective: there could be bugs in the application's UI that get overlooked. Some of these bugs cause unresponsiveness, which, as we discussed, can spell disaster for a product. All of this can be mitigated when developers incorporate the right test environment.
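As an illustration of the aggregation problem, here's a minimal Python sketch (the service names and log lines are made up) that merges per-service logs into one ordered timeline, which is what you'd want a logging tool to do for you in the test environment:

```python
import heapq
from datetime import datetime

# Hypothetical log streams from two services, each as (timestamp, message),
# already sorted by time within each service.
api_logs = [
    (datetime(2023, 5, 1, 9, 0, 1), "api: request received"),
    (datetime(2023, 5, 1, 9, 0, 4), "api: response sent"),
]
db_logs = [
    (datetime(2023, 5, 1, 9, 0, 2), "db: query started"),
    (datetime(2023, 5, 1, 9, 0, 3), "db: query finished"),
]

# Merge the sorted streams into a single timeline, as an aggregator would.
timeline = list(heapq.merge(api_logs, db_logs))
for ts, msg in timeline:
    print(ts.time(), msg)
```

If the logs from your real services can't be interleaved this cleanly (missing timestamps, inconsistent formats), that's exactly the kind of defect a proper test environment surfaces before users do.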

Eliminating Redundant Procedures

When there's no test environment management, numerous firms end up incorporating redundant and wasteful processes in the development stage. With TEM, developers can fish out and eliminate those procedures, saving the firm time and money and creating value for customers.

Visual Representation and Process Clarity

Visual representation and clarity are another way to value stream DevOps with TEM. Test environment management provides developers with a visual representation of each feature and how much value it adds to the product, thereby clarifying which elements are vital to a product's success and which could be improved.

Maturity Levels

Maturity levels tell engineers the next step to take when testing a product. Policies are written for each step and for every unit of the application tested. The engineer isn't testing the application by intuition or guesswork; rather, there's a carefully planned guide on how best to test the application. It's imperative to understand and apply the different maturity levels because they allow developers to measure the readiness of their test environment and define the process they'll use in it.

Feedback Loop

After shipping products to users with the DevOps model, there's a feedback loop. The feedback loop involves monitoring responses from users and incorporating that feedback as a feature in the next release. Feedback loops help developers determine what kind of feature and test environment they'll be working on and the type of test policies to write in the different maturity levels.

Integrate TEM and DevOps Seamlessly

DevOps remains one of the best models software engineers use to ship products to users. In this article, we explored how engineers can ship products that add value to users at the right time with test environment management and value stream mapping. These practices give rise to several strategies for improving both the time spent on features and the value they deliver: logging and testing, eliminating redundancies, visually representing the product, assessing feature maturity, and creating a feedback loop. Test environment management can become overwhelming if you don't use the right tools and procedures; for example, choosing the right test environment and eradicating redundant procedures are hard to do by hand. You can integrate DevOps into the right test environment easily with test environment management resources from Enov8. These resources include data sheets, visuals, case studies, and white papers to help integrate your DevOps model into the right test environment.

Author

This post was written by Ukpai Ugochi. Ukpai is a full stack JavaScript developer (MEVN), and she contributes to FOSS in her free time. She loves to share knowledge about her transition from marine engineering to software development to encourage people who love software development and don't know where to begin.

Measuring Your Test Environment Maturity

The goal of every company is to satisfy its users, and this certainly applies in the software industry. However, as the number of users increases, so do their demands. Increased demands make software more complex, since meeting them may require adding new features. And of course, software firms try hard to control defects in their products whenever they add a new feature.

Nevertheless, the industry is still far from zero defects. To avoid defects in products shipped to users, firms in the software industry must pinpoint defects in their test environment before shipping products to users.

What's a test environment, and how are developers making sure that they can find and cure defects in that environment? We'll discuss both topics in this article.

What Is a Test Environment?

A test environment is like a simulator: it provides a realistic representation of your production setup and includes a server that allows developers to run tests on their software.

A test environment also allows developers to include hardware and network configuration. The purpose of this is to let the test engineer mimic the production environment so that they can find defects. Also, test engineers can write custom tests and execute them in the test environment. This lets test engineers ensure that the software is responding as it ought to.

 

Let's look at how test engineers make sure their test environment mimics the production environment. When that happens, the team can remove issues and defects from software before shipping it to users.

What Is Test Environment Maturity?

Test environment maturity is a set of leveled guides that help test engineers determine how well-developed and rigorous their testing system is. Test engineers need to understand how the products they're about to test actually function. The engineers should also be able to define the process they'll use in test environments and manage those environments. And there are different levels of test environment maturity.

 

To understand test environment maturity better, let's look at the Test Maturity Model (TMM). We'll examine the different levels and find out how test engineers can measure environment maturity.

Test Maturity Model (TMM)

The Illinois Institute of Technology developed the TMM framework so that test engineers can manage their test processes properly. The framework works well with the Capability Maturity Model (CMM), the industry standard for software process development.

 

The TMM framework defines five maturity levels so that test engineers can manage their testing processes properly. These maturity levels help test engineers identify the next improvement state in their test environment.

 

Test engineers can't measure their test environment maturity if they don't know the level of maturity of their test environment. This is exactly what the TMM maturity level does. It displays levels of maturity and the steps required to attain each level.

Maturity Levels

Each maturity level consists of steps that are essential to attain test environment maturity. Let's look at the different TMM maturity levels and consider how test engineers can measure their test environment maturity.

1. Initial Level

In the first level of the TMM framework, the test engineer's goal is simply to make sure that the software developers have produced a working product. Although TMM doesn't identify any process areas for this level, the software should run without breaking. So Level 1 has a low bar!

2. Definition Level

Definition is the second maturity level in the TMM framework. In addition to ensuring that the software is running successfully in the test environment, the test engineer needs to define test policies. This is because at this maturity level, basic testing methods ought to be in place. You're trying to answer the question, "Does the software do what it's supposed to?"

 

The process areas that this level identifies are:

  • Test policies and goals: This is to make sure that test engineers specify goals and policies they need to achieve.
  • Test methods, techniques, and environment that test engineers are using: It's essential to spell these out.

3. Integration Level

This level involves integrating the testing methods, techniques, policies, and environment defined at the definition level. Integration is necessary so test engineers can determine software behavior. At this level, engineers test the software life cycle and integration. Completing this step ensures testing is organized and carried out in a professional manner.

4. Management and Measurement Level

This TMM maturity level ensures that test engineers carry out quality test processes. At this stage, developers can evaluate and review software for defects. For example, after the integration level, the test engineers need to make sure they pick out all of the defects. The process areas this level identifies are test measurement, evaluation, and reviews.

5. Optimization Level

This is the final level. At this stage, the aim is to ensure that test processes and environment are optimized. This maturity level is important because testing isn't effective unless defects are controlled. In this level, the team members figure out how to prevent defects. The process areas in this level are test improvement, optimization, and quality control.
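The five levels above can be written down as a simple lookup that tells a team its next improvement target. The following Python snippet is only an illustrative sketch of the TMM progression described in this article, not part of any official tooling:

```python
# The five TMM maturity levels and their process areas, as described above.
TMM_LEVELS = {
    1: ("Initial", []),
    2: ("Definition", ["test policies and goals", "test methods and environment"]),
    3: ("Integration", ["test life cycle", "integration"]),
    4: ("Management and Measurement", ["test measurement", "evaluation", "reviews"]),
    5: ("Optimization", ["test improvement", "optimization", "quality control"]),
}

def next_step(current_level: int) -> str:
    """Report what a team at `current_level` should work toward next."""
    if current_level >= 5:
        return "Already at the Optimization level; keep controlling defects."
    name, areas = TMM_LEVELS[current_level + 1]
    return f"Work toward level {current_level + 1} ({name}): " + ", ".join(areas)

print(next_step(2))
```

Writing the levels down like this also supports the "don't skip steps" advice below: each level's process areas are explicit, so there's no temptation to jump ahead.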

Best Practices in Measuring Test Environment Maturity

We've explored the different maturity levels for TMM and discussed how this model is the industry standard for software testing. In this section, we'll explore the best practices for measuring test environment maturity.

Hire a Test Engineer

A test engineer is in charge of carrying out tests on software to make sure it performs as expected. It's important to employ a test engineer to manage software testing. Why? Because a qualified test engineer is highly skilled in using the right test environment, techniques, and tools.

Understand the Test Maturity Model

When you employ a test engineer for your firm, make sure that they understand the test maturity model. This is because they can't measure what they don't understand! Fully understanding the test maturity model will enable the test engineer to determine which processes are covered in each level and precisely what level their test environment has gotten to.

Don't Skip Steps

It's a bad practice to skip or merge different levels of the maturity models. This will not only make software testing confusing, but it may also produce adverse test results. Therefore, direct test engineers to write down the maturity levels and proposed date of completion before beginning to test.

Automate Testing

When test engineers automate testing, it becomes easier and faster to measure test environment maturity. For example, the test environment management tool from Enov8 allows test engineers to automate tests and manage test environments without a hitch.

Measuring Test Environment Maturity Goes Better When You Understand Test Environment Management

Knowledge of TMM maturity levels isn't enough to measure test environment maturity properly. To do so, test engineers need to be familiar with test environment management (TEM) and how it applies to TMM. So, let's explore TEM.

 

Test environment management, according to Enov8, is the act of understanding IT environments across the life cycle and proactively controlling them to ensure they're used effectively, serviced, and deleted promptly. With proper test environment management, test engineers can easily analyze software capability and measure test environment maturity. For this reason, there are tools like the Test Environment Management Maturity index (TEMMi) to help firms understand test environment management.


What Is Data Virtualization

Data has undergone a huge shift from being a minor asset to being one of the most valuable assets a company holds. However, just holding data doesn’t bring many benefits to your organization. To reap the benefits of the data your company collects, you need data analysis to find the valuable insights in the data you hold.

Data lies at the core of many important business decisions. Many companies prefer a data-driven decision-making policy because it greatly reduces guessing and shifts the company toward a more accurate form of decision-making. This greatly benefits the company: you have more trust in the choices you make, and you can reduce the number of “incorrect” decisions.

For example, say a product company wants to know if users like the new feature they’ve released. They want to decide if they need to make further improvements to the feature or not. To make a more informed decision, the product company collects user satisfaction scores about the new feature. The company can then use the average user satisfaction score to make this decision. Data virtualization helps you to quickly aggregate data from this survey, as well as other important data that influences the decision, in a single, centralized view. This allows your business to make more informed decisions quicker.
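For instance, a quick back-of-the-envelope version of that calculation (the scores and source names here are invented for illustration) looks like this once the virtualization layer has pulled the responses together:

```python
# Hypothetical satisfaction scores (1-5) gathered from two feedback sources.
in_app_scores = [4, 5, 3, 4]
email_scores = [2, 4, 5]

# Aggregate the sources and compute the average the decision is based on.
all_scores = in_app_scores + email_scores
average = sum(all_scores) / len(all_scores)
print(f"Average satisfaction: {average:.2f} from {len(all_scores)} responses")
```

The arithmetic is trivial; the hard part in practice is getting both sources into one view quickly, which is exactly what data virtualization addresses.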

This article introduces you to the concept of data virtualization and how it can help your company to make better decisions. Before we start, what are the common problems companies experience with data?

Common Data Problems for Organizations

Here’s a list of data challenges companies commonly experience:

  • It’s hard to understand the data you’ve collected.
  • Different sources of data use different formats, which makes it harder to retrieve insights.
  • Your organization experiences data lag, which means that data isn’t directly available.
  • Your organization isn’t ready to handle and process data. This could be due to, for example, missing data infrastructure and tools.

If any of these problems sound familiar, data virtualization can help get your organization ready to handle and process data. So what is data virtualization?

What Is Data Virtualization?

Data virtualization is a form of data management that aggregates different data sources. For example, a data virtualization tool might pull data from multiple databases or applications. However, it’s important to understand that it doesn’t copy or move any of the data: the data can stay where it is, even across multiple silos.

Data virtualization creates a single, virtual layer that spans all of those different data sources. Your organization can access data much faster, since there’s no need to move or copy it; you access the data in real time. Virtualization also improves the agility of the system, so companies can run analytics faster and gain insights quicker. For many companies, retrieving insights faster is a great competitive advantage!

As mentioned, data virtualization doesn’t copy or move any data. It only stores particular meta information about the different locations of the data that you want to integrate into your data virtualization tool.
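A toy Python sketch can illustrate the idea: the "virtual layer" below stores only how to reach each source (its metadata) and reads rows on demand rather than copying them into a central store. The class and source names are hypothetical, not any vendor's API:

```python
# A toy "virtual layer": it records where each dataset lives and how to
# reach it, and fetches rows on demand instead of copying them centrally.
class VirtualLayer:
    def __init__(self):
        self.sources = {}            # name -> fetch function (metadata only)

    def register(self, name, fetch):
        self.sources[name] = fetch   # no data is copied at registration time

    def query(self, name):
        return self.sources[name]()  # rows are read from the source on demand

layer = VirtualLayer()
layer.register("crm", lambda: [{"user": "ada", "plan": "pro"}])
layer.register("billing", lambda: [{"user": "ada", "paid": True}])
print(layer.query("crm"))
```

A real tool adds query translation, joins across sources, and security on top, but the core principle is the same: the layer holds metadata, and the data stays put.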

What Is the Importance of Data Virtualization?

First of all, data virtualization acts as the pinnacle of data integration. It allows an organization to integrate many different data sources into a single data model. This means companies can manage all of their data from a single, centralized interface.

Moreover, data virtualization is a great tool for collecting, searching, and analyzing data from different sources. And because no copying is involved, it’s also a more secure way of managing your data: nothing has to be transferred.

In other words, data virtualization helps companies to become more agile and use their data faster, creating a competitive advantage as you receive analytics and insights more quickly.

What Are the Capabilities of Data Virtualization?

This section describes the capabilities of data virtualization and why they matter for your business.

  1. Agility
    A data virtualization tool allows you to represent data in different ways, format data, discover new relationships between data, or create advanced views that provide you with new insights. The options are endless. Agility is the most important capability of data virtualization as it decreases the time to a solution.
  2. High performance
    A data virtualization tool doesn’t copy or move any data, which contributes to its high performance: less data replication allows for faster access to your data.
  3. Caching
    Caching frequently used data helps you to further improve the performance of your data virtualization tool. Whenever you query for data or a specific data view, part of the data is already cached for you. This puts fewer constraints on your network and improves the availability of your data.
  4. Searchability
    A data virtualization tool allows you to create data views that provide you with actionable insights. Furthermore, data virtualization provides you with a single, centralized interface to search your data.

Next, let’s explore the benefits of data virtualization for your organization.

What Are the Benefits of Data Virtualization?

Here are 10 important benefits of employing a data virtualization tool for your organization.

  1. Helps with hiding the data complexity from the different underlying data sources, data formats, and data structures.
  2. Avoids replication of data to improve performance.
  3. Gives real-time data access and insights.
  4. Provides higher data security as no data is replicated or transferred.
  5. Reduces costs since no investments are needed in additional storage solutions.
  6. Allows for faster business decisions based on data insights.
  7. Reduces the need for development resources to integrate all different data sources.
  8. Allows for data governance to be applied efficiently. For example, data rules can be applied with a single operation to all different data sources.
  9. Improves data quality.
  10. Increases productivity as you can quickly integrate new data sources with your current data virtualization tool.

Now that we have a better understanding of the benefits of data virtualization, it’s time to get serious. The next section explains how you can implement data virtualization in your organization.

How to Get Started With Data Virtualization

Do you want to get started with data virtualization for your organization? The most important tip is to start small. Assign a dedicated team who spends time on integrating one or a couple of data sources. Start with data sources that are most valuable for your organization. This way, you’ll see the benefits of data virtualization quickly.

Next, when your team has completed some simple data integrations, it’s time to scale up your operations and use the tool for most of your data sources. You can think about more complex data models, integrate complex data sources, or use data sources with mixed data types.

Furthermore, you can start to experiment with caching to see where it can be applied effectively to gain the most performance benefits. Remember to apply caching to frequently used data or data models.

As a general rule of thumb, prioritize high-value data sources to reap the most benefits.

Conclusion

One final note: data virtualization isn’t the same as data visualization. The two terms are often used interchangeably, but they have very different meanings. Data virtualization isn’t focused on visualizing data. The main goal of data virtualization is to reduce the effort of integrating multiple data sources and providing your organization with a single, centralized interface to view and analyze data.

In the end, the real business value of data virtualization lies in agility and faster access to data insights. For many organizations in big data or predictive analytics, accessing insights faster than your competitors is a real competitive advantage, allowing you to make profitable decisions sooner.

If you want to learn more, the following YouTube video by DataAcademy further explains the concept of data virtualization in easy-to-understand terms.

Author

This post was written by Michiel Mulders. Michiel is a passionate blockchain developer who loves writing technical content. Besides that, he loves learning about marketing, UX psychology, and entrepreneurship. When he's not writing, he's probably enjoying a Belgian beer!

Comparing Configuration and Asset Management

When you’re running an IT organization, it’s not just the business that you have to take care of. One part of running a business is building, creating, and providing what your customers need. The other part is management. Out of all the things you have to manage, configurations and assets are two of the most important.

People often think of configuration management and asset management as the same thing, or confuse the two terms with each other, but they are different. So, in this post, I’ll explain what configuration management and asset management are and how they differ. Let’s start by understanding each term.

What Is Configuration Management?

Configuration management is the management of configuration items. So, what are configuration items?

Configuration Items

Any organization provides certain services. These services might be the ones being provided to customers or to internal users. Either way, creating and providing these services requires some components. So, any component that needs to be managed to deliver services is called a “configuration item.”

Too confusing? No worries—I’ll explain with an example. Consider that you’re providing a service that tracks an organization’s user data. In this case, you can consider the software to be the component that needs to be managed. It’s important that you manage this software to make sure your service works fine. This means that your software is a configuration item. Another way of defining a configuration item is that it’s a component that’s subject to change to make the service delivery better.

What Information Is to Be Managed?

When you manage the attributes of such configuration items, that’s configuration management. So, what kind of information do you have to manage? You have to manage attributes such as ownership, versioning, licensing, and types. Let’s consider an example in which you’re using software for internal tasks.

Now you’ve identified that the software that provides service is your configuration item. The next step is to manage information related to that software. The software developer will have released different versions of the software with updates and new features. You obviously look out for better versions of the software or the version that best suits your requirements. One piece of information that you have to manage is the details of the software versions.

Another example is when you’re using licensed software. The software will be licensed to a particular person or company, and the license will be valid for a certain period of time. Such information becomes the attribute you have to manage. Now that you know what configuration management is, let me tell you a little about how it’s done.

Configuration Management Database

An easy way to manage information on configuration items is by using a configuration management database (CMDB). A configuration management database is just like any other database that stores data, but it specifically stores information related to configuration items.
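As a minimal illustration (the configuration items and attribute values below are invented), a CMDB can be as simple as a table of configuration items keyed by the attributes discussed above:

```python
# A minimal CMDB: each configuration item carries the attributes the
# article mentions (ownership, version, license, type).
cmdb = [
    {"ci": "user-tracking-app", "type": "software", "version": "2.3.1",
     "owner": "platform team", "license_expires": "2024-01-31"},
    {"ci": "postgres-prod",     "type": "database", "version": "14.2",
     "owner": "data team",      "license_expires": None},
]

def find(ci_name):
    """Look up a configuration item by name."""
    return next(item for item in cmdb if item["ci"] == ci_name)

print(find("user-tracking-app")["version"])   # 2.3.1
```

A production CMDB adds relationships between items, change history, and audit trails, but the record structure is the essential part.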

Configuration Management System

Configuration management isn’t easy. You have to take care of lots of tasks, such as tracking the data and adding and modifying configuration items. To make configuration management easy, you can use a configuration management system (CMS), which is software that helps you manage your configuration items. A typical CMS provides functions for storing and managing CI data, auditing configuration, making changes to the configurations, and so on.

Now that you know what configuration management is, let’s talk about asset management.

Asset Management

In generic terms, anything that’s useful is an asset. If you own a house or a property, that’s an asset for you. So is your car or your phone. When it comes to an organization, anything that’s useful to the organization is an asset. Assets can be capital, office property, the servers locked in your highly secured server room, and so on. But IT assets aren’t limited to physical or material things. The knowledge stored in your employees’ brains is also a valuable asset to your organization.

So, basically, asset management is tracking and managing your organization’s assets throughout their life cycles. The main aim of asset management is to create processes and strategies that help in managing assets properly. The asset management process runs from the moment you acquire an asset until you dispose of it.

For example, let’s say you have an organization that builds and manages web applications. As part of this, you own some servers that you host the web applications on. You also have some databases where you store data for your clients. In this case, your asset management process starts from the time you bought the servers and the databases. You have to manage the buying, maintenance, and inventory costs. Along with that, you also have to take care of regular updates, audits, security implementations, and any changes that you make. This asset management continues either until the assets are damaged or until they stop being useful to your organization and are disposed of.

Asset management directly involves finance. You have to consider the inventory, governance, and regulatory compliance along with the financial aspects in asset management.

Why Do You Need Asset Management?

Asset management helps you understand your financial flow and plan your finances efficiently. You can easily track each asset throughout its life cycle, which helps you analyze incidents if something goes wrong. Managing assets well also improves their quality and performance, which helps your business.

The asset management process helps you stay compliant with various rules and regulations. This improves the quality of your business and also saves you money on audits and fines. Because asset management lets you track your assets, you can plan more efficient strategies for operations.

Configuration Management vs. Asset Management

Now that I’ve explained each of these terms, I hope you understand what they mean. At some point, you might have felt that they were the same. To eliminate any lingering confusion, let me highlight the differences between them.

Asset management is managing anything valuable to your organization. You can consider configuration management to be part of asset management. Configuration management mainly focuses on managing configuration items and their attributes. These attributes mainly affect the delivery of the service.

Asset management, in contrast, is more of a financial perspective. You track the asset to understand the financial flow and the need for that asset throughout its life cycle.

To understand the difference, let’s take an example of a hardware component that you’re using—let’s say, a database. When you’re using a database, the database itself becomes an asset. You have to manage the maintenance, track the asset, conduct audits, and so on. This is asset management. The same database will have software versions. Keeping track of the software version, updating it, and tracking which other components it works with becomes part of configuration management.
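The database example can be sketched as two records side by side: the asset record carries the financial and lifecycle view, while the configuration item carries the technical view. The field names here are assumptions for illustration, not from any particular tool:

```python
# Illustrative asset record (financial view) vs. configuration item
# (technical view) for the same database. All field names are hypothetical.

asset_record = {
    "asset_id": "DB-001",
    "purchase_date": "2021-03-01",
    "purchase_cost": 12000,
    "maintenance_cost_per_year": 1500,
    "last_audit": "2023-01-15",
    "lifecycle_state": "in-use",   # acquired -> in-use -> retired -> disposed
}

configuration_item = {
    "ci_id": "CI-DB-001",
    "asset_id": "DB-001",          # links back to the asset record
    "software_version": "PostgreSQL 15.2",
    "depends_on": ["CI-APP-003"],  # other components it works with
    "last_patched": "2023-02-10",
}

def total_cost_of_ownership(asset, years):
    """An asset management question: what has this asset cost us so far?"""
    return asset["purchase_cost"] + asset["maintenance_cost_per_year"] * years
```

Asset management asks questions like `total_cost_of_ownership`, while configuration management asks which version is deployed and what depends on it.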

Configuration management and asset management might sound the same at a high level, but they have different purposes and are implemented differently. Understanding such terms with the help of an example makes the differences much easier to grasp; hopefully, the explanations and examples here have helped you.

Author

This post was written by Omkar Hiremath. Omkar uses his BE in computer science to share theoretical and demo-based learning on various areas of technology, like ethical hacking, Python, blockchain, and Hadoop.

DevOps Toolchain

What Is a DevOps Toolchain and Why Have One?

DevOps is not a technology; it’s an approach. Though there’s flexibility in how to use it, there’s also the added responsibility of using it in the best possible way. The whole idea of DevOps is to make the software development process smoother and faster. And one of the most important decisions needed to achieve this is choosing the right toolchain.

So in this article, I’ll tell you what a DevOps toolchain is and why you should have one.

What Is a DevOps Toolchain?

The whole DevOps practice stands on two main pillars: continuous integration and continuous delivery. This means that the changes and upgrades to a product must be integrated at greater frequency, and they should be available to the users at greater speed. A DevOps toolchain is a set of tools that helps you achieve this. But why are multiple tools needed? Why not just use one? That’s because DevOps is a practice that has different stages. To help you understand this, I’ll take you through the different stages of a software development pipeline that’s based on a DevOps approach and review what tools you can use.

Planning

The first step of doing anything is planning, and that holds true for DevOps as well. Planning includes the personnel inside the organization as well as the clients. Both need to have a clear understanding of what they want to build and how they are going to do it. Therefore, transparency plays an important role. You can use tools like Slack, Trello, and Asana for the planning stage.

Collaboration

The beauty of DevOps is that it requires multiple teams to collaborate and work together for efficient software delivery. Once the planning is done, you need to focus on collaboration. Collaboration happens between people from different teams, who might have different working styles or live in different time zones. Easy collaboration requires transparency and good communication. Some of the tools available for collaboration include Slack, Flowdock, WebEx, and Skype.

Source Control

Source control, also known as version control, means managing your source code. In DevOps, where there are frequent updates to the source code, it’s important that you handle it carefully. This means you need a tool that can manage the source code and make different branches available as required, especially when multiple teams are working on a single product. Some of the most popular source control tools are Git and Subversion.

Tracking Issues

You should also be ready for issues to occur. And when it comes to handling issues, tracking them plays an important role. Issues should be tracked in a transparent way that provides all the details necessary to properly resolve them, and improved tracking results in faster resolution. You might want to consider tools like Jira, Zendesk, Backlog, and Bugzilla.

Continuous Integration

This stage, as mentioned earlier, is one of the most important parts of the DevOps practice. This is the stage where modular code updates are integrated into the product to make frequent releases. It’s commonly known to developers that the code doesn’t always work smoothly when it makes it to production. You need a tool that helps with easy integration, detecting bugs, and fixing them. Jenkins, Bamboo, Travis, and TeamCity are some of the most popular tools.

Configuration Management

When developing a product, you will have to use different systems. Configuration management tools help you maintain consistency across systems by configuring all of them automatically for you. They basically configure and update your systems as and when required. The configuration management tools you’ll hear about most often are Ansible, Puppet, and Chef.
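What these tools automate can be sketched as a loop that compares each system’s actual state to a desired state and applies only the changes that are missing. This is a toy illustration of the idea, not how any specific tool works internally:

```python
# Toy "desired state" convergence: each host is nudged toward the same
# declared configuration, which is what keeps a fleet consistent.

desired_state = {"nginx": "1.25", "python": "3.11"}

def converge(system_state, desired):
    """Apply missing/outdated entries and return what was changed."""
    changes = {}
    for package, version in desired.items():
        if system_state.get(package) != version:
            changes[package] = (system_state.get(package), version)
            system_state[package] = version  # the "install/upgrade" step
    return changes

fleet = {
    "web-1": {"nginx": "1.24", "python": "3.11"},  # outdated nginx
    "web-2": {"nginx": "1.25"},                    # missing python
}

for host, state in fleet.items():
    converge(state, desired_state)
```

After the loop, every host matches the declared state, and `converge` only touched what actually differed, which is the key property of these tools.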

Repository Management

DevOps teams work together to release updates as soon as possible, and when multiple teams are working on them, there will be an update every day or maybe even every hour. With this frequency, it’s important to have a tool that manages binary artifacts and metadata. The repository management tools help push the product or a part of the product from the development environment to the production environment. Some well-known tools for repository management are Nexus and Maven.

Monitoring

Monitoring helps you understand how good or bad the release was. When there are frequent updates to your product, you can’t expect every release to perform well. Sometimes certain releases break the product, create security issues, decrease the performance, or bring down the user experience. The best way to understand what your update has resulted in is by monitoring it. Monitoring tools help you decide whether your release needs aid or not. You can use tools like Sensu, Prometheus, or Nagios.

Automated Testing

You’ll certainly want to test your code before making it available to users. When continuous delivery is the goal, manual testing slows down the process. Automated testing makes the testing process faster because a tool does the testing, and it also greatly reduces the chance of human error. But you have to make sure that the automated testing tool you choose is efficient and reliable, because you can’t afford mistakes here. A few tools you can choose for automated testing are QTP and TestComplete.

Deployment

This is the stage that actually delivers your product and its updates to the end users, and there are a few things that may go wrong here. The main purpose of deployment tools is to make continuous and faster delivery possible. Some of the most popular tools used for deployment are IBM uDeploy and Atlassian Bamboo.

Now that you understand what a DevOps toolchain is and which are some of the most used tools in the industry, let’s understand why it’s important to have a DevOps toolchain.

Why You Should Have a DevOps Toolchain

A DevOps toolchain is needed to maximize the positive outcome of DevOps practice, and it’s achieved when you choose your toolset wisely. A wisely chosen DevOps toolchain will show how the DevOps approach helps you build high-quality products with fewer errors and enhanced user satisfaction.

The first advantage of using a DevOps toolchain is that it decreases the defects and increases the quality of your products. Because of features like automated testing and error-checking deployment tools, there is also less room for errors. This is good for your business and the reputation of your company.

The second advantage is that a DevOps toolchain helps you innovate your product faster. Because the toolchain results in faster planning, building, testing, and deploying, you have more opportunities to innovate. The more innovative your product is, the more business you get.

The final advantage is related to incident handling. The toolchain helps you identify and manage major incidents. Doing so facilitates finding solutions to the incidents faster and letting the respective team know about the incident. This helps improve the support and quality of the product.

In Conclusion

Now that you’ve read about what the DevOps toolchain is and why you need it, it’s time to choose which ones are right for you. Even though I’ve mentioned a number of tools for various purposes, the ones you pick will differ based on what best suits your use case. There’s no universal toolchain that works best for everyone. You’ll know what’s best for you only after you understand your requirements and then choose the tools accordingly.

Author

This post was written by Omkar Hiremath. Omkar uses his BE in computer science to share theoretical and demo-based learning on various areas of technology, like ethical hacking, Python, blockchain, and Hadoop.

Why Test Data Management Is Critical to Software Delivery

Imagine you are developing a system that will be used by millions of people. In a situation like this, a system has to be very well-tested for any type of error that can cause the system to break while in production. But what’s the best way to test a system for any possible system failure because of bugs? This is where test data management comes in.

In this post, I will explain why test data management is critical to software delivery. To develop high-quality software products, you have to continuously test the system as it’s being developed. Let’s dive straight into understanding how test data management solves this problem.

What Is Test Data Management?

Well, in simple terms, test data management is the creation of data sets that are similar to the actual data in the organization’s production environment. Software engineers and testers then utilize this data to test and validate the quality of systems under development.

Now, you might be wondering why you need to create new data. Why not just use the existing production data? Well, data is essential to your organization, so you should protect it at all costs. That means developers and testers shouldn’t have access to it. This isn’t a matter of trust but of security. Data should be guarded carefully, or else there can be a data breach. And as you know, data breaches can cause serious losses for an organization.

How Can You Create Test Data?

So, now that we know why we need test data that is separate from our production data, how can we create it?

The first thing you must do is understand the type of business you’re dealing with. More specifically, you need to know how your software product will work and the type of end users who will use it. That makes it easier to prepare test data. Keep in mind that test data has to be as realistic as the actual data in the production environment.

You can use automated tools to generate test data. Another way of creating test data is by copying or masking production data that your actual end users will use. Here you have to be creative as well and create different types of test data sets. You can’t rely only on the masked data from production data for testing.
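Both approaches can be sketched in a few lines: masking a copy of production data, and generating synthetic rows. The masking function and field names below are illustrative; real projects typically use dedicated test data management tooling for this:

```python
# Sketch of (1) masking a copy of production data and (2) generating
# synthetic test rows. Field names and formats are hypothetical.
import hashlib
import random

production_rows = [
    {"user": "alice@example.com", "balance": 120.50},
    {"user": "bob@example.com",   "balance": 99.99},
]

def mask_email(email):
    # Deterministic masking: the same input always maps to the same token,
    # so relationships between rows survive, but the real value does not.
    digest = hashlib.sha256(email.encode()).hexdigest()[:8]
    return f"user_{digest}@test.invalid"

masked = [{"user": mask_email(r["user"]), "balance": r["balance"]}
          for r in production_rows]

def generate_synthetic(n, seed=42):
    rng = random.Random(seed)  # seeded, so test runs are reproducible
    return [{"user": f"user_{i}@test.invalid",
             "balance": round(rng.uniform(0, 1000), 2)} for i in range(n)]

synthetic = generate_synthetic(5)
```

Deterministic masking preserves row relationships while removing real values, and seeding the generator keeps synthetic data reproducible across test runs.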

Benefits of Test Data Management in Software Delivery

Test data management has many benefits in software delivery. Here are some of them, applicable in any software development environment.

High Quality Software Delivery

When you apply test data management to software delivery, it gives software developers and testers the ability to test the systems and make solid validations of the software. This enhances the security of the system and can prevent failures in the production environment. Testing systems with test data gives assurance that the system will perform as expected in production, without defects or bugs.

Faster Production and Delivery of Software Products to the Market

Imagine that, after months of hard work developing a software application, you’ve just released it to the market, only for it to fail there. That’s not only a loss of resources, but it’s also a pain.

A system that’s well-tested using test data will have a shorter production time and excel at the production level. That’s because it’s much more likely to perform the way it was intended to. If the system fails to perform in production because it was not tested well, then the system has to be redone. This wastes time and resources for the organization.

Money Needs Speed

Test data management is critical when it comes to software delivery speed. Having data that’s of good quality and is similar to production data makes development easier and faster. System efficiency is cardinal for any organization, and test data management assures that a system will be efficient when released in production. Therefore, you start generating revenue as soon as you deploy the system.

Imagine having to redo a system after release because users discover some bugs. That can waste a lot of time and resources, and you may also lose the market for that product.

Testing With the Correct Test Data

Testing with good-quality test data helps make sure that the behavior you verify in the development phase will hold in the production phase. For example, you might verify that the system accepts supported data by entering usernames and passwords into the text boxes using every type of data a user could possibly input into the system.

No matter how many times you test the software, if the test data is not correct, you should expect the software to fail in the production phase. This is why it is always important to ensure that test data is of great quality and resembles your actual production data.
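The username example above can be sketched as a small table-driven check that exercises the kinds of data users might actually enter. The validation rules themselves are assumptions for illustration:

```python
# Table-driven check of a hypothetical username validation rule against
# the kinds of input real users produce, including edge cases.

def is_valid_username(value):
    if not isinstance(value, str):
        return False
    value = value.strip()
    return 3 <= len(value) <= 20 and value.isalnum()

test_inputs = {
    "alice":       True,    # normal case
    "ab":          False,   # too short
    "a" * 21:      False,   # too long
    "":            False,   # blank field
    "bob!@#":      False,   # unsupported characters
    "  carol  ":   True,    # surrounding whitespace is stripped
}

results = {value: is_valid_username(value) for value in test_inputs}
```

The point of the table is that each row is a distinct kind of user input; if your test data lacks the blank and special-character rows, those failures surface in production instead.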

Bug and Logical Fixes

How can you know whether the text box rejects invalid input, such as unsupported characters or blank fields? Well, you find out by validating the system through testing.

The whole point of having test data in software delivery is to make sure that the software performs as expected. Additionally, you need to make sure that the same tests will pass in production and have no loopholes that could damage the organization’s reputation. Therefore, test data is a critical part of the software delivery life cycle, as it helps you identify errors and logical problems in the system. Thanks to this, you can make fixes before releasing the software.

For example, imagine a lending system that makes incorrect calculations by increasing the interest rate by a certain percentage. That would be unfair to the borrowers and could backfire on the lending company.

Earning Trust

Trust is earned, and if you want to earn it from the end users or management, you have to deliver a software product that’s bug-free and works as expected. In fact, every software development and testing team should utilize test data management. Test data management enables teams to deliver software products that stand out and earn trust from management. After all, you can’t ship an error-prone system to the market and expect happy users.

Why Test Data Management Matters

Well, the whole essence of test data management is to make sure that you test the software in all scenarios, ensuring that the software will not fail in production.

By testing with data that’s as realistic as production data, you gain assurance that the software application will function as expected in a production environment. This strengthens the organization’s relationship with clients that will be using the system because it will have fewer bugs.

Another benefit is the speed of software delivery. Test data management speeds the time of production because testing takes place as the software is developed. That way, you detect errors at an early stage of the software development life cycle and fix them before the release.

This reduces the chances of fixing bugs in production and of rollbacks. The earlier you detect bugs, the easier and cheaper they are to fix. This also reduces the organization’s compliance and security risks.

Test data management also reduces costs, as it speeds up the process of the software development life cycle. Money needs speed, and the market is always changing. Without test data management, bugs might delay the release of your software product. As a result, you might end up releasing the software only when it is out of market demand.

Summary

In simple terms, test data is simply the data used to test a software application during the software testing life cycle. Test data management, on the other hand, is the process of administering the data needed throughout the software development test life cycle.

You can’t deny that test data management is an essential part of testing and developing software. It plays a crucial role in helping you produce high quality software that’s bug-free and works as expected.

You should take test data management seriously and apply it when delivering software. If you do so, your organization will gain more revenue because you’ll deliver higher quality software products. Higher quality products make the customers happy instead of giving them a reason to complain about some bug.

Author

This post was written by Mathews Musukuma. Mathews is a software engineer with experience in web and application development. Some of his skills include Python/Django, JavaScript, and Ionic Framework. Over time, Mathews has also developed interest in technical content writing.

What Are Test Data Gold Copies

What Are Test Data Gold Copies and Why You Need Them

You lean back in your chair with a satisfied grin. You did it. It wasn’t easy, but you did it. You diagnosed and fixed the bug that kept defying your team. And you have the unit tests to prove it.

The grin slowly fades from your face as you realize that you still need your code to pass the integration tests. And you need to get data to use in them. Not your favorite activity.

You can put that grin back on your face because there is another way: using a gold copy.

Read on to learn what a gold copy is and why you want to use one. You will also find out how it can help you work on an application with low test coverage. You know, the dreaded legacy systems.

What Is a Gold Copy?

In essence, a gold copy is a set of test data. Nothing more, nothing less. What sets it apart from other sets of test data is the way you use and guard it.

  • You only change a gold copy when you need to add or remove test cases.
  • You use a gold copy to set up the initial state of a test environment.
  • All automated and manual tests work on copies of the gold copy.
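Those three rules can be sketched in a few lines: the gold copy itself is only ever read, and each test run works on its own deep copy. The data here is illustrative:

```python
# The gold copy is guarded: tests never touch it directly, they work on
# deep copies, so the original state can always be restored.
import copy

GOLD_COPY = [
    {"id": 1, "name": "order-with-discount", "total": 90.0},
    {"id": 2, "name": "order-empty-cart",    "total": 0.0},
]

def new_test_environment():
    """Each test run starts from a pristine copy of the gold copy."""
    return copy.deepcopy(GOLD_COPY)

env = new_test_environment()
env[0]["total"] = 0.0  # a test mutates its own copy only
```

Because the mutation happened on the copy, the gold copy still holds its original values, and the next test run starts clean.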

A gold copy also functions as the gold standard for all your tests and for everybody testing your application. It contains the data for all the test cases that you need to cover all the features of your product. It may not start out as comprehensive, but that’s the goal.

Building a comprehensive gold copy isn’t easy or quick. But it’s definitely worth it, and it trumps using production data almost every time.

Why You Don’t Want to Test in Production

Continuous delivery advocates rave about testing in production. And yes, that has enormous benefits. However:

  • It requires the use of feature toggles to restrict access to new features and changed functionality.
  • Running the automated tests in your builds against a production environment is not going to make you any friends.
  • The sheer volume of production data usually is prohibitive for a timely feedback loop.
  • Giving developers access to production data can violate privacy and other data regulations.

There’s more:

  • Production data changes all the time, and its values are unpredictable, which makes it unsuitable as a base for automated testing.
  • Finding appropriate test data in production is a challenge. Testing requires edge cases, yet users, and thus their data, tend to be much more alike than they would like to admit.
  • To comply with privacy and other data regulations, extracts need to be anonymized and masked.

Contrived Test Data Isn’t Half as Bad as It Sounds

Contrived examples usually mean that you wouldn’t encounter the example in the real world. However, when it comes to testing, contrived is what you want. A contrived set of test data:

  • has only one purpose—verifying that your application works as intended and expected and that code changes do not cause regressions
  • contains a limited amount of data, enabling a faster feedback loop even for end-to-end tests
  • can be made to be self-identifying and self-descriptive to help understand what specific data is meant to test
  • contains edge cases that will trip you up in the real world but are generally absent from production data by their very definition
  • can be built into a comprehensive, optimized, targeted set of data that fully exercises your application

Of course, production data can be manipulated to achieve the same. But extracting it stresses production, and manipulating it takes time and effort. And you really don’t want to be doing that again and again and again.

That’s why you combine contrived data and gold copies. You start your gold copy with an extract from production data that is of course anonymized and otherwise made to conform to privacy and data regulations. Over time, you manipulate it into that optimized, targeted set of data. But using that initial set of test data as a gold copy will bring you benefits immediately.

Benefits of Gold Copies

In addition to the benefits of contrived data, using a gold copy gets you these benefits:

  • You can easily set up a test environment with a comprehensive set of test data.
  • You can easily revert the data in a test environment to its original state.
  • You can automate spinning up test environments.
  • You can run automated regression tests against legacy systems.

Everyone working on your application will appreciate it. They no longer have to hunt for good data to use in their test cases. And they no longer have to create test data themselves. A good thing, because creating test data and tests that produce false positives (i.e., tests that succeed when they should fail) is incredibly easy. You only have to use the same values a tad too often.

The ability to automate spinning up a test environment is what makes a gold copy so invaluable for large development shops and shops that need to support many different platforms. Just imagine how much time and effort you can save by automatically providing teams and individuals with comprehensive, standard test data, for example by using containers and a test data management tool like Enov8’s.

Finally, gold copies can help reduce the headaches and anxiety of working with legacy code. Here’s how.

Slaying the Dreaded Legacy Monster

Any system that does not have enough automated unit and integration tests guarding it against regressions is a legacy system. They are hard to change without worrying.

The lack of tests, especially the lack of unit tests, allowed coding practices that now make it hard to bring a legacy system under test, because bringing it under test requires refactoring the code. And you can’t refactor with any confidence if you have no tests to tell you whether you broke something.

Fortunately, a gold copy can bail you out of this one. It allows you to add automated regression testing by using the golden master technique. That technique takes advantage of the fact that any application with value to its users produces all kinds of output.

Steps in the Golden Master Technique

How you implement the golden master technique depends on your environment. But it always follows the same pattern, and it always starts with a gold copy.

  1. Use your current code against the gold copy to generate the output you want to guard against regressions. For example, a CSV export of an order, a PDF print of that order, or even a screenshot of it.
  2. Save that output. It’s your golden master.
  3. Make your changes.
  4. Use your new code against the gold copy to generate the “output under test” again.
  5. Compare the output you just generated to your golden master.
  6. Look for and explain any differences.
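Under the assumption that the guarded output is a simple CSV export, the steps above can be sketched like this (the function names are illustrative):

```python
# Golden master sketch: generate output from the gold copy, save it,
# then compare regenerated output against it after code changes.

def export_order_csv(order):
    """The 'output under test': any deterministic output works."""
    lines = ["item,qty,price"]
    for item in order["items"]:
        lines.append(f"{item['name']},{item['qty']},{item['price']}")
    return "\n".join(lines)

gold_copy_order = {"items": [{"name": "widget", "qty": 2, "price": 9.99}]}

# Steps 1-2: run the current code against the gold copy, save the result.
golden_master = export_order_csv(gold_copy_order)

# Steps 3-6: after changing the code, regenerate, compare, explain diffs.
def regression_check(new_output, master):
    """Return the differing line pairs; empty means no regression."""
    if new_output == master:
        return []
    return [(a, b) for a, b in zip(master.splitlines(),
                                   new_output.splitlines()) if a != b]

diffs = regression_check(export_order_csv(gold_copy_order), golden_master)
```

Here the code is unchanged between the two runs, so the comparison comes back empty, which is exactly what you expect from a pure refactoring.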

If you were refactoring, which by definition means there were no functional changes, the comparison should show that there are no differences.

If you were fixing a bug, the comparison should show a difference. The golden master would have the incorrect value, while the output from the fixed code would have the correct value. No other differences should be found.

If you were changing functionality, you can expect a lot of differences. All of them should be explicable by the change in functionality. Any differences that cannot be explained that way are regressions.

Explaining the differences requires manual assessment by a human. It’s known as the “Guru Checks Output” anti-pattern. And it needs to be done every test run if you want to stay on top of things. Marking differences as expected can help. Especially when you can customize the comparison so it won’t report them as differences.

Go Get Yourself Some Gold

Now that you know what a gold copy is and how you can use it to your advantage, it’s time for action. It’s time to start building toward the goal of a comprehensive set of test data and use it as a gold copy.

Your first step is simple: save the data from the test environment you set up for the issue or feature you’re working on now. That is going to be your gold copy. If your application uses any kind of SQL database, you could use that to generate a DML-SQL script that you can add to a repository.
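As a sketch of that suggestion, if your test environment happens to use SQLite, the Python standard library can already emit such a script and restore it into a fresh environment:

```python
# Dump a test database to a SQL script (the gold copy you commit to a
# repository), then restore it into a fresh environment in one call.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 90.0)")
conn.commit()

# iterdump() yields the SQL statements that recreate the database.
gold_copy_sql = "\n".join(conn.iterdump())

# Spinning up a fresh test environment from the gold copy:
fresh = sqlite3.connect(":memory:")
fresh.executescript(gold_copy_sql)
```

Other databases have equivalent dump tools (e.g., `pg_dump`, `mysqldump`); the point is that the script, not the live database, is the artifact you version and guard.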

Use your gold copy to set up the test environment for your next issue. Make sure you don’t (inadvertently) change your gold copy while you’re working on that issue. When you’re finished, and if you needed to add test data for the test cases of this issue, update your gold copy.

Rinse and repeat, and soon enough you’ll be well on your way to a truly useful comprehensive set of test data.

Author: Marjan Venema

This post was written by Marjan Venema. Marjan’s specialty is writing engaging copy that takes the terror out of tech: making complicated and complex topics easy to understand and consume. You’ll find samples on her portfolio. Her content is optimized for search engines, attracting more organic traffic for small businesses and independent professionals in IT and other tech industries, whom she also helps with content audits and strategy.

How Many Test Environments Do I Need? 

Having a set of test environments properly configured and managed is essential for modern software organizations. Creating and configuring such environments is part of a solid test environment management strategy. Unfortunately, as with many things in software development, this is easier said than done. There are many questions that need answering. For instance: how many test environments do I need?

The short, correct, but also totally frustrating answer is—you’ve guessed it—it depends. Like most things in our industry, there isn’t a one-size-fits-all solution.

This post features a longer, (hopefully) not frustrating version of the answer above. Answering “it depends” without explaining which things it depends on makes for a useless answer, so we won’t do that. Instead, we’ll cover the factors you have to take into account when making the decision on how many environments your organization needs. The most obvious one is probably organization size, but, as you’ll see, it’s not the only one.

Let’s begin.

What Are Test Environments?

Before we get into the factors we’ve mentioned, we have some explaining to do. Or, rather, some defining. In this section, we’ll define test environments: you’ll learn what they are and why you need them.

Of course, if you’re already experienced in managing test environments—or have enough familiarity with the term—feel free to skip to the next section with a clear conscience.

A testing environment is a setup of software, hardware, and data that allows your testing professionals to execute test cases. For the test environment to be effective, you have to configure it so that it closely resembles the production environment.

As we’ve already covered, there are many types of test environments. Which ones your organization needs depends on several factors, such as the test cases themselves, the type of software under test, and more. Since that’s the main topic of this post, we’ll get there in a minute.

But first, let’s quickly cover some of the main types of test environments available.

How Many Test Environments Do I Need? The Bare Minimum

We’re about to cover the main factors for deciding which and how many environments your organization should adopt. Before we get there, though, let’s talk about the bare minimum number of environments you need.

Development

The first obvious and indispensable one is the development environment. For some of you, it might sound weird to think of the dev environment as a testing environment, but it is. Developers should constantly test the code they write, not only manually (via building the application and performing quick manual tests) but also automatically, through unit tests.

You might consider the development environment an exception in the sense that, unlike most other environments, it doesn’t need to mimic production too closely. For instance, I have seen people argue that developers who create desktop apps shouldn’t use the best machines available. Instead, they should adopt computers that are close in configuration to those their clients use, so they can feel how the software is going to run. That’s nonsense. Developers should use the best and fastest machines their companies can afford, so their work gets done most effectively. If performance is an issue, there should be a performance testing phase (and environment) to handle that. The same goes for other characteristics of the production environment that don’t make sense for developers.

CI (Integration)

What I’m calling here the “CI environment” could also be simply called the test environment, or even integration test environment.

This is the first step in the CI pipeline after developers commit their code and push it to the server. The CI server builds the application, running whatever additional steps are appropriate, such as documentation generation, version number bumping, and so on. Just building the code is already a type of test: it might help detect dependency issues, eliminating the “but it works on my machine!” problem.

If the application is successfully built, unit/integration tests are executed. This step is vital since running the entire test suite frequently might be too slow for developers in their own environments. Instead, they might run only a subset of tests locally, and the CI server will take care of running the whole suite after each check-in/push.
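
As an illustrative sketch (the test names and the fast/slow split are invented for the example), the division of labor between a developer’s machine and the CI server might look like this:

```python
# Minimal sketch of test selection: developers run a fast subset locally,
# while the CI server runs the whole suite after every check-in/push.
# The test names and the "fast" flag are illustrative, not tied to any tool.

def select_tests(all_tests, context):
    """Return the tests to run for the given context ('local' or 'ci')."""
    if context == "ci":
        return list(all_tests)  # CI runs the whole suite
    return [t for t in all_tests if t["fast"]]  # quick local feedback

tests = [
    {"name": "test_parser_unit", "fast": True},
    {"name": "test_db_integration", "fast": False},
    {"name": "test_api_contract", "fast": False},
]

local_run = select_tests(tests, "local")  # only the fast unit test
ci_run = select_tests(tests, "ci")        # all three tests
```

The point of the split is feedback speed: the developer keeps a tight edit-test loop, while the CI environment provides the exhaustive safety net.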

QA

Then we have what we’ll call the QA environment. Here is where end-to-end tests are run, manually, automatically, or both. End-to-end tests, also called functional tests, are the tests that exercise the whole application, from the UI to the database and back again. This type of testing checks whether the integration between different modules of the software works, as well as the integrations between the software and external concerns, such as the database, the network, and the filesystem. As such, it’s an essential type of testing for most types of software.

Production

Finally, we have the production environment. For many years “testing in production” was seen as the worst sin of testing. Not anymore. Testing in production is not only forgivable but desirable. Practices like canary releases are vital for companies that deploy several times a day, since they allow shorter release cycles while keeping the quality of the application high. A/B testing can also be seen as a form of testing in production, and it’s essential for organizations that need to learn about their users’ experience with their software. Finally, some forms of testing, like load testing, would be useless if performed in any environment other than production.
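
To make the canary idea concrete, here’s a toy sketch of a canary router. The class, thresholds, and traffic split are invented for illustration; real canary deployments are handled at the infrastructure level:

```python
import random

# Toy canary-release router: send a small fraction of production traffic to
# the new version, and fall back to the stable version automatically if the
# canary's error rate climbs. All numbers here are example values.

class CanaryRouter:
    def __init__(self, canary_share=0.05, max_error_rate=0.02):
        self.canary_share = canary_share        # 5% of traffic to the canary
        self.max_error_rate = max_error_rate    # roll back above 2% errors
        self.canary_requests = 0
        self.canary_errors = 0

    def error_rate(self):
        if self.canary_requests == 0:
            return 0.0
        return self.canary_errors / self.canary_requests

    def record(self, ok):
        """Record the outcome of one request served by the canary."""
        self.canary_requests += 1
        if not ok:
            self.canary_errors += 1

    def choose_version(self):
        if self.error_rate() > self.max_error_rate:
            return "stable"  # canary looks unhealthy: stop sending it traffic
        return "canary" if random.random() < self.canary_share else "stable"

router = CanaryRouter()
router.record(ok=True)
router.record(ok=False)  # 50% error rate, well above the 2% threshold
```

After the failed request, `choose_version()` routes everything back to the stable version, which is the essence of how canaries limit the blast radius of a bad deploy.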

Which and How Many Environments Do You Need? Here Are the Criteria You Should Use to Decide

Having covered the bare minimum environments most organizations need, it’s time to move on. Now we’ll cover the main factors you need to weigh when deciding your testing approach. Let’s go.

Organization Size

The size of the organization matters when deciding which environments it needs. One of the ways it matters is personnel. Since larger companies have more people, they can afford entire teams or even departments dedicated to designing, performing, and maintaining certain types of testing, which includes taking care of the required environments.

Companies of different sizes also have different testing needs due to the software they create. It’s likely that larger companies produce more complex software, which would demand a larger pipeline. The inverse is also likely true for smaller companies.

Finally, organization size often correlates with the stage in which the company finds itself. That’s what we’re covering next.

Organization’s Life Phase

Do you remember when Facebook’s motto was “Move fast and break things”? It’s been a few years since they changed it to “Move fast, with stable infra.” While the new motto is definitely not as catchy as the previous one—some might say it’s even boring—it makes sense, given where the company stands now.

Startups have different testing needs than most established companies. Their priorities aren’t the same since they’re at very different points in their lifecycles.

For startups, beating their competitors to market might be more valuable than releasing flawless products. Established companies, on the other hand, will probably place long-term stability much higher on the scale. They have their reputation at stake. If they’re public, they have to generate results for shareholders.

Therefore, more established companies will usually employ a testing strategy that adopts more environments; it’s probably more expensive and definitely slower, but such a strategy might give them the reassurance they need. On the other hand, startups that value time to market might choose a more streamlined pipeline, with fewer environments. Such an approach might be cheaper and easier to build and manage, but it will give fewer guarantees than the heavyweight approach of the enterprise.

Software Type

The type of software developed is a huge factor when it comes to testing. A database-based web application with a rich user interface will require UI and end-to-end testing, for instance, while a library will not.

Similarly, user-acceptance testing makes sense for applications targeted at final users. For libraries and frameworks, unit and integration tests might suffice. You might have even more specific needs, such as integration with custom hardware, which can require more environments.

The type of software will dictate the required types of testing, which, in turn, will help you decide on the environments.

Domain or Industry

Some industries are highly regulated, while others are lightly regulated or not regulated at all. That also has a huge impact on an organization’s testing approach. Domains like financial services and healthcare come to mind.

Your company might need to adhere to rules, regulations, or norms that govern whatever industry it operates in. That might require you to have an additional environment in order to test that the product complies with these rules.

Time for the Verdict

So, based on all we’ve just seen, how does one choose which test environments their organization needs? As promised in the title, we’ll now offer you a quick recipe, or a step-by-step guide.

  1. Start with the basics. Meaning, start with the bare minimum environments we’ve mentioned and then build upon them as your requirements change.
  2. Consider the organization’s size and stage in life. Take into account the values and priorities of the organization (time to market vs. stability, disruption vs. market share, etc.), available personnel, and budget.
  3. Take into account the type of software you make and the industry you belong to.

With that in mind, make your decision. If your organization makes a picture editing app for Android and iOS, you might want to have (besides the obvious dev and prod):

  • The CI environment to perform unit and integration tests.
  • A QA environment to help you with end-to-end/integration tests, using both emulation and real devices.
  • An acceptance testing environment, where stakeholders give the final sign-off for the app’s release.

But if you’re creating a banking application, you could add an additional security and compliance environment. (Keep in mind that this is just an example. I’m not well-acquainted with the financial domain.)

Final Considerations

Test environment management is vital for the modern software delivery process. One of the decisions a test environment manager needs to make is how many environments to use. As you’ve seen, there is no one-size-fits-all answer, but that’s no reason to despair. There are objective criteria you should use to help you with your decision.

The journey isn’t easy, but this blog has many articles that can help you master test environment management and take your organization’s testing approach to new levels.

Author

This post was written by Carlos Schults. Carlos is a .NET software developer with experience in both desktop and web development, and he’s now trying his hand at mobile. He has a passion for writing clean and concise code, and he’s interested in practices that help you improve app health, such as code review, automated testing, and continuous build.


Types of Testing Environments

Today, we’re talking about types of testing environments. But first, let’s establish some basic definitions.

Software testing is a process that verifies that the software works as expected in test environments. The verification is done through a set of automated or manual steps called test cases.

A test environment is a combination of hardware, software, data, and configuration that’s required to execute test cases. You have to be sure to configure the testing environments to mimic production scenarios.

Types of Test Environments

There are many types of test environments. Which ones you’ll need depends on the test cases and the application under test. A thick-client desktop application serves a different need than a web application does. As a result, the test environments required for a desktop application are different than those for a web application.

This post is a complete guide on types of testing environments and how often they’re used. The post also explains how testing environments fit into the pace of modern software development practices.

1. Integration Testing Environment

The first on our list of testing environment types is the integration testing environment. 

In this type of environment, you integrate the individual software modules and then verify the behavior of the integrated system. A set of integration tests is used to check that the system behaves as specified in the requirements document. In an integration testing environment, you can integrate one or more modules of your application and verify their functional correctness.

The environment setup depends on the type of application and the components being tested. Setting up this environment usually involves ensuring the availability of the right hardware, the right software version, and the right configuration. Integration testing environments should mimic production scenarios as closely as possible. This includes the configuration and management of application servers, web servers, databases, and all the infrastructure needs of the application.

With the modern DevOps approach to software development, where continuous testing is a norm, an integration testing environment will probably be used daily or multiple times a day. Therefore, the ability to recreate the environment at will is paramount to an effective software delivery process.
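
One way to make environments recreatable at will is to describe them as data kept in version control. The sketch below is hypothetical: the service names and the docker-compose-style command are assumptions for illustration, and the command is only constructed, not executed:

```python
from dataclasses import dataclass, field

# Sketch of an environment definition that lives in version control, so the
# integration environment can be recreated on demand instead of being
# hand-built. The services and command shape are illustrative assumptions.

@dataclass
class TestEnvironment:
    name: str
    services: list = field(default_factory=list)

    def up_command(self):
        # Translate the definition into the shell command that would create
        # the environment (not executed here; shown for illustration only).
        return ["docker", "compose", "-p", self.name, "up", "-d"] + self.services

integration = TestEnvironment(
    name="integration",
    services=["app-server", "web-server", "database"],
)
cmd = integration.up_command()
```

Because the definition is plain data, tearing the environment down and spinning up a fresh, identical copy before each test run becomes a one-command operation rather than a manual chore.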

2. Performance Testing Environment

Next on our list is a performance testing environment. You use this environment to determine how well a system performs against performance goals. The performance goals in question can be concurrency, throughput, response time, and stability.

Performance testing is a very broad term and usually includes volume, load, stress, and breakpoint testing. A good performance testing environment plays a crucial role in benchmarking and identifying bottlenecks in the system.

The setup of a performance testing environment can be fairly complex. It requires careful selection and configuration of the infrastructure. You’ll run your performance tests on multiple environments with configurations that vary by

  • Number of CPU cores
  • Size of RAM
  • Number of concurrent users
  • Volume of data

You’ll then document and publish the results as system benchmarks and compare this with the performance goals of the software.
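
Comparing a benchmark run against the performance goals can be as simple as the following sketch (the metric names and numbers are invented for the example):

```python
# Illustrative comparison of one benchmark run against performance goals.
# The metrics and thresholds are made-up example values.

goals = {"p95_response_ms": 250, "throughput_rps": 500}

def failing_metrics(results, goals):
    """Return the names of metrics that miss their goals."""
    failures = []
    if results["p95_response_ms"] > goals["p95_response_ms"]:
        failures.append("p95_response_ms")  # responses too slow
    if results["throughput_rps"] < goals["throughput_rps"]:
        failures.append("throughput_rps")   # too few requests per second
    return failures

run = {"p95_response_ms": 310, "throughput_rps": 620}
failing = failing_metrics(run, goals)  # response time misses its goal
```

Publishing results in this goal-versus-actual form makes regressions obvious from one benchmark run to the next.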

After that, in a performance testing environment, the software teams take a closer look at the system behavior and related events such as scaling and alerting. From there, they’ll carefully tune them if needed.

Performance tests are usually time-consuming and expensive. Therefore, setting up performance testing environments and running these tests for every change can be counterproductive and is usually not recommended. That’s why software teams only run these performance tests on a per-requirement basis, which could be once a month, for every major release, or whenever there are significant changes in the application.

3. Security Testing Environment

Let’s now discuss security testing environments. When working with this type of environment, security teams try to ensure that the software doesn’t have security flaws and vulnerabilities in the areas of confidentiality, integrity, authentication, authorization, and non-repudiation.

Organizations usually engage a combination of internal and external (from a different organization) security experts who specialize in identifying security vulnerabilities in software. During this process, it’s crucial to establish a thorough scope that defines exactly which systems will be targeted, which methods will be used, and when the assessment will take place.

As part of a good security testing environment setup procedure, you’ll want to establish some ground rules, such as

  • Have an isolated test environment.
  • Have non-disclosure agreements in place.
  • Don’t leave the system in a worse state.
  • Don’t touch production data.

This is especially applicable when engaging external security companies.

Different parts of security tests can happen at different frequencies and different stages of the software delivery process. A successful software team usually executes vulnerability assessments, scans, audits, and any other non-invasive tests more frequently when compared to invasive tests like penetration tests. Automating security tests that are non-invasive and running them as often as possible, perhaps alongside integration tests, helps maintain a security baseline.
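
For instance, one cheap non-invasive check that can run alongside integration tests is verifying that responses carry an agreed-upon set of security headers. The header list below is an example, not a complete policy:

```python
# Sketch of a non-invasive security baseline check: verify that responses
# carry the security headers the team has agreed on. The list of required
# headers is an illustrative example, not a complete security policy.

REQUIRED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
]

def missing_security_headers(response_headers):
    """Return the required headers absent from a response (case-insensitive)."""
    present = {h.lower() for h in response_headers}
    return [h for h in REQUIRED_HEADERS if h.lower() not in present]

headers = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000",
}
gaps = missing_security_headers(headers)  # two headers are missing
```

A check like this is safe to run on every build, which is exactly why it belongs with the frequent, automated tier of security testing rather than with penetration tests.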

On the other hand, executing advanced invasive tests requires a good understanding of the software and its potential attack surfaces. Carrying out sophisticated attacks on the software through penetration testing requires the expertise of security specialists. This is not something you can easily automate, and it requires a lot of effort. Therefore, you’ll run these tests less frequently.

4. Chaos Testing Environment

According to the book Chaos Engineering, “Chaos engineering is the discipline of experimenting on a system to build confidence in the system’s capability to withstand turbulent conditions in production.”

Understanding how the failures of individual parts of the system can potentially cascade and ruin the whole system is the ultimate goal of chaos testing. By using fault injection techniques, software teams build an in-depth understanding of critical dependencies of their system and how software fails.

With that definition in mind, let’s talk about the final environment on our list: the chaos testing environment.

If you have a modern web application with a microservice architecture, where different independent services make up the application, then setting up a reliable chaos testing environment is crucial. These environments must be set up in the same way as your production environments are, and they must be configured for scale and high availability.

Having an environment to test the high-availability, disaster recovery, and business continuity provisions configured in each service is crucial to improving the reliability of your whole system. It’s equally important to test how the dependent services behave in these failure modes. Disaster recovery drills or game days are excellent opportunities to run these tests and identify the potential weak links in modern, large-scale applications. Software teams usually run chaos experiments less frequently, mostly alongside the performance tests.
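
The fault-injection idea can be illustrated with a toy wrapper: with some probability, a call to a dependency is replaced by a failure, so you can observe whether the caller degrades gracefully. Real chaos tooling works at the infrastructure level; this sketch only shows the principle:

```python
import random

# Toy fault injection in the spirit of chaos testing: wrap a dependency call
# so it fails with a given probability, then check that the caller survives.
# The functions and failure mode are invented for the example.

def with_fault_injection(func, failure_rate, rng=random):
    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected dependency failure")
        return func(*args, **kwargs)
    return wrapped

def fetch_recommendations(user_id):
    return ["item-1", "item-2"]  # stand-in for a downstream service call

def page_for(user_id, fetch):
    # A resilient caller should survive a failing dependency with a fallback.
    try:
        return fetch(user_id)
    except ConnectionError:
        return []  # degraded page, but the system as a whole keeps working

always_failing = with_fault_injection(fetch_recommendations, failure_rate=1.0)
result = page_for("u-42", always_failing)  # empty fallback, no crash
```

The experiment’s value is in the observation: if `page_for` had no fallback, the injected failure would cascade, which is precisely the kind of weak link a chaos environment exists to expose.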

Other Considerations

Finally, I’d like to close out with some other considerations you should take into account:

  • While there are other types of tests, such as usability testing, accessibility testing, and testing for internationalization and localization, these tests don’t need a separate testing environment. They can reuse the integration testing environment or any of the other setups.
  • The number of test environments you have to manage also depends on the number of platforms that the software needs to support and be compatible with. Factors such as supported operating systems, processor architectures, and different screen sizes all come into play.
  • There is, of course, no place like production, which is itself the ultimate test environment for any application. Product teams engage in the responsible collection of user data in production, which helps them gather telemetry about how users engage with their applications. They then use practices like A/B testing and feature toggles to improve their chances of success.
  • The data used in different environments also needs to be realistic. Having tools to back up and simultaneously anonymize and hide personally identifiable data can be very useful in testing scenarios.

Managing Test Environments

Test environment management is a crucial aspect of the software delivery process. Incorrect environment setup leads to inconsistent test results. This leads to friction and blame among the stakeholders, who ultimately lose confidence in the test results.

This post described the commonly used test environments and things to consider when setting up and managing them. The ability to spin up testing environments on demand is crucial to successfully managing your test environments. You can read more on this topic in our post called “Are you TEM Savvy,” which is an excellent piece full of useful tips on managing reliable and consistent test environments.

 

Author

This post was written by Gurucharan Subramani. Gurucharan is a software engineer who likes to get .NET, Azure, and Azure DevOps to not just meet but to also dance. Some days, Guru is a dev; other days, he's ops. And he's frequently many things in between. He's a community advocate who leads the Bangalore Azure User Group and is a member of the .NET Foundation.


Which Test Data Management Method Is Best?

Introduction

Setting up a great test data management strategy is a crucial step for taking your test automation process to its fullest potential. However, many software professionals are still not familiar with the concept of test data management (TDM). Even those who are familiar with TDM might have a hard time putting it into practice. Why is that?


When it comes to test data management, the “what” is relatively straightforward, but we can’t say the same about the “how.” As it turns out, there are several competing methods of managing test data. Which one should you choose? As you’ll see in this post, this isn’t a one-approach-fits-all kind of situation. Each method has its unique strengths and weaknesses and might be more or less appropriate for your use case.

Today’s post will cover some of the existing test data management approaches, listing the advantages and disadvantages of each one. Let’s get started.

Replicating Data From Production

The first approach we’re going to cover in this post is perhaps the most popular one, at least for beginners. And that makes perfect sense if you think about it. When you first encounter the challenge of coming up with data to feed your testing processes, it isn’t too far-fetched to think you should just copy data from production and be done with it. It’s the easiest way to obtain data that is as realistic as possible. You just can’t get more real than production.

Not everything is a bed of roses when it comes to production data replication. Quite the opposite, actually. The easy access to data is pretty much the only advantage this method has. And what about the disadvantages? These, sadly, abound.

Here Be Dragons: Some Downsides of the Approach

Here’s the first problem: replicating data from production continues to be mostly a manual process. Sure, you can come up with scripts and automated jobs to do most of the heavy lifting for you. But keep in mind that generating the data isn’t the whole job of a TDM solution. “Availability” is an integral part of the package. That means the TDM tool is responsible for making sure the data is available where it’s needed, at the right time. A naive approach based on scripts might not be sufficient to meet the demands of a complex testing process, forcing you to fall back on a manual process.

Secondly, production replication doesn’t lend itself well to negative test cases. It’d be out of the scope of this post to give a lengthy explanation of negative testing. In a nutshell, negative test cases are tests that validate the system against invalid data. Basically, you throw faulty data at your application to check how well it can handle it. Since production data would (hopefully) be in good shape, this approach isn’t well suited to this type of testing.
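
A quick illustration of the difference: negative test cases need data that’s deliberately broken, which copied production data won’t give you. The validation function below is a stand-in for whatever the application under test does:

```python
# Example of negative test cases: feed the system deliberately invalid data
# and assert that it rejects it cleanly. The age-parsing function is an
# invented stand-in for the application's real validation logic.

def parse_age(raw):
    value = int(raw)  # raises ValueError for non-numeric input
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value

def rejects(raw):
    """True if the input is rejected, which is what a negative test expects."""
    try:
        parse_age(raw)
        return False
    except ValueError:
        return True

# Production-like data would pass validation; negative cases need inputs
# that are designed to fail, which you must craft yourself.
bad_inputs = ["-5", "banana", "9000"]
all_rejected = all(rejects(raw) for raw in bad_inputs)
```

Since healthy production data, by definition, passes validation, inputs like these have to be manufactured rather than replicated.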

Production data replication also doesn’t work…if there is no production data for you to replicate in the first place! What should you do when you need to test an application that is still in the alpha stage of development, or even a prototype? Since no one is actually using the application, there is no production data for you to copy. That’s a severe downside of this approach, since every new application will face this problem.

Here Be Dragons (For Real): Legal Implications

Finally, we have the most serious downside of this approach—data sensitivity. Data compliance is a crucial part of the modern IT landscape since companies are responsible for the data they store and manipulate. It’s up to them to protect their client’s data, ensuring it’s not abused. When replicating data from production, software organizations run the risk of failing to comply with privacy acts, such as GDPR. And that can bring catastrophic consequences, legal, financial, and reputation-wise.

Data Masking

In order to solve the downsides of production data replication (a.k.a. the naive approach), test data management tools have come up with more sophisticated methods. One of the most popular of these approaches is test data masking. As its name implies, tools that adopt this approach enable their users to apply masks to production data. Such masks remove personally identifiable information (PII) from the data.
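
A minimal sketch of the idea, assuming a hypothetical record layout: copy a production record but replace personally identifiable fields with stable, non-reversible stand-ins. The field list and hashing scheme are illustrative choices, not a prescription:

```python
import hashlib

# Minimal illustration of data masking: keep non-sensitive fields as-is and
# replace PII fields with short, non-reversible stand-ins. The PII field
# list and the hashing scheme are assumptions made for this example.

PII_FIELDS = {"name", "email", "ssn"}

def mask_record(record):
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            # A hash-derived token keeps masked values stable across runs,
            # so relationships between rows survive the masking.
            digest = hashlib.sha256(str(value).encode()).hexdigest()[:8]
            masked[key] = f"masked-{digest}"
        else:
            masked[key] = value
    return masked

prod_row = {"name": "Ada Lovelace", "email": "ada@example.com", "plan": "pro"}
test_row = mask_record(prod_row)
```

The masked copy preserves the shape and the non-sensitive content of production data, which is exactly what makes this approach an improvement over raw replication.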

Data masking is an improvement over naive production data replication, for sure. But the approach is not without its downsides.

First, consider the “time” variable. Data masking doesn’t reduce the time spent generating (or rather, copying) the data for testing. On the contrary, it increases it, because now you have a new step added to the process. You could argue—and I’d gladly agree—that it’s time well spent, but it’s more time nonetheless.

Then, you also have to keep in mind that data masking isn’t a standalone approach. Instead, it complements the previous approach by solving one of its most serious issues. The problem is that data masking can’t fix every problem the production replication approach has. For instance, if you intend to test an application still in development, for which there is no production data at all, data masking is powerless to help you.

Synthetic Data Generation

Synthetic data generation is yet another method of test data management. As its name suggests, this approach consists of generating “fake”—or synthetic—data from a data model. Tools that implement this approach are able to preserve the format of the data. The values themselves, though, are completely disconnected from any original data. What does that imply?

The implication is that synthetic data generation’s greatest asset is simultaneously its most significant downside. By populating the database with entirely made-up values, the approach dramatically reduces (virtually eliminates) the risk of exposing sensitive data. On the other hand, depending on the tool’s sophistication—or lack thereof—you might end up with data that feels “fake-y.” One of the goals of an excellent TDM strategy is to provide data that is as production-like as possible.

To wrap up, let’s talk about the biggest advantage of synthetic data generation: speed. Once you have a model in place, you can quickly generate data from it, effectively eliminating the time delays that plague other approaches.
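
Here’s a small sketch of the approach: values come from a data model (a mapping of field names to generators), not from any real user, and a seeded generator makes runs reproducible. The schema is invented for the example:

```python
import random
import string

# Sketch of synthetic data generation: a schema maps field names to value
# generators, so rows preserve production's shape while containing no real
# user data. The schema and field choices are invented for this example.

def random_name(rng):
    return "".join(rng.choices(string.ascii_lowercase, k=8)).title()

def random_email(rng):
    user = "".join(rng.choices(string.ascii_lowercase, k=6))
    return f"{user}@example.com"

SCHEMA = {
    "name": random_name,
    "email": random_email,
    "age": lambda rng: rng.randint(18, 90),
}

def generate_rows(schema, n, seed=0):
    rng = random.Random(seed)  # seeded, so every run produces the same data
    return [{field: gen(rng) for field, gen in schema.items()} for _ in range(n)]

rows = generate_rows(SCHEMA, n=100)
```

Because the generator is seeded, the same call always yields the same dataset, which makes test failures reproducible as well.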

Test Data Management Is More Than Test Data Generation

In this post, we’ve covered some of the most used approaches to generate test data. The list is definitely not exhaustive; there are many more methods that we didn’t cover. However, many of them are variations or combinations of the approaches we did talk about.

Another thing to keep in mind is that test data management is much more than just generating test data. TDM is responsible for ensuring the quality of the test data, its availability, and also its security. In other words: the data must be good, and it must be available at the right place, at the right time. And bad actors shouldn’t be allowed to expose it or misuse it in any way. That’s why, depending on the needs of your organization, you should consider adopting a full-fledged data compliance solution, which can not only supply your data generation needs but also make sure your data adheres to the compliance requirements you must follow.

Author Carlos Schults

This post was written by Carlos Schults. Carlos is a .NET software developer with experience in both desktop and web development, and he’s now trying his hand at mobile. He has a passion for writing clean and concise code, and he’s interested in practices that help you improve app health, such as code review, automated testing, and continuous build.