Why Is Test Data Management Critical to Software Delivery?

Imagine you are developing a system that will be used by millions of people. A system like that has to be thoroughly tested for any type of error that could cause it to break in production. But what’s the best way to test for every possible failure caused by bugs? This is where test data management comes in.

In this post, I will explain why test data management is critical in software delivery. To develop high quality software products, you have to continuously test the system as it’s being developed. Let’s dive straight into understanding how test data management solves this problem.

 

What Is Test Data Management?

Well, in simple terms, test data management is the creation of data sets that are similar to the actual data in the organization’s production environment. Software engineers and testers then utilize this data to test and validate the quality of systems under development.

Now, you might be wondering why you need to create new data. Why not just use the existing production data? Well, data is essential to your organization, so you should protect it at all costs. That means developers and testers shouldn’t have access to it. This isn’t a matter of trust but of security: the fewer people who can touch production data, the smaller the risk of a breach. And as you know, a data breach can cause serious losses for an organization.

How Can You Create Test Data?

So, now that we know why we need test data that is separate from our production data, how can we create it?

The first thing you must do is understand the type of business you are dealing with. More specifically, you need to know how your software product will work and what kind of end users will use it. Knowing this makes it easier to prepare test data. Keep in mind that test data has to be as realistic as the actual data in the production environment.

You can use automated tools to generate test data. Another way of creating test data is by copying and then masking the production data that your actual end users generate. You also have to be creative here and build different types of test data sets; you can’t rely only on masked production data for testing.
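To make that concrete, here’s a minimal sketch in Python of what “different types of test data sets” might look like for a hypothetical signup form. The field names and values are purely illustrative, not taken from any real system:

```python
# Hypothetical signup-form data sets: typical, edge, and invalid cases.
typical = [{"username": "jsmith", "password": "Str0ng!pass"}]

edge = [
    {"username": "a" * 64, "password": "P4ss!" + "x" * 120},  # length limits
    {"username": "ü-名前", "password": "unicode-Ok1!"},        # non-ASCII input
]

invalid = [
    {"username": "", "password": ""},                          # blank fields
    {"username": "bob; DROP TABLE users", "password": "x"},    # hostile input
]

test_data_sets = {"typical": typical, "edge": edge, "invalid": invalid}
```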

Benefits of Test Data Management in Software Delivery

Test data management has many benefits in software delivery. Here are some of them, applicable to any software development environment.

High Quality Software Delivery

When you apply test data management to software delivery, it gives software developers and testers the ability to test systems thoroughly and validate them with confidence. This enhances the security of the system and can prevent failures in the production environment. Testing a system with good test data gives you much greater assurance that it will perform as expected in production, without defects or bugs.

Faster Production and Delivery of Software Products to the Market

Imagine that, after months of hard work, you’ve just released a software application to the market, only for it to fail there. That’s not only a waste of resources; it’s also painful.

A system that’s well-tested using test data will have a shorter production time and perform well once released, because it’s much more likely to behave the way it was intended to. If the system fails in production because it wasn’t tested well, it has to be redone, which wastes the organization’s time and resources.

Money Needs Speed

Test data management is critical when it comes to software delivery speed. Having good quality data that’s similar to production data makes development easier and faster. System efficiency is cardinal for any organization, and test data management helps ensure that a system will be efficient when released to production. As a result, you can start generating revenue as soon as you deploy the system.

Imagine having to redo a system after release because users discover some bugs. That can waste a lot of time and resources, and you may also lose the market for that product.

Testing With the Correct Test Data

Testing with good quality test data helps ensure that the tests you run in the development phase reflect how the application will behave in production. For example, you might verify that the system accepts supported data by entering every kind of username and password a user could plausibly type into the input fields.

No matter how many times you test the software, if the test data is not correct, you should expect the software to fail in the production phase. This is why it is always important to ensure that test data is of great quality and resembles your actual production data.

Bug and Logic Fixes

How can you know whether a text box accepts invalid input, such as unsupported characters or blank fields? You find out by validating the system through testing.
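Here’s a minimal sketch of that kind of validation test, written as a parametrized pytest. The validate_username function and its rule are assumptions made for illustration; in practice you’d exercise your application’s real validation logic:

```python
import re

import pytest

def validate_username(name: str) -> bool:
    # Stand-in for the application's real rule: 3-20 word characters.
    return bool(re.fullmatch(r"\w{3,20}", name))

@pytest.mark.parametrize("name, expected", [
    ("alice", True),        # supported input
    ("bob_42", True),
    ("", False),            # blank field
    ("ab", False),          # too short
    ("bad name!", False),   # unsupported characters
])
def test_username_validation(name, expected):
    assert validate_username(name) is expected
```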

The whole point of having test data in software delivery is to make sure that the software performs as expected. Additionally, you need to make sure that the same tests will pass in production and leave no loopholes that could damage the organization’s reputation. Test data is therefore a critical part of the software delivery life cycle, as it helps you identify errors and logical problems in the system and fix them before releasing the software.

For example, imagine a lending system that makes incorrect calculations, inflating the interest rate by some percentage. That’s unfair to borrowers and can backfire on the lending company.
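A single regression test can guard against exactly that kind of silent markup. This sketch assumes a simple, non-compounding interest function as a stand-in for the real calculation:

```python
def total_interest(principal: float, annual_rate: float, years: int) -> float:
    # Simple (non-compounding) interest, used here as a stand-in.
    return principal * annual_rate * years

def test_interest_is_not_inflated():
    # 10,000 at 5% for 2 years must be exactly 1,000 -- no hidden increase.
    assert total_interest(10_000, 0.05, 2) == 1_000
```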

Earning Trust

Trust is earned, and if you want to earn it from end users or management, you have to deliver a software product that’s bug-free and works as expected. Every software development and testing team should utilize test data management, because it enables teams to deliver software products that stand out and earn that trust. After all, you can’t ship an error-prone system to the market and expect happy users.

Why Test Data Management Matters

Well, the whole essence of test data management is to make sure you test the software against all the scenarios that matter, greatly reducing the chance that it fails in production.

By testing with data that’s as realistic as production data, you gain assurance that the software application will function as expected in a production environment. This strengthens the organization’s relationship with clients that will be using the system because it will have fewer bugs.

Another benefit is the speed of software delivery. Test data management shortens production time because testing takes place while the software is being developed. That way, you detect errors at an early stage of the software development life cycle and fix them before release.

This reduces the chances of fixing bugs in production and of rollbacks. The earlier you detect bugs, the easier and cheaper they are to fix. Early detection also reduces the organization’s compliance and security risks.

Test data management also reduces costs, as it speeds up the software development life cycle. Money needs speed, and the market is always changing. Without test data management, bugs might delay the release of your software product, and you might end up shipping it only after market demand has moved on.

Summary

In simple terms, test data is the data used to test a software application during the software testing life cycle. Test data management, in turn, is the process of administering the data needed throughout that cycle.

You can’t deny that test data management is an essential part of testing and developing software. It plays a crucial role in helping you produce high quality software that’s bug-free and works as expected.

You should take test data management seriously and apply it when delivering software. If you do so, your organization will gain more revenue because you’ll deliver higher quality software products. Higher quality products make the customers happy instead of giving them a reason to complain about some bug.

Author

This post was written by Mathews Musukuma. Mathews is a software engineer with experience in web and application development. Some of his skills include Python/Django, JavaScript, and Ionic Framework. Over time, Mathews has also developed an interest in technical content writing.


What Are Test Data Gold Copies and Why You Need Them

You lean back in your chair with a satisfied grin. You did it. It wasn’t easy, but you did it. You diagnosed and fixed the bug that kept defying your team. And you have the unit tests to prove it.

The grin slowly fades from your face as you realize that you still need your code to pass the integration tests. And you need to get data to use in them. Not your favorite activity.

You can put that grin back on your face because there is another way: using a gold copy.

Read on to learn what a gold copy is and why you want to use one. You will also find out how it can help you work on an application with low test coverage. You know, the dreaded legacy systems.

What Is a Gold Copy

In essence, a gold copy is a set of test data. Nothing more, nothing less. What sets it apart from other sets of test data is the way you use and guard it.

  • You only change a gold copy when you need to add or remove test cases.
  • You use a gold copy to set up the initial state of a test environment.
  • All automated and manual tests work on copies of the gold copy.

A gold copy also functions as the gold standard for all your tests and for everybody testing your application. It contains the data for all the test cases that you need to cover all the features of your product. It may not start out as comprehensive, but that’s the goal.
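In practice, “tests work on copies of the gold copy” can be as simple as a fixture that clones the data set before every test. Here’s a minimal pytest sketch, assuming the gold copy is a SQLite file named gold_copy.db that contains an orders table:

```python
import shutil
import sqlite3

import pytest

@pytest.fixture
def test_db(tmp_path):
    # Each test gets a throwaway copy; the gold copy itself is never touched.
    working_copy = tmp_path / "test.db"
    shutil.copy("gold_copy.db", working_copy)
    conn = sqlite3.connect(working_copy)
    yield conn
    conn.close()

def test_orders_present(test_db):
    count = test_db.execute("SELECT count(*) FROM orders").fetchone()[0]
    assert count > 0  # the gold copy should contain the order test cases
```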

Building a comprehensive gold copy isn’t easy or quick. But it’s definitely worth it, and it trumps using production data almost every time.

Why You Don’t Want to Test in Production

Continuous delivery advocates rave about testing in production. And yes, that has enormous benefits. However:

  • It requires the use of feature toggles to restrict access to new features and changed functionality.
  • Running the automated tests in your builds against a production environment is not going to make you any friends.
  • The sheer volume of production data usually is prohibitive for a timely feedback loop.
  • Giving developers access to production data can violate privacy and other data regulations.

There’s more:

  • Production data changes all the time, and its values are unpredictable, which makes it unsuitable as a base for automated testing.
  • Finding appropriate test data in production is a challenge. Testing requires edge cases, while users, and thus their data, tend to be much more alike than they’d like to admit.
  • To comply with privacy and other data regulations, extracts need to be anonymized and masked.

Contrived Test Data Isn’t Half as Bad as It Sounds

Contrived examples usually mean that you wouldn’t encounter the example in the real world. However, when it comes to testing, contrived is what you want. A contrived set of test data:

  • has only one purpose—verifying that your application works as intended and expected and that code changes do not cause regressions
  • contains a limited amount of data, enabling a faster feedback loop even for end-to-end tests
  • can be made to be self-identifying and self-descriptive to help understand what specific data is meant to test
  • contains edge cases that will trip you up in the real world but are generally absent from production data by their very definition
  • can be built into a comprehensive, optimized, targeted set of data that fully exercises your application

Of course, production data can be manipulated to achieve the same. But extracting it stresses production, and manipulating it takes time and effort. And you really don’t want to be doing that again and again and again.

That’s why you combine contrived data and gold copies. You start your gold copy with an extract from production data that is of course anonymized and otherwise made to conform to privacy and data regulations. Over time, you manipulate it into that optimized, targeted set of data. But using that initial set of test data as a gold copy will bring you benefits immediately.

Benefits of Gold Copies

In addition to the benefits of contrived data, using a gold copy gets you these benefits:

  • You can easily set up a test environment with a comprehensive set of test data.
  • You can easily revert the data in a test environment to its original state.
  • You can automate spinning up test environments.
  • You can add automated regression testing to legacy systems.

Everyone working on your application will appreciate it. They no longer have to hunt for good data to use in their test cases. And they no longer have to create test data themselves. That’s a good thing, because it’s incredibly easy to create test data and tests that produce false positives (i.e., tests that succeed when they should fail). You only have to use the same values a tad too often.

The ability to automate spinning up a test environment is what makes using a gold copy so invaluable for large development shops and shops that need to support many different platforms. Just imagine how much time and effort can be saved by providing teams and individuals with comprehensive, standard test data in an automated way, for example by using containers and a test data management tool like Enov8’s.

Finally, gold copies can help reduce the headaches and anxiety of working with legacy code. Here’s how.

Slaying the Dreaded Legacy Monster

Any system that doesn’t have enough automated unit and integration tests guarding it against regressions is a legacy system. Such systems are hard to change without worrying.

The lack of tests, especially the lack of unit tests, allowed coding practices that now make it hard to bring a legacy system under test. Because bringing it under test requires refactoring the code. And you can’t refactor with any confidence if you have no tests to tell you if you broke something.

Fortunately, a gold copy can bail you out of this one. It allows you to add automated regression testing by using the golden master technique. That technique takes advantage of the fact that any application with value to its users produces all kinds of output.

Steps in the Golden Master Technique

How you implement the golden master technique depends on your environment. But it always follows the same pattern, and it always starts with a gold copy.

  1. Use your current code against the gold copy to generate the output you want to guard against regressions. For example, a CSV export of an order, a PDF print of that order, or even a screenshot of it.
  2. Save that output. It’s your golden master.
  3. Make your changes.
  4. Use your new code against the gold copy to generate the “output under test” again.
  5. Compare the output you just generated to your golden master.
  6. Look for and explain any differences.

If you were refactoring, which by definition means there were no functional changes, the comparison should show that there are no differences.

If you were fixing a bug, the comparison should show a difference. The golden master would have the incorrect value, while the output from the fixed code would have the correct value. No other differences should be found.

If you were changing functionality, you can expect a lot of differences. All of them should be explicable by the change in functionality. Any differences that cannot be explained that way are regressions.

Explaining the differences requires manual assessment by a human. It’s known as the “Guru Checks Output” anti-pattern. And it needs to be done every test run if you want to stay on top of things. Marking differences as expected can help. Especially when you can customize the comparison so it won’t report them as differences.
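To make the pattern concrete, here’s a minimal golden master test in Python. The export function is a stand-in for whatever output your application produces; on the first run the test records the master, and on later runs it compares against it:

```python
from pathlib import Path

def export_order_csv(order: dict) -> str:
    # Stand-in for the application's real export routine.
    lines = ["id,item,qty"]
    lines += [f"{order['id']},{i['item']},{i['qty']}" for i in order["items"]]
    return "\n".join(lines) + "\n"

def test_order_export_matches_golden_master():
    order = {"id": 42, "items": [{"item": "widget", "qty": 3}]}  # from the gold copy
    golden = Path("golden") / "order_42.csv"
    actual = export_order_csv(order)
    if not golden.exists():              # steps 1-2: record the master once
        golden.parent.mkdir(exist_ok=True)
        golden.write_text(actual)
    assert actual == golden.read_text()  # step 5: compare with the master
```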

Go Get Yourself Some Gold

Now that you know what a gold copy is and how you can use it to your advantage, it’s time for action. It’s time to start building toward the goal of a comprehensive set of test data and use it as a gold copy.

Your first step is simple: save the data from the test environment you set up for the issue or feature you’re working on now. That is going to be your gold copy. If your application uses any kind of SQL database, you could use that to generate a DML-SQL script that you can add to a repository.
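If that database happens to be SQLite, Python’s standard library can produce such a script directly; for other engines, tools like pg_dump or mysqldump play the same role. A minimal sketch:

```python
import sqlite3

# Dump the test database as a SQL script suitable for committing to a repo.
conn = sqlite3.connect("test_environment.db")
with open("gold_copy.sql", "w") as f:
    for statement in conn.iterdump():  # yields CREATE and INSERT statements
        f.write(f"{statement}\n")
conn.close()
```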

Use your gold copy to set up the test environment for your next issue. Make sure you don’t (inadvertently) change your gold copy while you’re working on that issue. When you’re finished, and if you needed to add test data for the test cases of this issue, update your gold copy.

Rinse and repeat, and soon enough you’ll be well on your way to a truly useful comprehensive set of test data.

Author: Marjan Venema

This post was written by Marjan Venema. Marjan’s specialty is writing engaging copy that takes the terror out of tech: making complicated and complex topics easy to understand and consume. You’ll find samples on her portfolio. Her content is optimized for search engines, attracting more organic traffic for small businesses and independent professionals in IT and other tech industries, whom she also helps with content audits and strategy.

How Many Test Environments Do I Need? 

Having a set of test environments properly configured and managed is essential for modern software organizations. Creating and configuring such environments is part of a solid test environment management strategy. Unfortunately, as with many things in software development, this is easier said than done. There are many questions that need answering. For instance: how many test environments do I need?

 


The short, correct, but also totally frustrating answer is—you’ve guessed it—it depends. Like most things in our industry, there isn’t a one-size-fits-all solution.

 

This post features a longer, (hopefully) not frustrating version of the answer above. Answering “it depends” without explaining which things it depends on makes for a useless answer, so we won’t do that. Instead, we’ll cover the factors you have to take into account when making the decision on how many environments your organization needs. The most obvious one is probably organization size, but, as you’ll see, it’s not the only one.

Let’s begin.

What Are Test Environments?

Before we get into those factors, we have some explaining to do. Or, rather, some defining. In this section, we’ll define test environments: you’ll learn what they are and why you need them.

Of course, if you’re already experienced in managing test environments—or have enough familiarity with the term—feel free to skip to the next section with a clear conscience.

A testing environment is a setup of software, hardware, and data that allows your testing professionals to execute test cases. For the test environment to be effective, you have to configure it so that it closely resembles the production environment.

There are many types of test environments, and which ones your organization will need depends on several factors, such as the test cases themselves, the type of software under test, and many more. Since that’s the main topic of this post, we’ll get there in a minute.

But first, let’s quickly cover some of the main types of test environments available.

How Many Test Environments Do I Need? The Bare Minimum

We’re about to cover the main factors for deciding which and how many environments your organization should adopt. Before we get there, though, let’s talk about the bare minimum number of environments you need.

Development

The first obvious and indispensable one is the development environment. For some of you, it might sound weird to think of the dev environment as a testing environment, but it is. Developers should constantly test the code they write, not only manually (via building the application and performing quick manual tests) but also automatically, through unit tests.

You might consider the development environment an exception in the sense that, unlike most other environments, it doesn’t need to mimic production too closely. For instance, I have seen people argue that developers who create desktop apps shouldn’t use the best machines available; instead, they should adopt computers close in configuration to those their clients use, so they can feel how the software is going to run. That’s nonsense. Developers should use the best and fastest machines their companies can afford, so their work is done most effectively. If performance is an issue, there should be a performance testing phase (and environment) to handle that. The same goes for other characteristics of the production environment that don’t make sense for developers.

CI (Integration)

What I’m calling here the “CI environment” could also be simply called the test environment, or even integration test environment.

This is the first step in the CI pipeline after developers commit their code and push it to the server. The CI server builds the application, running whatever additional steps are appropriate, such as documentation generation, version number bumping, and so on. Just building the code is already a type of test: it might detect dependency issues, eliminating the “but it works on my machine!” problem.

If the application is successfully built, unit/integration tests are executed. This step is vital since it might be slow for developers to run all of the existing tests often in their environments. Instead, they might run only a subset of tests on their environments, and the CI server will take care of running the whole suite after each check-in/push.
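One common way to split that work is with test markers: developers skip the slow tests locally, while the CI server runs everything. Here’s a sketch using pytest, assuming a slow marker registered in your pytest configuration:

```python
import time

import pytest

def test_parse_amount():
    # Fast unit test: cheap enough to run on every local change.
    assert int("42") == 42

@pytest.mark.slow
def test_full_checkout_flow():
    # Stands in for a slow integration scenario.
    time.sleep(2)
    assert True

# Developers locally:  pytest -m "not slow"
# CI server:           pytest  (runs the entire suite)
```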

QA

Then we have what we’ll call the QA environment. Here is where end-to-end tests are run: manually, automatically, or both. End-to-end tests, also called functional tests, exercise the whole application, from the UI to the database and back again. This type of testing checks whether the integrations between different modules of the software work, as well as the integrations between the software and external concerns such as the database, the network, and the filesystem. As such, it’s an essential type of testing for most kinds of software.

Production

Finally, we have the production environment. For many years, “testing in production” was seen as the worst sin of testing. Not anymore. Testing in production is not only forgivable but desirable. Practices like canary releases are vital for companies that deploy several times a day, since they allow shorter release cycles while keeping the quality of the application high. A/B testing can also be seen as a form of testing in production, and it’s essential for organizations that need to learn about their users’ experience with their software. Finally, some forms of testing, like load testing, would be useless if performed on any environment other than production.

Which and How Many Environments Do You Need? Here Are the Criteria You Should Use to Decide

Having covered the bare minimum environments most organizations need, it’s time to move on. Now we’ll cover the main factors you need to weigh when deciding your testing approach. Let’s go.

Organization Size

The size of the organization matters when deciding which environments it needs. One way it matters is with regard to personnel. Since larger companies have more people, they can afford entire teams or even departments dedicated to designing, performing, and maintaining certain types of testing, which includes taking care of the required environments.

Companies of different sizes also have different testing needs due to the software they create. It’s likely that larger companies produce more complex software, which would demand a larger pipeline. The inverse is also likely true for smaller companies.

Finally, organization size often correlates with the stage in which the company finds itself. That’s what we’re covering next.

Organization’s Life Phase

Do you remember when Facebook’s motto was “Move fast and break things”? It’s been a few years since they changed it to “Move fast, with stable infra.” While the new motto is definitely not as catchy as the previous one—some might say it’s even boring—it makes sense, given where the company stands now.

Startups have different testing needs than most established companies. Their priorities aren’t the same since they’re at very different points in their lifecycles.

For startups, beating their competitors to market might be more valuable than releasing flawless products. Established companies, on the other hand, will probably place long-term stability much higher on the scale. They have their reputation at stake, and if they’re public, they have to generate results for shareholders.

Therefore, more established companies will usually employ a testing strategy that adopts more environments; it’s probably more expensive and definitely slower, but it might give them the reassurance they need. On the other hand, startups that value time to market might choose a more streamlined pipeline with fewer environments. Such an approach might be cheaper and easier to build and manage, but it gives fewer guarantees than the heavyweight approach of the enterprise.

Software Type

The type of software developed is a huge factor when it comes to testing. A database-based web application with a rich user interface will require UI and end-to-end testing, for instance, while a library will not.

Similarly, user-acceptance testing makes sense for applications targeted at final users. For libraries and frameworks, unit and integration tests might suffice. You might have even more specific needs, such as integration with custom hardware, which can require more environments.

The type of software will dictate the required types of testing, which, in turn, will help you decide on the environments.

Domain or Industry

Some industries are highly regulated, while others are lightly regulated or not regulated at all. That also has a huge impact on an organization’s testing approach. Domains like financial services and healthcare come to mind.

Your company might need to adhere to rules, regulations, or norms that govern the industry it operates in. That might require an additional environment in which to test that the product complies with those rules.

Time for the Verdict

So, based on all that we’ve just seen, how does one choose which test environments their organization needs? As promised, here’s a quick recipe, or a step-by-step guide.

  1. Start with the basics. That means starting with the bare minimum environments we’ve mentioned and building on them as your requirements change.
  2. Consider the organization’s size and stage in life. Take into account the values and priorities of the organization (time to market vs. stability, disruption vs. market share, etc.), available personnel, and budget.
  3. Take into account the type of software you make and the industry you belong to.

With that in mind, make your decision. If your organization makes a picture editing app for Android and iOS, you might want to have (besides the obvious dev and prod):

  • The CI environment to perform unit and integration tests.
  • A QA environment to help you with end-to-end/integration tests, using both emulation and real devices.
  • An acceptance testing environment, where stakeholders give the final sign-off for the app’s release.

But if you’re creating a banking application, you could add an additional security and compliance environment. (Keep in mind that this is just an example. I’m not well-acquainted with the financial domain.)

Final Considerations

Test environment management is vital for the modern software delivery process. One of the decisions a test environment manager needs to make is how many environments to use. As you’ve seen, there is no one-size-fits-all answer, but that’s no reason to despair. There are objective criteria you should use to help you with your decision.

The journey isn’t easy, but this blog has many articles that can help you master test environment management and take your organization’s testing approach to new levels.

Author

This post was written by Carlos Schults. Carlos is a .NET software developer with experience in both desktop and web development, and he’s now trying his hand at mobile. He has a passion for writing clean and concise code, and he’s interested in practices that help you improve app health, such as code review, automated testing, and continuous build.


Types of Testing Environments

Today, we’re talking about types of testing environments. But first, let’s establish some basic definitions.

Software testing is a process that verifies that the software works as expected in test environments. The verification is done through a set of automated or manual steps called test cases.

A test environment is a combination of hardware, software, data, and configuration that’s required to execute test cases. You have to be sure to configure the testing environments to mimic production scenarios.

Types of Test Environments

There are many types of test environments. Which ones you’ll need depends on the test cases and the application under test. A thick-client desktop application serves a different need than a web application does. As a result, the test environments required for a desktop application are different than those for a web application.

This post is a complete guide on types of testing environments and how often they’re used. The post also explains how testing environments fit into the pace of modern software development practices.

1. Integration Testing Environment

The first on our list of testing environment types is the integration testing environment. 

In this type of environment, you integrate the individual software modules and then verify the behavior of the integrated system. A set of integration tests is used to check that the system behaves as specified in the requirements document. In an integration testing environment, you can integrate one or more modules of your application and verify their functional correctness.

The environment setup depends on the type of application and the components being tested. Setting up this environment usually involves ensuring the availability of the right hardware, the right software version, and the right configuration. Integration testing environments should mimic production scenarios as closely as possible. This includes the configuration and management of application servers, web servers, databases, and all the infrastructure needs of the application.

With the modern DevOps approach to software development, where continuous testing is a norm, an integration testing environment will probably be used daily or multiple times a day. Therefore, the ability to recreate the environment at will is paramount to an effective software delivery process.
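One way to get that repeatability is to spin up throwaway infrastructure from code. The sketch below assumes Docker plus the third-party testcontainers and SQLAlchemy packages; the table and queries are purely illustrative:

```python
import sqlalchemy
from testcontainers.postgres import PostgresContainer  # assumed third-party package

def test_orders_roundtrip():
    # A fresh, disposable PostgreSQL instance for this test run only.
    with PostgresContainer("postgres:16") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text("CREATE TABLE orders (id int)"))
            conn.execute(sqlalchemy.text("INSERT INTO orders VALUES (1)"))
            count = conn.execute(
                sqlalchemy.text("SELECT count(*) FROM orders")
            ).scalar()
        assert count == 1
```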

2. Performance Testing Environment

Next on our list is a performance testing environment. You use this environment to determine how well a system performs against performance goals. The performance goals in question can be concurrency, throughput, response time, and stability.

Performance testing is a very broad term and usually includes volume, load, stress, and breakpoint testing. A good performance testing environment plays a crucial role in benchmarking and identifying bottlenecks in the system.

The setup of a performance testing environment can be fairly complex. It requires the careful selection and configuration of the infrastructure. You’ll run your performance tests on multiple environments with configurations that vary by

  • Number of CPU cores,
  • Size of RAM,
  • Number of concurrent users, and
  • Volume of data.

You’ll then document and publish the results as system benchmarks and compare them with the performance goals of the software.
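Even a rough sketch shows the shape of such a test: hit an endpoint with a fixed number of concurrent users and report latency percentiles. The URL and numbers below are placeholders, and a real benchmark would use a dedicated load testing tool:

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://perf-test.example.com/health"  # hypothetical endpoint
CONCURRENT_USERS = 20
TOTAL_REQUESTS = 200

def timed_request(_):
    start = time.perf_counter()
    urllib.request.urlopen(URL, timeout=10).read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = sorted(pool.map(timed_request, range(TOTAL_REQUESTS)))

print(f"median: {statistics.median(latencies) * 1000:.0f} ms")
print(f"p95:    {latencies[int(len(latencies) * 0.95)] * 1000:.0f} ms")
```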

After that, in a performance testing environment, the software teams take a closer look at the system behavior and related events such as scaling and alerting. From there, they’ll carefully tune them if needed.

Performance tests are usually time-consuming and expensive. Therefore, setting up performance testing environments and running these tests for every change can be counterproductive and is usually not recommended. That’s why software teams only run these performance tests on a per-requirement basis, which could be once a month, for every major release, or whenever there are significant changes in the application.

3. Security Testing Environment

Let’s now discuss security testing environments. When working with this type of environment, security teams try to ensure that the software doesn’t have security flaws and vulnerabilities in the areas of confidentiality, integrity, authentication, authorization, and non-repudiation.

Organizations usually engage a combination of internal and external (from a different organization) security experts who specialize in identifying security vulnerabilities in software. During this process, it’s crucial to establish a thorough scope that defines exactly which systems will be targeted, which methods will be used, and when the assessment will take place.

As part of a good security testing environment setup procedure, you’ll want to establish some ground rules, such as

  • Have an isolated test environment.
  • Have non-disclosure agreements in place.
  • Don’t leave the system in a worse state.
  • Don’t touch production data.

This is especially applicable when engaging external security companies.

Different parts of security tests can happen at different frequencies and different stages of the software delivery process. A successful software team usually executes vulnerability assessments, scans, audits, and any other non-invasive tests more frequently when compared to invasive tests like penetration tests. Automating security tests that are non-invasive and running them as often as possible, perhaps alongside integration tests, helps maintain a security baseline.
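A simple example of such a non-invasive check is asserting that every response carries your baseline security headers. This sketch assumes a hypothetical staging URL and a header policy you’d define yourself:

```python
import urllib.request

BASELINE_HEADERS = {  # assumed policy; adjust to your own standards
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
}

def test_security_headers_present():
    response = urllib.request.urlopen("https://staging.example.com/")  # hypothetical
    present = set(response.headers.keys())
    missing = BASELINE_HEADERS - present
    assert not missing, f"missing security headers: {missing}"
```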

On the other hand, executing advanced invasive tests requires a good understanding of the software and its potential attack surfaces. Carrying out sophisticated attacks on the software through penetration testing requires the expertise of security specialists. This isn’t something you can easily automate, and it requires a lot of effort. Therefore, you’ll run these tests less frequently.

4. Chaos Testing Environment

According to the book Chaos Engineering, “Chaos engineering is the discipline of experimenting on a system to build confidence in the system’s capability to withstand turbulent conditions in production.”

Understanding how the failures of individual parts of the system can potentially cascade and ruin the whole system is the ultimate goal of chaos testing. By using fault injection techniques, software teams build an in-depth understanding of critical dependencies of their system and how software fails.

With that definition in mind, let’s talk about the final environment on our list: the chaos testing environment.

If you have a modern web application with a microservice architecture, where different independent services make up the application, then setting up a reliable chaos testing environment is crucial. These environments must be set up in the same way as your production environments are, and they must be configured for scale and high availability.

Having an environment to test the high-availability, disaster recovery, and business continuity provisions configured in each service is crucial to improving the reliability of your whole system. It’s equally important to test how dependent services behave in these failure modes. Disaster recovery drills or game days are excellent opportunities to run these tests and identify potential weak links in modern, large-scale applications. Software teams usually run chaos experiments less frequently, mostly alongside performance tests.

Other Considerations

Finally, I’d like to close out with some other considerations you should take into account:

  • While there are other types of tests, such as usability testing, accessibility testing, and testing for internationalization and localization, these tests don’t need a separate testing environment. They can reuse the integration testing environment or any of the other setups.
  • The number of test environments you have to manage also depends on the number of platforms that the software needs to support and be compatible with. Factors such as supported operating systems, processor architectures, and different screen sizes all come into play.
  • There is, of course, no place like production, which in itself is the ultimate test environment for any application. Product teams engage in the responsible collection of user data in production. This helps product teams to collect telemetry data about how users engage with their applications. Consequently, they use practices like A/B testing and feature toggles to improve their chances of success.
  • The data used in different environments also needs to be realistic. Having tools to back up and simultaneously anonymize and hide personally identifiable data can be very useful in testing scenarios.

Managing Test Environments

Test environment management is a crucial aspect of the software delivery process. Incorrect environment setup leads to inconsistent test results. This leads to friction and blame among the stakeholders, who ultimately lose confidence in the test results.

This post described the commonly used test environments and things to consider when setting up and managing them. The ability to spin up testing environments on demand is crucial to successfully managing your test environments. You can read more on this topic in our post called “Are you TEM Savvy,” which is an excellent piece full of useful tips on managing reliable and consistent test environments.

 

Author

This post was written by Gurucharan Subramani. Gurucharan is a software engineer who likes to get .NET, Azure, and Azure DevOps to not just meet but to also dance. Some days, Guru is a dev; other days, he's ops. And he's frequently many things in between. He's a community advocate who leads the Bangalore Azure User Group and is a member of the .NET Foundation.


Which Test Data Management Method Is Best?

Introduction

Setting up a great test data management strategy is a crucial step toward taking your test automation process to its fullest potential. However, many software professionals are still not familiar with the concept of test data management (TDM). Even those who are familiar with TDM might have a hard time putting it into practice. Why is that?

 


When it comes to test data management, the “what” is relatively straightforward, but we can’t say the same about the “how.” As it turns out, there are several competing methods of managing test data. Which one should you choose? As you’ll see in this post, this isn’t a one-approach-fits-all kind of situation. Each method has its unique strengths and weaknesses and might be more or less appropriate for your use case.

Today’s post will cover some of the existing test data management approaches, listing the advantages and disadvantages of each one. Let’s get started.

Replicating Data From Production

The first approach we’re going to cover in this post is perhaps the most popular one, at least for beginners. And that makes perfect sense if you think about it. When you first encounter the challenge of coming up with data to feed your testing processes, it isn’t too far-fetched to think you should just copy data from production and be done with it. It’s the easiest way to obtain data that is as realistic as possible. You just can’t get more real than production.

Not everything is a bed of roses when it comes to production data replication. Quite the opposite, actually. The easy access to data is pretty much the only advantage this method has. And what about the disadvantages? These, sadly, abound.

Here Be Dragons: Some Downsides of the Approach

Here’s the first problem: replicating data from production remains a mostly manual process. Sure, you can come up with scripts and automated jobs to do most of the heavy lifting for you. But keep in mind that generating the data isn’t the whole job of a TDM solution. “Availability” is an integral part of the package: the TDM tool is responsible for making sure the data is available where it’s needed, at the right time. A naive approach based on scripts might not be sufficient to manage the demands of a complex testing process, forcing you to fall back on manual work.

Secondly, production replication doesn’t lend itself well to negative test cases. A lengthy explanation of negative testing is out of scope for this post. In a nutshell, negative test cases validate the system against invalid data: you throw faulty data at your application to check how well it handles it. Since production data should (hopefully) be in good shape, this approach isn’t well suited to this type of testing.

Production data replication also doesn’t work… if there’s no production data for you to replicate in the first place! What should you do when you need to test an application that’s still in the alpha stage of development, or even a prototype? Since no one is actually using the application, there’s no production data for you to copy. That’s a severe downside of this approach, since every new application will face this problem.

Here Be Dragons (For Real): Legal Implications

Finally, we have the most serious downside of this approach—data sensitivity. Data compliance is a crucial part of the modern IT landscape, since companies are responsible for the data they store and manipulate. It’s up to them to protect their clients’ data and ensure it’s not abused. When replicating data from production, software organizations run the risk of failing to comply with privacy regulations such as GDPR. And that can bring catastrophic legal, financial, and reputational consequences.

Data Masking

In order to solve the downsides of production data replication (a.k.a. the naive approach), test data management tools have come up with more sophisticated methods. One of the most popular of these approaches is test data masking. As its name implies, tools that adopt this approach enable their users to apply masks to production data. Such masks remove personally identifiable information (PII) from the data.
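As an illustration of the idea, here’s a minimal masking sketch in Python. It uses a keyed hash so that the same input always yields the same pseudonym, which preserves joins and uniqueness across tables; a real TDM tool would offer far richer masking rules:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # in practice, kept out of source control

def mask(value: str) -> str:
    # Deterministic pseudonym: identical inputs map to identical tokens.
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

row = {"id": 7, "name": "Ada Lovelace", "email": "ada@example.com"}
masked = {
    **row,
    "name": mask(row["name"]),
    "email": f"user_{mask(row['email'])}@test.invalid",
}
print(masked)  # PII replaced, record structure intact
```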

Data masking is an improvement over naive production data replication, for sure. But the approach is not without its downsides.

First, consider the “time” variable. Data masking doesn’t reduce the time spent generating (or rather, copying) the data for testing. On the contrary, it increases it, because you now have a new step added to the process. You could argue—and I’d gladly agree—that it’s time well spent, but it’s more time nonetheless.

Then, you also have to keep in mind that data masking isn’t a standalone approach. Instead, it complements the previous approach by solving one of its most serious issues. The problem is that data masking can’t fix every shortcoming of the production replication approach. For instance, if you intend to test an application still in development, for which there is no production data at all, data masking is powerless to help you.

Synthetic Data Generation

Synthetic data generation is yet another method of test data management. As its name suggests, this approach consists of generating “fake”—or synthetic—data from a data model. Tools that implement this approach are able to preserve the format of the data. The values themselves, though, are completely disconnected from any original data. What does that imply?

The implication is that synthetic data generation’s greatest asset is simultaneously its most significant downside. By populating the database with entirely made-up values, the approach dramatically reduces (and virtually eliminates) the risk of exposing sensitive data. On the other hand, depending on the tool’s sophistication, or lack thereof, you might end up with data that feels fake. One of the goals of an excellent TDM strategy is to provide data that is as production-like as possible.
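In its simplest form, the “data model” can be a mapping from field names to format-preserving generators. Here’s a stdlib-only sketch; the fields and formats are invented for illustration:

```python
import random
import string

# Hypothetical data model: field name -> format-preserving generator.
MODEL = {
    "account_id": lambda: "".join(random.choices(string.digits, k=8)),
    "phone": lambda: "{}-{}-{}".format(
        random.randint(200, 999), random.randint(200, 999), random.randint(1000, 9999)
    ),
    "balance": lambda: round(random.uniform(0, 10_000), 2),
}

def synthesize(n):
    # Values are entirely made up, but every field keeps its production format.
    return [{field: gen() for field, gen in MODEL.items()} for _ in range(n)]

print(synthesize(3))
```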

To wrap up, let’s talk about the biggest advantage of synthetic data generation, namely speed. Once you have a model in place, you can quickly generate data from it, effectively eliminating the time delays that plague other approaches.

Test Data Management Is More Than Test Data Generation

In this post, we’ve covered some of the most used approaches to generate test data. The list is definitely not exhaustive; there are many more methods that we didn’t cover. However, many of them are variations or combinations of the approaches we did talk about.

Another thing to keep in mind is that test data management is much more than just generating test data. TDM is responsible for ensuring the quality of the test data, its availability, and also its security. In other words, the data must be good, and it must be available at the right place at the right time. And bad actors shouldn’t be allowed to expose or misuse it in any way. That’s why, depending on the needs of your organization, you should consider adopting a full-fledged data compliance solution, which can not only supply your data generation needs but also make sure your data adheres to the compliance requirements you must follow.

Author: Carlos Schults

This post was written by Carlos Schults. Carlos is a .NET software developer with experience in both desktop and web development, and he’s now trying his hand at mobile. He has a passion for writing clean and concise code, and he’s interested in practices that help you improve app health, such as code review, automated testing, and continuous build.


Test Environment Management: 10 Essential Best Practices

Introduction

A test environment is a setup where the testing team executes test cases. This environment comprises software, hardware, and network configuration. The setup of a test environment depends on the application under test. A complete setup helps testers carry out their tasks without any system-side hurdles, and it ultimately helps improve the quality of the final product.

 


In this post, we’ll get to know why managing your test environment is important. After that, we’ll discuss 10 best practices for test environment management. By following these best practices, your company’s testing team can efficiently manage test data in a way that allows it to be reused. The best practices will also enable your team to work in line with data privacy regulations and ensure client satisfaction. So, let’s get started.

Importance of Test Environment Management

As technology evolves, requirements keep changing. For instance, with Angular dominating the UI domain, the demand for single-page applications has increased a lot. Cost, time, and quality are the most important factors for every business to watch. Every firm aims for an appropriate budget and ample time before starting a project, but somehow these two are always in the shortest supply. Well, we don’t live in an ideal world, do we? Sometimes, due to time and budget constraints, the quality of the end product declines.

But budget and time shortages don’t mean you should compromise on the testing phase. Software testing is a tricky process involving several dependencies.

Testing is a crucial activity of the software development life cycle (SDLC) and can determine a product’s fate. Therefore, the test environment has to be reliable. Do you want to disappoint customers with a product that has many critical bugs because of improper testing? Whether you’re a start-up or an established company, never overlook the importance of testing. To get the highest accuracy in test results, your team needs proper test environment management.

If a team doesn’t give importance to test environment management, the result is poor handling of assets, including time and budget. When a company can’t handle these in the right way, quality suffers. Thus, to maintain a high quality of products and services, it’s essential to manage the test environment. Before getting to the best practices, take a look at these metrics, which will help you measure and improve your test environment.

10 Best Practices for Test Environment Management

Now that we know why managing a test environment is important, let’s get started with the 10 best practices for test environment management.

1. Begin Testing Exercise at an Early Stage in the SDLC

Even though most firms know the importance of testing early, very few successfully implement it. When teams don’t test early, bugs surface at a later stage, and fixing them requires more time, effort, and money. As a result, it disrupts the management of the test environment. Testing should start as soon as the development team has written even a few lines of code. The team should also follow the shift-left approach, which involves performing testing earlier in the product’s life cycle. This results in fewer bugs to fix at the end and hence saves time and cuts costs.

2. Demand Awareness and Management of Knowledge

When customers make a demand, a company must develop a product that satisfies it. When team members keep client needs in mind during development, the outcome is close to what the client expects. Thus, it’s important to use a test environment management strategy that matches customer needs. Testers writing test cases should develop a knowledge base according to those demands. The business analyst also needs to keep updated documents containing both the current and the changed requirements. That way, whenever the test environment is updated, other team members stay in line with what’s going on.

3. Conduct Iterative Tests

Most companies are adopting agile as part of their framework. Agile follows a sprint-based approach. It also involves testing in iterations. That means the entire product is divided into small phases. Each phase has its development and testing cycle. The entire process reveals bugs early, which makes fixing them easier. Iterative tests increase the flexibility of the SDLC. The client can change the scope in case the need arises without it being a burden to the budget. Since the team handles bugs at every sprint, there doesn’t end up being an overload of them at the end of the project. Thus, managing risks becomes easier.

4. Plan and Coordinate

Planning is very important in managing the test environment. Testing and development teams often don’t have separate test assets, so test environment managers should plan schedules for both teams and ensure proper coordination to avoid conflicts. Shared usage of resources can give rise to conflicts. For instance, if your team has only a few iOS machines for developing and testing iOS apps, disputes may arise over which team uses them and when. Planning and coordination are a must to maintain transparency among teams and team members. Apart from that, proper communication with clients is important to keep them updated on their requirements. Check out this use case, which will help you effectively plan and use your resources.

5. Reuse the Test Resources and Test Cases

Reusing test resources helps a company save money, freeing it from the need to tap new resources every time a new project begins. Even though every application is unique, many have some generic areas, and that’s where the opportunity to reuse test cases grows. Reusing test cases reduces redundancy and eliminates the need to write a different script each time you test a familiar feature. For instance, all e-commerce stores have a shopping cart, so testers can reuse the script for testing the “add to cart” feature in another app; it doesn’t matter that it was used before, since the feature is the same.

6. Implement Standardization and Automation

It’s important for testers to analyze the validity of tests, but this requires a benchmark. Defining test environment standards makes it possible to set a benchmark for running the test cases. After setting these standards, it’s time to automate. Deployment, builds, and shakedowns are all good candidates for automation. Automation saves time, resources, and manual effort that can be put to better use elsewhere. Configuration management becomes a lot easier when the dependency on manual testers lessens. Automated TEM tools reduce the number of test environments in a test bed, which improves test environment provisioning time and lowers costs.

7. Use Testing Techniques According to Needs

I’m going to cite a situation that you must have come across many times. There are times when something seems impossible at first. But if you break it down into chunks, it doesn’t seem overwhelming. Taking it one step at a time makes things simple. In most cases, with this approach, you succeed. Similarly, for test environment management, first, analyze the test structure. Then break down massive loads of tasks into manageable pieces. After that, understand the steps and the needs for performing each. Figure out the test endeavors and take the necessary steps. According to the need, pick out the testing techniques and implement them. For example, you can use containers to improve your system’s security and agility.

8. Mask and Encrypt Test Data

With advancements in technology, cyberthreats have increased. Endpoint devices are usually the starting point of the majority of data breaches. Not only are they a threat to users, but they also pose great hazards to companies. So, companies should mask and encrypt user data. Moreover, every company should avoid using real customer data during testing. Firms should ensure compliance with data privacy regulations such as GDPR and with standards for handling PII. Some processes that help ensure data compliance are ETL automation, service virtualization, and data fabrication.

9. Implement Processes According to Stakeholder Requirements and the Company’s Culture

Stakeholders are the most important component in determining the success of a business. They’re the ones giving the requirements, and the entire team has to work according to their needs. But it’s important that those needs are in line with the company’s culture. Sometimes companies don’t have the means to ensure the fulfillment of customer requirements, which results in an unsatisfied client and can be fatal for a company. The testing team should have pre-configured assets before they start testing. A client doesn’t forgive an unresolved bug in the later stages. For instance, if an e-commerce app in production charges a customer twice for a transaction, it can create chaos, and the reputation of the company can suffer. You can take a look at this blog to analyze and refine your company’s current capabilities.

10. Convey the Right Status of the Task

Clear and accurate communication is a must to ensure a smooth flow of work. If information is conveyed incorrectly, it can cost a firm its reputation. The objective of a project should be clear to everyone from the beginning. Team members should share task status with the right group of people, and the timing of that communication matters for a fruitful outcome.

Suppose you need a specific set of data for executing a test case. Whenever you’re stuck with that test case, share the blocker with the people concerned. Don’t just inform your QA lead; inform the scrum master or your QA manager as well. They’ll take care of the issue so that you can carry out your task smoothly. If you hesitate about whom to ask, testing will be delayed. Before the project starts, the entire team should have clarity about whom to contact in case of emergencies or for sharing daily task statuses.

What Drives Appropriate Test Environment Management?

The processes for end-to-end testing should be transparent for managing your test environment. The key factors driving smooth management include the following:

  1. Resource management: Use resources properly and assign the right task to the right person.
  2. Efficient planning: Plan each sprint’s test cycle so that it results in a bug-free end product.
  3. Process optimization: Tune the entire test process so that resources deliver their best output.
  4. Test automation: Automate every repetitive task that would otherwise waste manual effort.

Software testing is tricky. To achieve high accuracy, it’s important to set up a test environment close to a real-life scenario. Setting up such an environment takes proper planning and management. Scenarios change and test environments evolve. Thus, a test environment management strategy is vital for firms. A combination of the above practices increases productivity. At the same time, test environment management practices also reduce costs and accelerate releases.

Author: Arnab Roy Chowdhury

This post was written by Arnab Roy Chowdhury. Arnab is a UI developer by profession and a blogging enthusiast. He has strong expertise in the latest UI/UX trends, project methodologies, testing, and scripting.


7 Metrics for Configuration Management

Years ago, a company might have released a software suite and then proverbially kicked back in its chair, feet on the desk, basking in celebration.

Suffice it to say that the software world moves much faster today. It seems as though there are some companies that push out new updates every few days. And thanks to microservices architecture and the DevOps mindset, there are many companies that are constantly updating their software or at least some feature in it.

Pumping out release after release isn’t easy. With so many moving parts and so much riding on each new update, companies need to do everything within their power to ensure that releases are well-received by users.

That starts with getting their development house in order through a process known as configuration management.


What Is Configuration Management?

Configuration management is the process by which organizations and development teams oversee new software updates to ensure they work as designed as bugs are fixed, new features are introduced, and old features are decommissioned.

Thanks to configuration management, organizations can gain full visibility into the development lifecycle and easily identify errors that may need to be fixed.

If you’re thinking about implementing configuration management at your organization, that’s great news. But like anything else, you can’t just expect configuration management to solve all of your problems on its own. You need the right approach.

With that in mind, let’s take a look at seven different configuration management metrics you can track to increase the chances that your initiatives help you achieve results. Keep track of these metrics and work hard to improve them over time, and you’ll build better applications that are better received by your users.

1. Frequency of Updates

Some companies are perfectly fine with shipping updates once a quarter or even once a year. Other companies pride themselves on pumping out new updates every month, and some might aim to release even more new software packages than that.

Every software company has unique goals. It might not matter exactly how often your software is updated, but it does matter how consistent those updates are. Your users will expect at least some rhyme and reason to the cadence of updates you pump out.

Keeping track of the frequency of updates metric will help you make sure you are meeting your company’s goals and satisfying customer expectations. If you’re not shipping releases as frequently as you’d like, you might want to drill deeper and find out why.
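
One simple way to track this metric is to compute the gaps between release dates. The sketch below uses hypothetical dates; in practice you might pull them from your release log or git tags:

```python
from datetime import date

# Hypothetical release dates, oldest first.
releases = [date(2023, 1, 10), date(2023, 2, 14), date(2023, 3, 12), date(2023, 4, 18)]

# Days between each pair of consecutive releases.
gaps = [(later - earlier).days for earlier, later in zip(releases, releases[1:])]

print(f"average days between releases: {sum(gaps) / len(gaps):.1f}")
print(f"longest gap: {max(gaps)} days")  # big spikes suggest inconsistency
```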

2. Release Downtime Metrics

We all know how applications should work. When they don’t work as designed, we’re unable to get things done quickly. Depending on how bad the problem gets, users can get frustrated to the point where they start thinking about finding a substitute solution.

End users depend on your software. For a business user, that might mean a platform they use to store information and communicate with colleagues. For a developer, it might mean a place they store code. And for a regular customer, it might be a social network they use every day to meet new people.

Whatever the case may be, the moment you are unable to meet user expectations might be the moment your users begin an exodus.

Worse than that, downtime can be prohibitively expensive. In fact, a recent Gartner report found that downtime can cost as much as $540,000 per hour.

Keeping track of how much downtime you incur (if any) while a new update is released can help you maintain positive and productive user experiences. In the event there is downtime during a new release, you can quickly identify what happened and take steps to reduce the chances it happens again.

Add it all up, and keeping tabs on this metric can help you provide better experiences while increasing profitability.
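
Tracking this can be as simple as recording the downtime window for each release and pricing it out. The release names and minutes below are illustrative, priced with the Gartner hourly estimate cited above:

```python
# Hypothetical downtime windows (in minutes) observed during each release.
downtime_minutes = {"v1.4.0": 0, "v1.5.0": 22, "v1.6.0": 3}

COST_PER_HOUR = 540_000  # illustrative figure from the Gartner estimate above

for release, minutes in downtime_minutes.items():
    cost = minutes / 60 * COST_PER_HOUR
    print(f"{release}: {minutes} min downtime, roughly ${cost:,.0f}")
```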

3. Average Number of Errors

In a perfect world, your developers would write flawless code every day, and each new release would ship with perfect code. But we live in the real world where people do make mistakes.

Of course, it’s in your best interest to work as hard as you can to keep those mistakes down to a minimum. By keeping track of the average number of errors in each new software release, you can identify areas in your workflow that could be improved. This may help you catch mistakes earlier in the process.

For example, you might realize that adding a new tool to your DevOps team’s arsenal can help you release smoother updates every time.

At the very least, tracking this metric provides an easy mechanism to determine whether your team is trending in the right direction, i.e., making fewer errors as time goes on.
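
A quick way to check that trend is to compare the average error count of recent releases against older ones. The counts below are made up for illustration:

```python
# Hypothetical error counts per release, oldest first.
errors_per_release = [14, 11, 12, 9, 7, 8, 5]

def avg(xs):
    return sum(xs) / len(xs)

half = len(errors_per_release) // 2
older = errors_per_release[:half]    # earliest releases
recent = errors_per_release[-half:]  # latest releases

print(f"older releases averaged {avg(older):.1f} errors")
print(f"recent releases averaged {avg(recent):.1f} errors")
print("trending the right way" if avg(recent) < avg(older) else "worth investigating")
```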

4. Code Lines Per Update

The point of writing is to convey a point to your readers. Unless the author is getting paid per word, writers should state their case in as few words as possible. The question is “What day is it today?” It’s not “Do you have any idea as to which 24-hour period we are currently in the middle of?”

In the world of software development, the same maxim holds true. You don’t need 100 lines of code when a single line will do the same trick.

Keeping track of code lines per update can help you ensure that you are writing software efficiently. Depending on what your team’s workflows are like, you may be able to identify individual developers who are writing too many lines of code and have the more efficient coders give them a few pointers.
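
If your releases are tagged in version control, a small script can total the lines changed between two tags. This sketch assumes a git repository with tags like v1.5.0; adjust to your own naming scheme:

```python
import subprocess

def lines_changed(old_tag: str, new_tag: str) -> int:
    """Count lines added plus deleted between two release tags."""
    out = subprocess.run(
        ["git", "diff", "--numstat", old_tag, new_tag],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t")
        if added != "-":  # binary files report "-" instead of a count
            total += int(added) + int(deleted)
    return total

print(lines_changed("v1.5.0", "v1.6.0"))
```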

5. Rework Metrics

How many files does your team rework each month?

Developers don’t come cheap. The last thing you want to do is pay them to do the same work over and over again—whether that’s because someone did it incorrectly in the first place or because your team is struggling to communicate effectively.

Tracking rework metrics can help you make sure that the percent of rework your team does each month doesn’t increase in perpetuity. On the flipside, you may also be able to identify what you are doing that is decreasing rework. With that information on hand, you may be able to bake additional efficiencies into your development processes.
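
A rough proxy for rework is the overlap between the files changed in consecutive periods. The file sets below are invented; in practice you’d extract them from your version control history:

```python
# Hypothetical sets of files changed in each month, e.g. pulled from
# `git log --since ... --until ... --name-only`.
changed_in_march = {"billing.py", "cart.py", "checkout.py", "search.py"}
changed_in_april = {"billing.py", "checkout.py", "profile.py"}

# Files touched in both months are candidates for rework.
reworked = changed_in_march & changed_in_april
rework_pct = len(reworked) / len(changed_in_april) * 100

print(f"{rework_pct:.0f}% of April's changed files were rework: {sorted(reworked)}")
```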

6. Frequently Changing Files

Track this metric to determine whether certain files are changing too frequently. If you find out that certain files are changing with each update, you may need to look into the issue a bit.

For example, you can determine why certain files are changing so often. Maybe it’s because developers aren’t sure of the requirements. Maybe it’s because there’s an issue with your testing and QA approach.

Whatever the case may be, this metric can help you add additional efficiencies into your development processes by reducing or eliminating duplicative work and rewriting inefficient code blocks as needed.
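
Version control history makes this metric cheap to compute. This sketch, which assumes it runs inside a git repository, counts how often each file changed in the last 90 days:

```python
import subprocess
from collections import Counter

# List every file touched by every commit in the last 90 days.
log = subprocess.run(
    ["git", "log", "--since=90.days", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

# Non-empty lines are file paths; count how often each one appears.
counts = Counter(path for path in log.splitlines() if path)
for path, n in counts.most_common(10):
    print(f"{n:4d}  {path}")
```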

7. Root Causes for Late Delivery

As you optimize your release management workflows, everything should get more and more predictable.

Yet nobody can predict the future and nobody’s perfect. So things will invariably not go according to plan every now and again.

Configuration management lets you drill down into the root causes for late delivery.

Fingers crossed that you never run into any errors that slow down your releases. But in the event you do miss some deadlines, you may be able to start detecting a pattern as to why you are unable to meet them.

Armed with that information, you can begin working backward to identify what is causing delays and what you need to do to prevent that from happening in the future.

Are You Ready to Start Using Configuration Management?

Is your development team reaching its full potential and doing its best work? If not, it may be time to get started with configuration management. That way, you’ll be able to delight customers by meeting their expectations while avoiding downtime and increasing profitability.

And the best part? With the right tools in place, configuration management can largely be automated.

To learn more about how your DevOps team can integrate configuration management into their workflows to build better software more efficiently, take a look at Enov8.

Author Justin Reynolds

This post was written by Justin Reynolds. Justin is a freelance writer who enjoys telling stories about how technology, science, and creativity can help workers be more productive. In his spare time, he likes seeing or playing live music, hiking, and traveling.


ITIL 4.0: What Has Changed?

It’s hard to imagine a world that existed without technology. Yet it wasn’t so long ago when things like computers and the internet were brand-new and seemingly futuristic concepts. As computing infrastructure became increasingly widespread in the 1980s, the government of the United Kingdom issued a set of recommended standards that IT teams should follow because it realized that, at the time, everyone was just doing their own thing.

Shortly thereafter, the first iteration of the Information Technology Infrastructure Library (ITIL) emerged, initially called the Government Information Technology Infrastructure Management (GITIM). These guidelines outlined a set of practices, processes, and policies organizations could follow to ensure their IT infrastructure was set up in such a way as to support their business needs. The ITIL standards were inspired by the process-based management teachings of productivity and management guru W. Edwards Deming.


Over the years, we’ve seen many iterations of ITIL. The most recent version of the standards, ITIL 4, was released in February 2019. In large part, this iteration was influenced by the agile approach to software development and the rise of DevOps teams, both of which have transformed the way we think about technology.

Keep reading this post to learn more about:

  • What ITIL is
  • The pros and cons of ITIL
  • How ITIL has changed over time
  • How, specifically, the rise of agile workflows and DevOps teams impacted ITIL 4

What Is ITIL?

Life would be difficult if it were impossible to learn from other people and we had to figure everything out by ourselves. Good thing that’s not the case.

At a very basic level, ITIL is a framework that outlines best practices for delivering IT services throughout the entire lifecycle. Organizations that follow this framework put themselves in a great position to stay on the cutting edge of technology and leverage the latest tools and philosophies that drive leading innovators forward today. They’re also able to respond to incidents faster and enact change management initiatives with more success.

At a high level, there are five core components of ITIL 4:

  1. Service value chain.
  2. Practices.
  3. Guiding principles.
  4. Governance.
  5. Continual improvement.

Now that we’ve got our definitions locked down, let’s shift our attention to the pros and cons of enacting ITIL at your organization.

What Are the Pros of ITIL? 

ITIL is popular for good reason. The framework helps organizations big and small optimize their IT infrastructure. It also helps them secure their networks and realize productivity gains.

More specifically, ITIL enables organizations to:

  • Keep IT aligned with business needs, ensuring that the right infrastructure is in place for the task at hand. For example, a team that has a mobile workforce should leverage cloud platforms that enable employees to work productively from any connected device.
  • Delight customers and strengthen user experiences by improving the delivery of IT services and maintaining a network and infrastructure that works as designed and meets modern expectations.
  • Reduce IT costs and eliminate unnecessary expenditures by ensuring that IT infrastructure is optimized and efficient. For example, if you’re storing petabytes of duplicative data for no reason, best practices would tell you that you need to do a lot of culling to save on storage costs.
  • Gain more visibility into IT expenses and infrastructure to better understand your network and detect inefficiencies that can be improved. For example, if your software development team has recently started using containers to build applications, you might not need to run as many virtual machines anymore, which drain more computing resources.
  • Increase uptime and availability due to increased resiliency and robust disaster recovery and business continuity plans. This is a big deal because downtime can be prohibitively expensive, depending on the scale of your organization. Just ask Amazon.
  • Future-proof tech infrastructure to support agile workflows and adaptability in an era where customer needs shift overnight and competitors are always just a few taps of a smartphone away.

What Are the Cons of ITIL? 

But like everything else, ITIL by itself is not a panacea. You can’t just hire some consultant who will preach the virtues of ITIL and expect to transform your IT operations overnight. 

While the benefits of the framework speak for themselves, you need to be realistic about shifting to a new approach to IT management. However, with the right approach—which includes training, patience, and reasonable expectations—your organization stands to benefit significantly by adopting ITIL.

How Has ITIL Changed Over the Years?

ITIL initially emerged because more and more organizations were using new technologies but nobody really knew how to manage them effectively. Companies were largely using technology because they could—not because they were making strategic investments to support their customers and business needs. The initial iteration of ITIL found that most companies had the same requirements and needs for their IT networks, regardless of size or industry.

At the turn of the millennium, the second iteration of ITIL came online. In large part, this version consolidated and simplified the teachings and documentation from the inaugural ITIL framework.

In May 2007, ITIL 3 came to the surface. This third iteration included a set of five reference books called Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. ITIL 3 picked up where ITIL 2 left off, further consolidating the framework to make it easier for organizations to implement.

Four years later, ITIL 3 was revised once more, primarily to maintain consistency as technology evolved.

Introducing ITIL 4

Fast forward to 2019, and the most recent version, ITIL 4, is where we’re at today. Quite simply, ITIL 4 was issued to align the standards with the agile and DevOps workflows that have grown to dominate technology teams over the last several years. ITIL 4 includes two core components: the four dimensions model and the service value system. 

At a high level, ITIL 4 represents more of a change in approach and philosophy than a change in content. Just as software teams adopt agile and DevOps workflows, IT must adopt a similar mindset if they wish to keep pace and support accelerated innovation. At the end of the day, IT is a cornerstone of the success of the modern organization. It’s imperative that IT support the new way of working if an organization wishes to reach its full potential.

How Have Agile and DevOps Impacted ITIL 4?

In the past, software teams would build monolithic applications and release maybe once a year. Today’s leading software development teams have embraced agile development and DevOps workflows. Slowly but surely, monthly releases are becoming the norm. Development is becoming more collaborative, too, with both colleagues and users steering the product roadmap.

ITIL 4 recognizes and supports this new way of working with new core messages:

  • Focus on value.
  • Start where you are.
  • Progress iteratively with feedback.
  • Collaborate and promote visibility.
  • Think and work holistically.
  • Keep it simple and practical.
  • Optimize and automate.

Where Does Your Organization Stand?

If your company hasn’t yet implemented ITIL, what are you waiting for?

Whether you’re a startup or your organization has been around forever, ITIL serves as a guiding framework. Follow it and it enables you to protect your networks, support your developers, and delight your customers. 

And what exactly is the alternative, anyway? Running your IT department like the Wild West?

With so much on the line, you can’t afford that risk. So become an ITIL-driven organization. That way, you’ll get the peace of mind that comes with knowing your networks and infrastructure are secure and support innovation and agility. 

What’s not to like?

Author Justin Reynolds

This post was written by Justin Reynolds. Justin is a freelance writer who enjoys telling stories about how technology, science, and creativity can help workers be more productive. In his spare time, he likes seeing or playing live music, hiking, and traveling.


Software Testing Anti Patterns

Since the dawn of computers, we’ve always had to test software. Over the course of several decades, the discipline of software testing has seen many best practices and patterns. Unfortunately, there are also several anti patterns that are present in many companies.

An anti pattern is a pattern of activities that tries to solve a certain problem but is actually counter-productive. It either doesn’t solve the problem, makes it worse, or creates new problems. In this article, I’ll sum up some common testing anti patterns.


Only Involving Testers Afterwards


Many companies only involve the testers when the developers decide a feature is done. The requirements go to the developers, who change the code to implement the requested feature. The updated application is then “thrown over the wall” to the testers. They will then use the requirements to construct test cases. After going through the test cases, the testers will often find all sorts of issues so that the developers need to revisit the new features. This has a detrimental effect on productivity and morale.

Such an approach to testing is used in many companies, even those that talk about modern practices like Agile and DevOps. However, “throwing things over the wall” without input from the next step goes against the spirit of Agile and DevOps. The idea is to have all disciplines work together towards a common goal.

Testing is about getting feedback, regardless of whether it is automated testing or not. So of course you have to test after the feature has been developed. But that doesn’t mean you can’t involve your QA team earlier in the process.

Having testers involved in defining requirements, identifying use cases, and writing tests is a way to catch edge cases early and leads to quality tests.

Not Automating When You Can


Tests that run by the click of a button are a huge time saver, and as such they also save money. Any sufficiently large application can have hundreds or even thousands of automated tests. You can’t achieve efficient software delivery if you’re testing all this manually. It would simply take too much time.

One alternative I’ve seen is to stop testing finished features. But due to the nature of software, existing features that used to work can easily break because of a change to another feature. That’s why it pays off to keep verifying that what used to work still works now.

The better alternative to manual testing is to automate as many tests as you can. There are many tools to help you automate your tests. From the low level of separate pieces of code (unit tests) over the integration of these pieces (integration tests) to full-blown end-to-end tests.
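
At the lowest level of that range, an automated unit test is just a small function that asserts on behavior. Here’s a minimal sketch using pytest; the discount function is a hypothetical stand-in for your own code:

```python
# test_pricing.py -- minimal unit tests, runnable with `pytest`.
import pytest

# Hypothetical unit of code under test.
def apply_discount(total: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(200.0, 120)
```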

As a tester, you should encourage the whole team to get involved in manual testing. Experiencing it firsthand will motivate them to write code that is fit for automated tests. Help developers write and maintain automated tests. Help them identify test cases.

Expecting to Automate Everything


As a counterargument to my previous point, be wary of trying to automate every aspect of testing. Manual testing can still have its place in a world where everything is increasingly automated.

Some things could be too hard or too much work to automate. Other scenarios may be so rare that it isn’t worth automating, especially if the consequences of an issue are acceptable.

Another thing you can’t expect to automate is exploratory testing. Exploratory testing is where testers use their experience and creativity to test the application. This allows the testers to learn about the application and generate new tests from this process. Indeed, in the words of software engineering professor Cem Kaner, the idea behind exploratory testing is that “test-related learning, test design, test execution, and test result interpretation [are] mutually supportive activities that run in parallel throughout the project.”

Lack of Test Environment Management


Test Environment Management spans a broad range of activities. The idea is to provide and maintain a stable environment that can be used for testing.

Typically, we call such an environment a testing or staging environment. It’s the environment where testers or product owners can test the application and any new features that the developers have delivered.

However, if such an environment isn’t managed well, it can lead to a very inefficient software delivery process. Examples are:

●  Confusion over which features have already been deployed to the test environment.

●  The test environment is missing certain critical pieces or external integrations so that not everything can be tested.

●  The hardware differs significantly from the production environment.

●  The test environment isn’t configured correctly.

●  Lack of quality data to test with.

Such factors can lead to a back and forth discussion between testers, management, and developers. Bugs may go unnoticed or reported bugs may not be bugs at all. Use cases may be hard to test and bugs reported in production hard to reproduce.

Without good test environment management, you will be wasting time and losing money.

Unsecured Test Data


Most applications need a set of data to test certain scenarios. Not all data is created equal, though. With modern privacy laws, you want to avoid using real user data. Both developers and testers often have to dig into the data of the test environment to see what’s causing certain behavior. That means reading what could be personally identifiable information (PII). If this is data from real users, you might be violating certain laws.

Moreover, if your software integrates with other systems, the data may flow away from your system to a point where it is out of your control. Maybe even to another company. This is not something you want to do with real people’s data. Security breaches can lead to severe public image and financial losses or fines.

So you want either made-up data or obfuscated and secured data. But you also want to make sure that the data is still relevant and valid in the context of your application. One possible solution is to generate the data your tests need as part of your tests.
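
As a sketch of that last approach, the test below fabricates a seeded, reproducible fake user instead of touching production data. The signup assertion is a hypothetical stand-in for calling your real system under test:

```python
import random
import string

def fabricated_user(seed: int) -> dict:
    """Generate a realistic-but-fake user record; no production data involved."""
    rng = random.Random(seed)  # seeded, so a failing test is reproducible
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "name": name.title(),
        "email": f"{name}@example.test",  # reserved domain, can never be a real address
        "age": rng.randint(18, 90),
    }

def test_signup_accepts_fabricated_user():
    user = fabricated_user(seed=42)
    # Stand-in assertion; a real test would call your signup endpoint here.
    assert "@" in user["email"] and 18 <= user["age"] <= 90
```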


Not Teaching Developers


The whole team owns the quality of the software. Pair with developers and teach them the techniques so that they can test the features as they finish them.

This is especially important in teams that (aspire to) have a high level of agility. If you want to continuously deploy small features, the team will have to continuously test the application. This includes developers, instead of having them wait for the testers.

In such a case, the role of testers becomes more of a coaching role.

If testers and developers don’t work together closely, both will have negative feelings for each other. Developers will see the testers as a factor blocking them from moving fast. Testers will have little faith in the capacity of the developers to deliver quality software.

In fact, both are right. If the two groups don’t collaborate, precious time and effort will be lost in testing a feature, fixing bugs, and testing the feature again. If the developers know what will be tested, they can anticipate the different test cases and write the code accordingly. They might even automate the test cases, which is a win for testers and developers.

Streamline Your Testing!


The major theme in this article is one of collaboration. Testers and developers (and other disciplines) should work together so that the software can be tested with the least amount of effort. This leads to a more efficient testing process, fewer bugs, and a faster delivery cycle. Top that off with good test environment management (which is also a collaborative effort) and secure data, and you have a winning testing process.

Author Peter Morlion

This post was written by Peter Morlion. Peter is a passionate programmer that helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.


Why Configuration Management Is at the Heart of ITIL

For many organizations, IT starts small and grows. They don’t plan out how their IT organization will interact with the rest of the business. Instead, they hire a person or two to handle a few computers and maybe set up a server. Over time, those roles grow alongside the business. Eventually, IT leadership recognizes that the business needs more out of the IT organization than what they’re providing. Sometimes it’s because customers aren’t able to get the hardware or software they need.

Whatever the cause, many organizations come to realize that their IT organization just isn’t cutting it.

In lots of instances, those organizations choose to use ITIL, the IT Infrastructure Library.


What’s ITIL?

In the 1980s, the British government established the IT Infrastructure Library. In the decades since, it has been updated repeatedly. It defines a series of best practices that aid IT organizations in delivering high-quality IT solutions to their business. ITIL is actually a very big set of guidelines; the original library was more than thirty books! Even though it’s changed many times throughout the years, ITIL still has a core focus on some key principles. Top among those principles is the idea that IT organizations should focus on providing value, work iteratively, and start from where they are.

This means that organizations shouldn’t have to drastically re-organize the way they do business to adopt ITIL best practices. Instead, they should look at how they’re providing value already. They should then identify ways they can provide more value to the business, and implement those changes over time, a little bit at a time. Short, achievable goals can mean that the business entities who rely on the IT organization see constant improvement, instead of waiting for big, difficult projects that may or may not deliver.

A common early step in this process is to implement a configuration management database.

What’s Configuration Management?

Configuration management is the process of storing information about the IT resources within your organization in a centralized repository. Usually, this takes the form of a relational database. As the name implies, you also store information about the configuration of the system inside that database.

Starting your configuration management project can feel a bit like starting in the deep end. Even in businesses with fewer than 100 employees, it’s likely you have a lot of IT resources. To do configuration management right, you need to find every one of those resources! That said, you should treat creating a configuration management database like any other project. Plan how you’ll undertake asset discovery. Evaluate options for the configuration management database software. Define a realistic picture of success. Then put that plan into action and execute it to completion.
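
At its simplest, a configuration management database really is just a relational store of assets and their configuration. Here’s a toy sketch using Python’s built-in sqlite3 module; the schema and sample asset are invented for illustration:

```python
import sqlite3

# A toy CMDB: one table of assets and their configuration in a relational store.
con = sqlite3.connect("cmdb.sqlite")
con.execute("""
    CREATE TABLE IF NOT EXISTS assets (
        id INTEGER PRIMARY KEY,
        hostname TEXT UNIQUE,
        asset_type TEXT,     -- laptop, server, switch, ...
        os_version TEXT,
        owner TEXT
    )
""")
con.execute(
    "INSERT OR IGNORE INTO assets (hostname, asset_type, os_version, owner) "
    "VALUES (?, ?, ?, ?)",
    ("db-prod-01", "server", "Ubuntu 22.04", "platform-team"),
)
con.commit()

for row in con.execute("SELECT hostname, asset_type, os_version FROM assets"):
    print(row)
```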

How Does Configuration Management Highlight Value of Your Assets?

As noted, configuration management takes all the information about your business’s IT assets and brings it to one place. This is a benefit. When information is spread out across multiple silos, it’s difficult to find what you need. If a critical system breaks down or needs to be replaced, it’s a lot easier to fix when you know how it’s supposed to be configured and how it works.

Configuration management projects bring additional benefits to IT organizations. It’s common for IT leadership to discover assets they didn’t know existed during the asset discovery phase. Usually, these assets were quietly doing their jobs, but they were unsupported by IT. IT organizations often find that these business-critical assets haven’t received updates in years, which is a serious security risk. Identifying those assets and establishing a proper support plan is a great side effect of configuration management projects.

How Does Configuration Management Optimize the Value of Your Assets?

Another way that configuration management provides value is by optimizing your IT assets. Once you know where all your IT assets are and how they’re performing, you can standardize on optimal configurations for all of your assets. Configuration management means you know which laptops perform the best for which employees. It’s easy to spot which servers have non-standard configurations when all your configurations are in a single place. Your IT organization can provide value by helping your users get the most out of their systems by standardizing on high-performance configurations.

Finally, IT organizations can minimize the amount of time they spend keeping systems up to date. With standard configurations on all systems, activities like applying patches become a one-step process. That means your business is more secure while your IT organization spends less time updating systems.
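
Once configurations live in one place, spotting non-standard systems can be a simple diff against the baseline. The baseline and asset records below are illustrative; in practice they would come from the CMDB:

```python
# The standard configuration every asset of this class should match.
baseline = {"os_version": "Ubuntu 22.04", "patch_level": "2024-05"}

# Hypothetical recorded configurations, as pulled from the CMDB.
assets = {
    "db-prod-01": {"os_version": "Ubuntu 22.04", "patch_level": "2024-05"},
    "web-prod-02": {"os_version": "Ubuntu 20.04", "patch_level": "2023-11"},
}

for hostname, config in assets.items():
    # Collect every field that differs from the standard configuration.
    drift = {k: v for k, v in config.items() if baseline.get(k) != v}
    if drift:
        print(f"{hostname} drifted: {drift}")
    else:
        print(f"{hostname} matches the standard configuration")
```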

How Can We Make Configuration Management Successful?

Even though the goal of configuration management is to store all of the information about your IT assets in one place, the project doesn’t need to be monolithic. You can approach it piecemeal. Instead of trying to gather information about every asset across the whole company, focus on one division at a time. Work with the employees there to identify IT assets and how those assets are configured. Once you’ve done this, train those employees to work with the configuration management system. This means that when they need to change configurations or add new systems, they’ll know how to work with your IT team.

This kind of iterative approach pays off in more ways than one. Not only will you break the project into manageable chunks, but you’ll also learn along the way. It’s guaranteed that you’re going to do some things wrong at first. Instead of doing all those things wrong across the whole organization, you can limit your mistakes to just a few employees. Those employees will be able to provide feedback to your team, and that feedback will make the project better. This ties into one of the core principles of ITIL: being iterative in your processes. You should learn from each step of your implementation to make the next one better.

Another way to be successful is to pick high-quality software. When you centralize your configurations, you want to use software that’s simple and straightforward. Choosing a quality implementation platform like Enov8 will save your team hundreds of hours and make the software easier to use for your business.

How Is Configuration Management the Heart of ITIL?

Good configuration management plays directly into the values that are at the heart of ITIL. It not only provides value to the business, but it makes life easier for IT employees too. You can approach configuration management as an iterative process, implementing it one step at a time. That might start with a basic database that tracks laptops and servers, and wind up with a system that tracks items all the way down to the component level. The heart of ITIL is that your team makes those choices.

ITIL isn’t a monolith. The goal isn’t to say that every organization should implement each part of the system. And at no point should you expect that everyone will implement each system in the same way. You should optimize the implementation for what your business needs. Your first step will always be sitting down with stakeholders in your business and determining what will work best for your team and theirs. That’s the heart of ITIL, and good configuration management is one step on the way to making your IT organization better for everyone.

Author Eric Boersma

This post was written by Eric Boersma. Eric is a software developer and development manager who's done everything from IT security in pharmaceuticals to writing intelligence software for the US government to building international development teams for non-profits. He loves to talk about the things he's learned along the way, and he enjoys listening to and learning from others as well.