
ITIL 4: What Has Changed?

It’s hard to imagine a world without technology. Yet it wasn’t so long ago that things like computers and the internet were brand-new, seemingly futuristic concepts. As computing infrastructure became increasingly widespread in the 1980s, the government of the United Kingdom issued a set of recommended standards for IT teams to follow because it realized that, at the time, everyone was simply doing their own thing.

Shortly thereafter, the first iteration of the Information Technology Infrastructure Library (ITIL) emerged, initially called the Government Information Technology Infrastructure Management (GITIM). These guidelines outlined a set of practices, processes, and policies organizations could follow to ensure their IT infrastructure was set up to support their business needs. The ITIL standards were inspired by the process-based management teachings of productivity and management guru W. Edwards Deming.

Over the years, we’ve seen many iterations of ITIL. The most recent version of the standards, ITIL 4, was released in February 2019. In large part, this iteration was influenced by the agile approach to software development and the rise of DevOps teams, both of which have transformed the way we think about technology.

Keep reading this post to learn more about:

  • What ITIL is
  • The pros and cons of ITIL
  • How ITIL has changed over time
  • How, specifically, the rise of agile workflows and DevOps teams impacted ITIL 4

What Is ITIL?

Life would be difficult if it were impossible to learn from other people and we had to figure everything out by ourselves. Good thing that’s not the case.

At a very basic level, ITIL is a framework that outlines best practices for delivering IT services throughout their entire lifecycle. Organizations that follow this framework put themselves in a great position to stay on the cutting edge of technology and leverage the latest tools and philosophies that drive leading innovators forward today. They are also able to respond to incidents faster and enact change management initiatives with more success.

At a high level, there are five core components of the ITIL 4 service value system:

  1. Service value chain.
  2. Practices.
  3. Guiding principles.
  4. Governance.
  5. Continual improvement.

Now that we’ve got our definitions locked down, let’s shift our attention to the pros and cons of enacting ITIL at your organization.

What Are the Pros of ITIL? 

ITIL is popular for good reason. The framework helps organizations big and small optimize their IT infrastructure. It also helps them secure their networks and realize productivity gains.

More specifically, ITIL enables organizations to:

  • Keep IT aligned with business needs, ensuring that the right infrastructure is in place for the task at hand. For example, a team that has a mobile workforce should leverage cloud platforms that enable employees to work productively from any connected device.
  • Delight customers and strengthen user experiences by improving the delivery of IT services and maintaining a network and infrastructure that works as designed and meets modern expectations.
  • Reduce IT costs and eliminate unnecessary expenditures by ensuring that IT infrastructure is optimized and efficient. For example, if you’re storing petabytes of duplicative data for no reason, best practices would tell you that you need to do a lot of culling to save on storage costs.
  • Gain more visibility into IT expenses and infrastructure to better understand your network and detect inefficiencies that can be improved. For example, if your software development team has recently started using containers to build applications, you might not need to run as many virtual machines anymore, which drain more computing resources.
  • Increase uptime and availability due to increased resiliency and robust disaster recovery and business continuity plans. This is a big deal because downtime can be prohibitively expensive, depending on the scale of your organization. Just ask Amazon.
  • Future-proof tech infrastructure to support agile workflows and adaptability in an era where customer needs shift overnight and competitors are always just a few taps of a smartphone away.

What Are the Cons of ITIL? 

But like everything else, ITIL by itself is not a panacea. You can’t just hire a consultant to preach the virtues of ITIL and expect to transform your IT operations overnight.

While the benefits of the framework speak for themselves, you need to be realistic about shifting to a new approach to IT management: staff need training, new processes add overhead before they pay off, and cultural change takes time. However, with the right approach—which includes training, patience, and reasonable expectations—your organization stands to benefit significantly by adopting ITIL.

How Has ITIL Changed Over the Years?

ITIL initially emerged because more and more organizations were using new technologies but nobody really knew how to manage them effectively. Companies were largely using technology because they could—not because they were making strategic investments to support their customers and business needs. The initial iteration of ITIL found that most companies had the same requirements and needs for their IT networks, regardless of size or industry.

At the turn of the millennium, the second iteration of ITIL was released. In large part, this version consolidated and simplified the teachings and documentation from the inaugural ITIL framework.

In May 2007, ITIL 3 arrived. This third iteration included a set of five reference books: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. ITIL 3 picked up where ITIL 2 left off, further consolidating the framework to make it easier for organizations to implement.

Four years later, ITIL 3 was revised once more, primarily to maintain consistency as technology evolved.

Introducing ITIL 4

Fast forward to 2019, and the most recent version, ITIL 4, is where we’re at today. Quite simply, ITIL 4 was issued to align the standards with the agile and DevOps workflows that have grown to dominate technology teams over the last several years. ITIL 4 includes two core components: the four dimensions model and the service value system. 

At a high level, ITIL 4 represents more of a change in approach and philosophy than a change in content. Just as software teams adopt agile and DevOps workflows, IT must adopt a similar mindset if it wishes to keep pace and support accelerated innovation. At the end of the day, IT is a cornerstone of the modern organization’s success. It’s imperative that IT support the new way of working if an organization wishes to reach its full potential.

How Have Agile and DevOps Impacted ITIL 4?

In the past, software teams would build monolithic applications and release maybe once a year. Today’s leading software development teams have embraced agile development and DevOps workflows. Slowly but surely, monthly releases are becoming the norm. Development is becoming more collaborative, too, with both colleagues and users steering the product roadmap.

ITIL 4 recognizes and supports this new way of working with seven guiding principles:

  • Focus on value.
  • Start where you are.
  • Progress iteratively with feedback.
  • Collaborate and promote visibility.
  • Think and work holistically.
  • Keep it simple and practical.
  • Optimize and automate.

Where Does Your Organization Stand?

If your company hasn’t yet implemented ITIL, what are you waiting for?

Whether you’re a startup or your organization has been around forever, ITIL serves as a guiding framework. Follow it, and you’ll be able to protect your networks, support your developers, and delight your customers.

And what exactly is the alternative, anyway? Running your IT department like the Wild West?

With so much on the line, you can’t afford that risk. So become an ITIL-driven organization. That way, you’ll get the peace of mind that comes with knowing your networks and infrastructure are secure and support innovation and agility. 

What’s not to like?

Author Justin Reynolds

This post was written by Justin Reynolds. Justin is a freelance writer who enjoys telling stories about how technology, science, and creativity can help workers be more productive. In his spare time, he likes seeing or playing live music, hiking, and traveling.


Software Testing Anti-Patterns

Since the dawn of computers, we’ve always had to test software. Over the course of several decades, the discipline of software testing has seen many best practices and patterns emerge. Unfortunately, several anti-patterns are also present in many companies.

An anti-pattern is a pattern of activities that tries to solve a certain problem but is actually counterproductive. It either doesn’t solve the problem, makes it worse, or creates new problems. In this article, I’ll sum up some common testing anti-patterns.

Only Involving Testers Afterwards

Many companies only involve the testers when the developers decide a feature is done. The requirements go to the developers, who change the code to implement the requested feature. The updated application is then “thrown over the wall” to the testers. They will then use the requirements to construct test cases. After going through the test cases, the testers will often find all sorts of issues so that the developers need to revisit the new features. This has a detrimental effect on productivity and morale.

Such an approach to testing is used in many companies, even those that talk about modern practices like Agile and DevOps. However, “throwing things over the wall” without input from the next step goes against the spirit of Agile and DevOps. The idea is to have all disciplines work together towards a common goal.

Testing is about getting feedback, regardless of whether it is automated testing or not. So of course you have to test after the feature has been developed. But that doesn’t mean you can’t involve your QA team earlier in the process.

Having testers involved in defining requirements, identifying use cases, and writing tests is a way to catch edge cases early and leads to quality tests.

Not Automating When You Can

Tests that run at the click of a button are a huge time saver, and as such they also save money. Any sufficiently large application can have hundreds or even thousands of automated tests. You can’t achieve efficient software delivery if you’re running all of these manually. It would simply take too much time.

One alternative I’ve seen is to stop testing finished features. But due to the nature of software, existing features that used to work can easily break because of a change to another feature. That’s why it pays off to keep verifying that what used to work still works now.

The better alternative to manual testing is to automate as many tests as you can. There are many tools to help you automate your tests, ranging from the low level of separate pieces of code (unit tests), through the integration of those pieces (integration tests), to full-blown end-to-end tests.
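To make the lowest level concrete, here’s a minimal sketch of an automated unit test in Python using the standard unittest module. The apply_discount function is a hypothetical stand-in for real business logic, not something from the article:

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: apply a percentage discount, never more than 100%."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        # 25% off 100.00 should be 75.00
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        # A 0% discount leaves the price unchanged
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_rejected(self):
        # Discounts above 100% are a bug and should fail loudly
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)
```

A suite like this runs with a single command (`python -m unittest`) on every change, which is exactly the kind of repeatable verification that manual testing can’t match at scale.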

As a tester, you should encourage the whole team to be involved in automated testing. It will encourage them to write code that is fit for automated tests. Help developers write and maintain automated tests. Help them identify test cases.

Expecting to Automate Everything

As a counterargument to my previous point, be wary of trying to automate every aspect of testing. Manual testing can still have its place in a world where everything is increasingly automated.

Some things could be too hard or too much work to automate. Other scenarios may be so rare that it isn’t worth automating, especially if the consequences of an issue are acceptable.

Another thing you can’t expect to automate is exploratory testing. Exploratory testing is where testers use their experience and creativity to test the application. This allows the testers to learn about the application and generate new tests from this process. Indeed, in the words of software engineering professor Cem Kaner, the idea behind exploratory testing is that “test-related learning, test design, test execution, and test result interpretation [are] mutually supportive activities that run in parallel throughout the project.”

Lack of Test Environment Management

Test Environment Management spans a broad range of activities. The idea is to provide and maintain a stable environment that can be used for testing.

Typically, we call such an environment a testing or staging environment. It’s the environment where testers or product owners can test the application and any new features that the developers have delivered.

However, if such an environment isn’t managed well, it can lead to a very inefficient software delivery process. Examples are:

  • Confusion over which features have already been deployed to the test environment.
  • Missing critical pieces or external integrations, meaning not everything can be tested.
  • Hardware that differs significantly from the production environment.
  • Incorrect configuration of the test environment.
  • A lack of quality data to test with.

Such factors can lead to back-and-forth discussions between testers, management, and developers. Bugs may go unnoticed, or reported bugs may not be bugs at all. Use cases may be hard to test, and bugs reported in production hard to reproduce.

Without good test environment management, you will be wasting time and losing money.

Unsecured Test Data

Most applications need a set of data to test certain scenarios. Not all data is created equal, though. With modern privacy laws, you want to avoid using real user data. Both developers and testers often have to dig into the data of the test environment to see what is causing certain behavior. This means reading what could be personally identifiable information (PII). If this is data from real users, you might be violating certain laws.

Moreover, if your software integrates with other systems, the data may flow away from your system to a point where it is out of your control, maybe even to another company. This is not something you want to happen with real people’s data. Security breaches can lead to severe damage to your public image, financial losses, or fines.

So you want either made-up data or obfuscated, secured data. But you also want to make sure that the data is still relevant and valid in the context of your application. One possible solution is to generate the data your tests need as part of the tests themselves.
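As a sketch of that approach, here’s how a test might generate its own synthetic, reproducible data in Python. The fake_user helper and its fields are illustrative assumptions, not a real library:

```python
import random
import string


def fake_user(seed: int) -> dict:
    """Generate a synthetic user record -- no real PII involved."""
    rng = random.Random(seed)  # seeded, so the same seed always yields the same record
    name = "user_" + "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "id": seed,
        "name": name,
        "email": f"{name}@example.test",  # reserved test domain, never routable
        "birth_year": rng.randint(1950, 2005),
    }


# Each test generates exactly the data it needs, instead of
# pulling records from a copy of the production database:
users = [fake_user(i) for i in range(3)]
```

Because the generator is seeded, failing tests are reproducible, and because every value is made up, no privacy law is at risk.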

Not Teaching Developers

The whole team owns the quality of the software. Pair with developers and teach them the techniques so that they can test the features as they finish them.

This is especially important in teams that (aspire to) have a high level of agility. If you want to continuously deploy small features, the team will have to continuously test the application. This includes developers, instead of having them wait for the testers.

In such a case, the role of testers becomes more of a coaching role.

If testers and developers don’t work together closely, both will have negative feelings for each other. Developers will see the testers as a factor blocking them from moving fast. Testers will have little faith in the capacity of the developers to deliver quality software.

In fact, both are right. If the two groups don’t collaborate, precious time and effort will be lost in testing a feature, fixing bugs, and testing the feature again. If the developers know what will be tested, they can anticipate the different test cases and write the code accordingly. They might even automate the test cases, which is a win for testers and developers.

Streamline Your Testing!

The major theme in this article is one of collaboration. Testers and developers (and other disciplines) should work together so that the software can be tested with the least amount of effort. This leads to a more efficient testing process, fewer bugs, and a faster delivery cycle. Top that off with good test environment management (which is also a collaborative effort) and secure data, and you have a winning testing process.

Author Peter Morlion

This post was written by Peter Morlion. Peter is a passionate programmer that helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.


Why Configuration Management Is at the Heart of ITIL

For many organizations, IT starts small and grows. They don’t plan out how their IT organization will interact with the rest of the business. Instead, they hire a person or two to handle a few computers and maybe set up a server. Over time, those roles grow alongside the business. Eventually, IT leadership recognizes that the business needs more out of the IT organization than what they’re providing. Sometimes it’s because customers aren’t able to get the hardware or software they need.

Whatever the cause, many organizations come to realize that their IT organization just isn’t cutting it.

In lots of instances, those organizations choose to use ITIL, the Information Technology Infrastructure Library.

What’s ITIL?

In the 1980s, the British government established the Information Technology Infrastructure Library. In the decades since, they’ve updated it repeatedly. It defines a series of best practices that aid IT organizations in delivering high-quality IT solutions to their business. ITIL is actually a very big set of guidelines; the original library was more than thirty books! Even though it’s changed many times throughout the years, ITIL still has a core focus on some key principles. Top among those principles is the idea that IT organizations should focus on providing value, work iteratively, and start from where they are.

This means that organizations shouldn’t have to drastically re-organize the way they do business to adopt ITIL best practices. Instead, they should look at how they’re providing value already. They should then identify ways they can provide more value to the business, and implement those changes over time, a little bit at a time. Short, achievable goals can mean that the business entities who rely on the IT organization see constant improvement, instead of waiting for big, difficult projects that may or may not deliver.

A common early step in this process is to implement a configuration management database.

What’s Configuration Management?

Configuration management is the process of storing information about the IT resources within your organization in a centralized repository. Usually, this takes the form of a relational database. As the name implies, you also store information about the configuration of the system inside that database.

Starting your configuration management project can feel a bit like starting in the deep end. Even in businesses with fewer than 100 employees, it’s likely you have a lot of IT resources. To do configuration management right, you need to find every one of those resources! That said, you should plan to treat creating a configuration management database like any other project. Plan how you’ll undertake asset discovery. Evaluate options for the configuration management database software. Define a realistic picture of success. Then put that plan into action and execute it to completion.
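To show how small such a repository can start, here’s a sketch of a minimal configuration management database using Python’s built-in SQLite support. The table layout and field names are assumptions for illustration, not a standard CMDB schema:

```python
import sqlite3

# A deliberately minimal CMDB: one table of configuration items (CIs),
# each with a type, an owner, and a free-form configuration blob.
conn = sqlite3.connect(":memory:")  # use a file path for a persistent database
conn.execute("""
    CREATE TABLE configuration_items (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL UNIQUE,
        ci_type TEXT NOT NULL,   -- e.g. 'server', 'laptop', 'database'
        owner   TEXT,            -- team or person responsible
        config  TEXT             -- JSON blob of settings
    )
""")

# Asset discovery feeds rows like this one into the database:
conn.execute(
    "INSERT INTO configuration_items (name, ci_type, owner, config) VALUES (?, ?, ?, ?)",
    ("web-01", "server", "platform-team", '{"os": "Ubuntu 22.04", "ram_gb": 16}'),
)

# Once everything is in one place, questions become simple queries:
rows = conn.execute(
    "SELECT name, owner FROM configuration_items WHERE ci_type = 'server'"
).fetchall()
```

A real implementation would add relationships between items, change history, and access control, but the core idea is exactly this: every asset and its configuration in one queryable place.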

How Does Configuration Management Highlight Value of Your Assets?

As noted, configuration management takes all the information about your business’s IT assets and brings it to one place. This is a benefit. When information is spread out in multiple silos, it’s difficult to find what you need. A breakdown in a critical system is much easier to fix when you know how that system works and how it’s supposed to be configured.

Configuration management projects bring additional benefits to IT organizations. It’s common for IT leadership to discover assets they didn’t know existed during the asset discovery phase of a configuration management project. Usually, these assets were quietly doing their jobs, but they were unsupported by IT and often haven’t received updates in years. This is a serious security risk. Identifying those assets and establishing a proper support plan is a great side effect of configuration management projects.

How Does Configuration Management Optimize the Value of Your Assets?

Another way that configuration management provides value is by optimizing your IT assets. Once you know where all your IT assets are and how they’re performing, you can standardize on optimal configurations for all of your assets. Configuration management means you know which laptops perform the best for which employees. It’s easy to spot which servers have non-standard configurations when all your configurations are in a single place. Your IT organization can provide value by helping your users get the most out of their systems by standardizing on high-performance configurations.
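Once every configuration lives in one place, spotting non-standard configurations can be a simple comparison against a baseline. This Python sketch uses made-up server data and a hypothetical baseline to show the idea:

```python
# Hypothetical baseline: the standard configuration every server should match.
BASELINE = {"os": "Ubuntu 22.04", "ssh_root_login": "disabled", "ram_gb": 16}

# Made-up records, as they might come out of the configuration database.
servers = {
    "web-01": {"os": "Ubuntu 22.04", "ssh_root_login": "disabled", "ram_gb": 16},
    "web-02": {"os": "Ubuntu 20.04", "ssh_root_login": "enabled", "ram_gb": 16},
}


def find_drift(actual: dict, baseline: dict) -> dict:
    """Return the settings where a system deviates from the baseline."""
    return {k: v for k, v in actual.items() if baseline.get(k) != v}


# Flag only the servers that deviate, along with their offending settings.
drifted = {
    name: find_drift(cfg, BASELINE)
    for name, cfg in servers.items()
    if find_drift(cfg, BASELINE)
}
```

Here web-02 would be flagged with its outdated OS and risky SSH setting, the kind of non-standard configuration that’s nearly invisible when configuration data is scattered across silos.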

Finally, IT organizations can minimize the amount of time they spend keeping systems up to date. With standard configurations on all systems, activities like applying patches become a one-step process. That means your business is more secure while your IT organization spends less time updating systems.

How Can We Make Configuration Management Successful?

Even though the goal of configuration management is to store all of the information about your IT assets in one place, the project doesn’t need to be monolithic. You can approach it piecemeal. Instead of trying to gather information about every asset across the whole company, focus on one division at a time. Work with the employees there to identify IT assets and how those assets are configured. Once you’ve done this, train those employees to work with the configuration management system. This means that when they need to change configurations or add new systems, they’ll know how to work with your IT team.

This kind of iterative approach pays off in more ways than one. Not only will you break the project into manageable chunks, but you’ll also learn along the way. It’s guaranteed that you’re going to do some things wrong at first. Instead of doing all those things wrong across the whole organization, you can limit your mistakes to just a few employees. Those employees will be able to provide feedback to your team, and that feedback will mean that your project will do better. This ties into one of the core principles of ITIL, which is being iterative in your processes. You should learn from each step of your implementation to make the next one better.

Another way to be successful is to pick high-quality software. When you centralize your configurations, you want to use software that’s simple and straightforward. Choosing a quality implementation platform like Enov8 will save your team hundreds of hours and make the software easier to use for your business.

How Is Configuration Management the Heart of ITIL?

Good configuration management plays directly into the values that are at the heart of ITIL. It not only provides value to the business, but it makes life easier for IT employees too. You can approach configuration management as an iterative process, implementing it one step at a time. That might start with a basic database that tracks laptops and servers, and wind up with a system that tracks items all the way down to the component level. The heart of ITIL is that your team makes those choices.

ITIL isn’t a monolith. The goal isn’t to say that every organization should implement each part of the system. And at no point should you expect that everyone will implement each system in the same way. You should optimize the implementation for what your business needs. Your first step will always be sitting down with stakeholders in your business and determining what will work best for your team and theirs. That’s the heart of ITIL, and good configuration management is one step on the way to making your IT organization better for everyone.

Author Eric Boersma

This post was written by Eric Boersma. Eric is a software developer and development manager who’s done everything from IT security in pharmaceuticals to writing intelligence software for the US government to building international development teams for non-profits. He loves to talk about the things he’s learned along the way, and he enjoys listening to and learning from others as well.