Reasons Enterprise Configuration Management Is Failing

Enterprise configuration management (ECM) is a big topic. Projects that aim to implement ECM are, by their nature, significant endeavors. You’re trying to distill all of your organization’s core configurations into a single source. That’s difficult under the best of circumstances, and ECM projects can fail quickly. When they do fail, they tend to fail spectacularly. As a project or people manager, you’re responsible for big parts of your ECM project, and you want to make sure it’s going to be a success. How can you tell whether your ECM project is in danger of failing?

We’ve collected five major issues that will tell you whether your project is on the wrong track.

Issue #1: Eroding Trust

ECM projects, as we’ve noted, are massive. By necessity, they mean working with people outside of your own team. Different teams within an organization often have different goals and varying incentives to meet those goals. Sometimes those goals run contrary to the success of a big, organization-wide effort like an ECM project. For a project like this to succeed, teams and individual team members sometimes need to set aside their own goals.

To set aside their goals and work together, the teams involved need to trust each other. They have to be able to trust that the work they’re doing together will benefit the company more in the long run than prioritizing their own team goals would.

If you identify trust issues between your teams or the members of those teams, that’s a big red flag for your project. It means that those teams, when stressed, will do what’s in their own best interest, not in the interest of the whole organization.

Issue #2: Losing Sight of the Bigger Picture

Losing sight of the bigger picture is a facet of Issue #1, eroding trust. But it’s not exactly the same thing. While losing sight of the bigger picture can cause people to lose trust in other teams, losing context doesn’t always mean losing trust.

Sometimes losing sight of the bigger picture produces people who fixate on one detail of the project. That can manifest in lots of ways. Maybe they obsess over one corner of your implementation. Maybe they’re hyper-critical of a decision that’s been made while ignoring its bigger implications. Whatever the cause, they’re a challenge to work with because they can’t see past the small details. Since ECM stretches across your organization, you need people who see and promote enterprise-wide IT intelligence.

One or two of those kinds of people aren’t going to kill your project. But if you’ve got half a dozen, you’re going to have a hard time. Identifying those people and working through their concerns early on is important to your implementation’s success.

Issue #3: Things Stop Improving

Any long-term project needs continuous improvement to be successful. If you and your teams already knew how to do all of this, your project would be done already. Stagnation is the enemy of a good ECM implementation. Even if you’ve already delivered your initial ECM project, you need to constantly search for ways to improve what you’re delivering to the business.

Realistically, no matter how good your ECM implementation is, it’s not perfect. There are always ways you can improve, whether as a team or in terms of your technology, and the state of the art is constantly advancing. You need to be able to advance with it. Continuous improvement not only improves your efficiency, but it boosts employee morale too. High employee morale and a good sense of the cutting edge make it easier to recruit talented new employees. It also becomes much easier to retain the good employees you have.

The converse is also true. If you’re stagnating, you’re going to lose your best employees. It’ll be harder to attract top talent because you’re working with outdated processes and technology. That sort of thing leads to a snowball effect: it’s harder to attract top talent, which means it’s harder to tackle bigger challenges, and the cycle repeats.

Issue #4: Users Don’t Use ECM

This issue is a little tougher to detect. By definition, if your users are working around your ECM system, they’re trying to make sure you don’t know about it. That’s not good! If you find out that people are working around your ECM system to store essential configuration some other way, that’s a warning sign that something is wrong with your implementation and that you need to find a way to improve your systems.

There’s good news, though. If you find someone who’s working around your system, you know just who to ask to make things better. Instead of getting upset that someone is working around your system, treat your new knowledge as an opportunity. Take the time to sit down with them and ask why the system isn’t working for them. You might not be able to alleviate every issue they have, but you can help make things better. If one person is working around your system, they’re probably not alone.

Because an ECM is supposed to be a central repository of configuration, anyone working around the system represents a small failure. The key for you as a dedicated employee is to figure out just how pervasive those workarounds are. You might find, after a bit of investigation, that the employee in question is just lazy; that’s not a red flag. But if you find that they’re working around your system for good reasons, your ECM implementation is in trouble. Use that as an opportunity to get out there and improve your team and your process.

Issue #5: Lack of Management Buy-in

Even if you avoid every other red flag on this list, a lack of management buy-in will debilitate any project. It can also be the root cause of many of the other red flags on this list. If managers aren’t bought in, they’ll task their employees with priorities different from what your project needs to succeed. They might erode the time your team needs to think critically about what you’re doing and improve your processes. A manager who isn’t bought in might fight over tiny details in an attempt to derail the project. A petty manager will run interference for users who are working around your systems instead of taking the time to do the right thing.

As the person responsible for the project, it’s your job to make sure managers understand the what and the why of ECM. Old wisdom says that a house divided cannot stand; the same is true in business. Your ECM project is going to have a much tougher time if you have to fight other managers within your organization. If you find yourself fighting them, you should be worried about the state of your project. That’s the time to start asking those managers whether there are ways you can prove the usefulness of the project to them. If you can, that’s a great way to get your project back on track.

To Conclude

None of these issues is fatal to your project. If you find one of them cropping up around your implementation, don’t panic. A red flag is your cue to think critically about the choices that brought you to that point. Evaluate whether there are things you can change, and spend time talking to your users and the leads of other teams. They’ll help you determine the root causes of your problems, and knowing what to fix is half the battle.

Author

This post was written by Eric Boersma. Eric is a software developer and development manager who’s done everything from IT security in pharmaceuticals to writing intelligence software for the US government to building international development teams for non-profits. He loves to talk about the things he’s learned along the way, and he enjoys listening to and learning from others as well.

Test Environment Management Tools Compared

Five years ago, if you were asked to recommend a “Test Environment Management” platform, you might have struggled. In fact, you might have struggled to identify even one, particularly if you’d considered your own DevTest teams’ behaviour: lots of disruption, delays, and misconfiguration, and the inevitable use of spreadsheets for tracking project bookings, MS Visio documents for system information capture, and email for reporting, with perhaps, if you were lucky, some test automation for platform health checks. Not exactly elegant, nor scalable, but undoubtedly better than complete chaos.

However, things have somewhat changed. With a raft of solutions now claiming to solve this problem, the “Last Frontier of the SDLC,” the question is no longer “what” but “which”: which platform will meet our needs and address one of the SDLC’s biggest “Waste Areas”?

At TEM Dot we decided to compare six of the biggest players in this space across 10 key areas:

Key TEM Vendors

  • Apwide
  • Enov8
  • Omnium
  • Plutora
  • ServiceNow
  • Xebia

Key TEM Performance Areas

  1. Modelling
  2. Booking Management
  3. Coordination
  4. Ticketing
  5. Health Monitoring
  6. Automation & DevOps
  7. Data Management
  8. Reporting
  9. Extensibility
  10. Affordability

Test Environment Management Tool Scoring

Area-1 Environment Modelling

The ability to know what your Environments and Systems look like.

Historically think Visio or your CMDB (if you have one).

Gold Medal Position:   

Enov8 & ServiceNow both offer powerful Visual CMDBs & Component / discovery mapping.

Silver:

Plutora & Xebia offer modelling capability.

Bronze:

Apwide & Omnium modelling is achieved via tabular forms.

Area-2 Booking & Contention Management

The ability to capture environment requirements & manage contention on Environments & Systems.

Historically think Email & an attached Word document.

Gold Medal Position:   

Enov8 & Plutora offer advanced booking & contention analysis methods.

Silver:

Apwide & ServiceNow offer booking request capability (see Ticketing).

Bronze:

Xebia has no obvious environment booking or contention mechanism.

Area-3 Environment Coordination

Tracking Events & Release activity across space (Environments) & time (Month, Year, etc.).

Historically think MS Project plans.

Gold Medal Position:   

Apwide, Enov8, Plutora, ServiceNow offer Environment & Release based calendaring.

Note: Enov8 & Plutora offer Runsheets /Implementation Plans (respectively).

ServiceNow offers checklists.

Silver:

Xebia – Calendaring is release centric (as opposed to environment centric).

Bronze:

Omnium (limited capability identified).

Area-4 Ticketing

Ticketing / IT Service Management to capture Environment Change Requests, Incidents, etc.

Historically think Remedy.

Gold Medal Position:   

ServiceNow has advanced ITSM methods.

Silver:

Apwide (using Jira), Enov8, Plutora have solid Ticketing / Requests functionality.

Bronze:

Omnium & Xebia dependent on other tools.

Area-5 Health Monitoring

The ability to check that Systems, Components, or Interfaces are up.

Historically think Test Automation scripting or your server monitoring solutions like Zabbix.

Gold Medal Position:   

Enov8 & ServiceNow offer integration methods & native agents to monitor health.

Silver:

Apwide & Plutora have APIs that logically allow system health updates.

Bronze:

Omnium & Xebia don’t play in this space.
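
For teams without one of these platforms, this kind of check has historically been hand-scripted. The snippet below is a minimal, vendor-neutral sketch of that approach; the system names and URLs are placeholders, not any product’s API.

```python
# health_check.py: a minimal, vendor-neutral sketch of an environment
# health check. The system names and URLs below are placeholders.
import urllib.request
import urllib.error

ENDPOINTS = {
    "web-frontend": "https://sit1.example.com/health",
    "payments-api": "https://sit1.example.com/payments/health",
}

def is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, or an error status:
        # treat the system as down.
        return False

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(f"{name}: {'UP' if is_up(url) else 'DOWN'}")
```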

Area-6 Automation & DevOps

The ability to automate key Environment Operations using code.

Think Jenkins or Puppet Jobs.

Gold Medal Position:   

Xebia is a powerful release orchestrator (its primary purpose).

Silver:

ServiceNow Orchestration automates IT & Business Processes.

Enov8 offers an “agnostic” Scripting Hub (Orchestration Manager), Pipelines, Playbook, Webhooks & URL Triggers.

Bronze:

Apwide integration is very simple; it can be achieved with GET/POST methods.

Plutora needs other tools (like Dell Boomi) to automate and integrate properly. The SaaS-only option can also be limiting.

Omnium integrates with other tools to automate.
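
As a concrete illustration of what “automating environment operations using code” looks like, here’s a minimal sketch that triggers an environment operation through a webhook, the style of integration several of these platforms expose. The URL and payload shape are invented for illustration, not any particular vendor’s API.

```python
# trigger_operation.py: a tool-agnostic sketch of kicking off an environment
# operation via a webhook. The URL and payload shape are invented.
import json
import urllib.request

WEBHOOK_URL = "https://tem.example.com/hooks/environment-operations"  # placeholder

def trigger(environment: str, operation: str) -> int:
    """POST a JSON payload to the webhook and return the HTTP status code."""
    body = json.dumps({"environment": environment, "operation": operation})
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    # e.g. ask the platform to refresh the SIT1 environment
    print(trigger("SIT1", "refresh"))
```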

Area-7 Data Management

The ability to manage one’s data, e.g. extracting, masking, and provisioning data.

Think Compuware File-Aid.

Gold Medal Position:   

Enov8 seems to be the only solution addressing Test Data. It offers support for Data (PII/Risk) Profiling & Masking and Data Bookings through its “Data Compliance Suite,” and Enov8’s Visual Orchestrate can also be used to schedule other data tools.

Silver:

Xebia & ServiceNow capabilities are limited but they can leverage their orchestrators and call other tools.

Bronze:

Apwide, Omnium & Plutora don’t appear to play in this space.
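
To illustrate the masking capability this area refers to, here’s a toy sketch that pseudonymizes PII columns in a CSV extract. The column names are invented, and this is not how any of the products above implement masking.

```python
# mask_extract.py: a toy illustration of data masking. It replaces PII columns
# in a CSV extract with deterministic hashes; the column names are invented.
import csv
import hashlib

PII_COLUMNS = {"email", "surname"}  # hypothetical PII column names

def pseudonymize(value: str) -> str:
    """Same input always yields the same token, preserving joins across files."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]

def mask_file(src: str, dst: str) -> None:
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for col in PII_COLUMNS & set(row):
                row[col] = pseudonymize(row[col])
            writer.writerow(row)
```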

Area-8 Reporting

The ability to get & share insights about your Environments.

Historically think drawing pretty pictures & graphs with PowerPoint.

Gold Medal Position:   

A lot of the tools have solid reporting; however, when the focus is Environments, none stands out: no Gold Medal yet.

Silver:

Enov8 seems to have the best out-of-the-box Environment dashboards, but needs simpler customization.

ServiceNow Environment dashboards are limited but ultimately extensible.

Xebia has some solid reports, but they are more deployment focused.

Plutora is reliant on a new “Tableau” extension. It’s getting there, but seems disjointed.

Bronze:

Apwide leverages Jira’s native capabilities.

Omnium’s approach is somewhat “download/export” focused.

Area-9 Extensibility

The ability to have the product do whatever you want.

Think of Salesforce or SAP.

Gold Medal Position:   

ServiceNow – An Extensible Engine. You can use it to build anything.

Enov8 – An “Object Oriented” Extensible Engine. You can use it to build anything.

Silver:

Plutora has broad customization features so you can “partially” alter its behaviour.

Bronze:

Xebia allows customization of your processes but not the platform itself.

With Apwide & Omnium you basically get what you get.

Area-10 Cost

The money ball question. And potentially the most important for some.

Gold Medal Position:   

Low Cost of Entry – Apwide, Enov8 (Free Team Edition) & Omnium

Silver:

Medium – Plutora & Xebia

Bronze:

Expensive – ServiceNow (Just add another “0” for licensing & tailoring services)

The “Test Environment Management Tool” Score Card 

[Scorecard figure: Test Environment Management Tools Comparison]

[Figure: Overall Test Environment Management Platform Rating]

Final TEM Tool Positions

Position | Player | Findings
#1 | Enov8 | Very much a Test Environment centric solution.
#2 | ServiceNow | An extensible ITSM solution, expensive but powerful.
#3 | Plutora | More focused on Release Planning.
#4 | Apwide | Simple & elegant TEM/Release tool that has its place at the table.
#4 | Xebia | More focused on Continuous Delivery.
#5 | Omnium | Inexpensive and will be the right fit for some.

Note: Scoring was limited to the ten key areas recognised by TEM Dot as the most important for successful Test Environment Management. The scores do not reflect broader functionality, i.e. functionality that may be deemed more important for your organization. If you feel there are inaccurate statements in this comparison, or a tool is missing, please reach out using our contact form.

DataOps Explained

Preamble

Companies, especially large internet companies, treat collections of data as an asset. And more and more companies are developing an appetite to leverage their data to compete. At the same time, customers increasingly expect the fast release of high-quality products and services.

So how do you balance speed and quality? DataOps is your answer. Let’s take a look at what DataOps is and why it matters.

What Is DataOps?

The term DataOps is an abbreviation of the words data operations.

The speed of development and product release has increased over the last 10 years thanks to practices such as DevOps (development operations). As a result, we have a new problem: data and more data. To help draw insight from loads of raw data, companies use data analytics. There are various types, such as data mining, that help identify trends, patterns, and relationships in large data sets. Unfortunately, in our need-it-now economy, users of data analytics can’t, or won’t, wait weeks or months for new analytics.

With the increased complexity of the emerging data ecosystem and the need to deliver insights more quickly, a new strategy is essential if we’re to gain value from massive amounts of data.

This is where DataOps comes in. It helps improve the delivery speed and robustness of analytics. In other words, DataOps is an automated, process-oriented methodology that helps analytics and data teams improve the quality of data analytics, as well as reduce its cycle time. To achieve this, DataOps combines agile development, DevOps, and statistical process control.
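
Statistical process control is the least familiar of those three ingredients. The idea, shown in a deliberately minimal sketch with made-up numbers, is to treat a pipeline metric (say, daily row counts) like a measurement on a production line and flag values that drift outside the control limits.

```python
# spc_check.py: a minimal sketch of statistical process control applied to a
# pipeline metric. Flags today's value if it falls outside mean +/- 3 sigma.
import statistics

def out_of_control(history: list, today: float, sigmas: float = 3.0) -> bool:
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    return abs(today - mean) > sigmas * sd

daily_row_counts = [10_120, 9_980, 10_340, 10_050, 10_210]  # made-up history
print(out_of_control(daily_row_counts, today=14_900))  # True -> investigate
```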

Similar to how DevOps brought together development and operations teams to handle software delivery problems, DataOps seeks to bring together data practitioners to deliver quality data for applications and business processes.

But do we really need another methodology?

Why DataOps Matters

In our current on-demand economy, a company has to rely on data from various sources to better understand its products, customers, and markets. This all sounds good until you factor in the dynamic nature of data. How do you effectively monitor the flow of a company’s data that includes prediction changes, business anomalies, trend changes, and more?

Someone could argue that we already have analytics to handle all of the data issues. But here’s the problem: data analytics pipelines are in a deplorable state because of

  • inadequate automation and orchestration,
  • minimal code and data reuse, and
  • a lack of coordination between the involved parties, such as IT, operations, and even business stakeholders.

In the end, we have poor-quality data that’s delivered too late to meet a business’s needs.

As more and more data is collected, the data pipelines become more complex. At the same time, large, more traditional enterprises realize the need to use all the data their company generates. Such information is becoming important even in everyday decisions.

Needless to say, all of these factors make it necessary for an organization to implement a new approach to govern the flow of data through its life cycle.

And here’s one more reason to consider using DataOps. Companies that have already implemented DevOps practices will find that implementing DataOps gives them a higher competitive edge. This is because the DevOps engineering framework may be regarded as preparation for DataOps. Organizations that rely on data need a similar high-quality and consistent framework that’s useful for fast data analysis.

Implementing DataOps in 7 Steps

DataOps is still a rising approach for data-driven organizations. DataKitchen, a company that developed a DataOps platform for data-driven enterprises, suggests seven steps for implementation. And the good news is you don’t have to discard your existing analytics tools.

Here are the seven steps to implementing DataOps.

Add Data and Logic Tests

This step requires that every time you make changes to an analytics pipeline, you have to add a test for the change. Testing applies to data, models, and logic. The idea is to make sure nothing will be broken in the analytics pipeline. These incremental, automated tests ensure that quality and integrity are built into the final output.
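
As a minimal sketch of what such tests might look like, consider a hypothetical pipeline step that enriches order records; the transform and field names below are invented for illustration.

```python
# test_enrich.py: a minimal sketch of logic and data tests for a hypothetical
# pipeline step. The transform and field names are invented for illustration.
def enrich(order: dict) -> dict:
    """Toy pipeline step: add a total computed from quantity and unit price."""
    return {**order, "total": order["qty"] * order["unit_price"]}

def test_enrich_computes_total():
    # Logic test: the transformation itself behaves as specified.
    assert enrich({"id": 1, "qty": 3, "unit_price": 2.5})["total"] == 7.5

def test_no_negative_totals():
    # Data test: every enriched record satisfies a business invariant.
    orders = [{"id": i, "qty": i, "unit_price": 1.0} for i in range(5)]
    assert all(enrich(o)["total"] >= 0 for o in orders)
```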

Use a Version Control System

In order for raw data to produce useful information, it goes through many processing steps. And all of these steps involve coding. In a similar manner to other software projects, the source files that data analysts use in the data pipeline require maintenance in a version control system such as Git. The aim of version control is to help keep track of changes and revisions. Keeping the code in a repository is also important, as it helps when there is a need for disaster recovery.

Branch and Merge

To maintain coding changes, data analytics should borrow the approach that software developers use to maintain their projects, which is to continuously update code source files. For instance, when a developer wishes to make changes, they pull out the relevant code from the repository. Changes are then made on the local copy (also called a branch) pulled from the repository. Once new changes are made and tested, the local copy (branch) is merged back into the repository.

Use Multiple Environments

Data analytics team members should have their own environment to work from. These environments will allow team members to work on subsets of data while isolating the rest of the organization from any effects of the ongoing maintenance or additions to the existing data.

Reuse and Containerize

Breaking down a data analytics pipeline into smaller components facilitates code reuse and containerization. By doing this, the data analytics team can move quickly as they leverage existing libraries or other code whenever they want to extend or develop new code.
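
One way to picture this is a pipeline factored into small, independently testable stages that can be composed and, later, packaged into containers. The sketch below is a toy example with invented stage names.

```python
# pipeline.py: a toy pipeline factored into small, reusable stages that can be
# tested in isolation and packaged into containers. Stage names are invented.
from typing import Callable, Iterable

Stage = Callable[[Iterable[dict]], Iterable[dict]]

def drop_invalid(rows: Iterable[dict]) -> Iterable[dict]:
    return (r for r in rows if r.get("id") is not None)

def normalize_names(rows: Iterable[dict]) -> Iterable[dict]:
    return ({**r, "name": r.get("name", "").strip().lower()} for r in rows)

def run(rows: Iterable[dict], stages: Iterable[Stage]) -> list:
    for stage in stages:
        rows = stage(rows)
    return list(rows)

print(run([{"id": 1, "name": " Ada "}, {"id": None}],
          [drop_invalid, normalize_names]))
# -> [{'id': 1, 'name': 'ada'}]
```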

Parameterize Your Processing

Borrowing the idea of parameters from software development will help in designing a robust data pipeline. And a flexible data-analytics pipeline will accommodate varying run-time circumstances.
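
For example, run-time settings can be lifted out of the code and passed in as parameters so that the same pipeline handles different dates, environments, or data subsets. The sketch below assumes made-up parameter names.

```python
# run_pipeline.py: a sketch of a parameterized pipeline entry point. The
# parameter names (--run-date, --env, --sample) are invented for illustration.
import argparse

def run(run_date: str, env: str, sample: float) -> None:
    # A real pipeline would read, transform, and load data here.
    print(f"Running pipeline for {run_date} in {env} on {sample:.0%} of the data")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Parameterized pipeline sketch")
    parser.add_argument("--run-date", required=True)
    parser.add_argument("--env", default="dev", choices=["dev", "test", "prod"])
    parser.add_argument("--sample", type=float, default=1.0)
    args = parser.parse_args()
    run(args.run_date, args.env, args.sample)
```

Invoked as, say, python run_pipeline.py --run-date 2020-01-31 --env test --sample 0.1, the same code serves development, test, and production runs.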

Use Simple Storage

Simple storage helps make the whole data analytics pipeline readily available, and it eases the updating process.

What About Data Security?

There’s a lot of concern about how to gain insights from raw data in a robust yet fast manner. But we shouldn’t forget the consequences of data breaches across the globe. The cost of mishandling personally identifiable data is becoming too high. As you work toward building more and delivering faster, it’s important to consider the security of the data you handle.

When implementing DataOps, you must protect the data at every stage of its journey. Always keep in mind the bad guys who are ready to grab your data. And don’t forget the issue of accidentally sharing sensitive data that may cause you to fail to meet regulatory compliance.

Thankfully, there are solutions that help take these worries away, such as Data HotSpot—a product specifically designed for those in test data management and those who consume test data. With Data HotSpot, you are assured complete security, customer protection, brand protection, and penalty avoidance. That means you can implement DataOps and stay way ahead of your competitors with real-time or near real-time analytics.

Unlock the Value of Data

Today, there’s a need to make data available in real time or near real time because businesses rely on it to retain a competitive edge. As a result, it has become necessary to create analytics methods that can quickly provide data for consumption by users or applications.

DataOps is a multidisciplinary approach that helps data analytics teams overcome the challenges of inflexible and poor-quality data. If an organization can implement DataOps properly, they will experience great improvements in producing robust and adaptive analytics.

As we’ve seen, DataOps matters today because it helps organizations create reliable and readily available data flows. And availability plays an important role in unlocking the value of an organization’s data.

Author: Alice Njenga

This post was written by Alice Njenga. Alice’s areas of expertise include technology, artificial intelligence, IoT, cloud computing, security, and telecommunication. She especially enjoys converting dense technical material to articles that are easy for the layman to understand.