Data Environments Evil Twin

Data: Environments' Evil Twin – Top Breaches of 2017

Preamble

Data is, without doubt, the evil twin when it comes to managing your Environments. It is complicated both internally, within an individual database, and at a more holistic level, where data and its inherent relationships span the organization.

This complexity ultimately exposes the organization to all kinds of challenges. One of the main ones is the likelihood that unwanted information (for example, customer Personally Identifiable Information) will appear in, or leak into, the wrong places: your Development, Integration, and Test Environments*, or worse still, the public internet. A sub-optimal situation when one considers that 70% of breaches (source: Gartner) are committed internally.


Tip*: Don’t ignore Non-Production, as that’s where projects spend 95% of their time.

Anyhow, here's a post from TJ Simmons on some of the top breaches from last year.

Top 5 Data Breaches of 2017

Back in September 2017, news broke about a group called OurMine hacking Vevo. OurMine got away with 3.12TB of Vevo's data and posted it online. That's when I came across this comment on Gizmodo:

Gizmodo Data Hacking

That comment says it all. Every other day, it seems like another company is on the receiving end of a data breach. Most security experts will tell you that the data breaches we know of so far are merely the tip of the iceberg. One expert noted in 2015 that we should expect larger hacks to keep happening. And larger hacks have kept happening, leading to bigger and messier data breaches.

The following is a compilation of the worst data breaches of 2017. Companies usually discover data breaches long after the breach actually happens. Therefore, I will organize these breaches in chronological order based on when each breach was made known, not when it actually happened. Just as with my selection of the top IT outages of 2017, I selected the following five breaches based on their impact on customers and on the affected businesses' reputations.

  1. Xbox 360 ISO and PSP ISO (Reported: Feb. 1, 2017 | Breach Happened: ~2015)

The forum websites Xbox 360 ISO and PSP ISO host illegal video game download files. They also house sensitive user information such as email IDs and hashed passwords. According to HaveIBeenPwned, a website that helps users check if their personal data has been compromised, Xbox 360 ISO and PSP ISO had a combined 2.5 million compromised user accounts. The attacks happened way back in September 2015 and no one discovered them until February 2017. The compromised information consisted of stolen email IDs, IP addresses of the users, and salted MD5 password hashes.

The biggest takeaway for consumers: avoid shady websites like Xbox 360 ISO. Trusting websites that host illegal material with your personal information is dangerous. On the plus side, at least both websites hashed their users' passwords. If your website is holding onto users' passwords, please implement the most basic of security measures by hashing those passwords.
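On that last point, here is a minimal sketch of salted password hashing using Python's standard library. It uses PBKDF2-HMAC-SHA256, which is a far safer baseline than the salted MD5 the sites above used (dedicated schemes such as bcrypt, scrypt, or Argon2 via third-party libraries are stronger still):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a random per-user salt."""
    if salt is None:
        salt = os.urandom(16)  # a unique salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

The high iteration count deliberately slows down offline brute-force attempts, and the random salt ensures two users with the same password get different hashes.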

  2. Deloitte (Reported: September 2017 | Breach Happened: March 2017)

In the same year that Gartner named Deloitte "number one in security consulting for the fifth consecutive year," Deloitte faced an embarrassing data breach. Deloitte, a multinational professional services firm, saw its clients' confidential emails exposed when a hacker gained unauthorized access to its company email system. The affected clients included blue-chip companies, well-known firms, and US government departments. What caused the hack? Apparently, the main administrator account required a single password and Deloitte had not instituted two-step verification for that account.

The Deloitte hack is a great example of how even the most security-conscious firms can make security missteps. We need to learn from Deloitte by identifying and eliminating all possible loopholes in our own companies' IT setups, the sooner the better.
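The missing safeguard in Deloitte's case, a second factor, is well standardized. As an illustration of what an authenticator app actually computes, here is the HOTP algorithm from RFC 4226 (the counter-based core of TOTP one-time codes) in plain Python. This is a sketch for understanding, not a hardened implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian 8-byte counter, then dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # low nibble selects a 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# TOTP, as used by authenticator apps, is simply HOTP with counter = floor(unix_time / 30)
```

Even a simple second factor like this would have meant a stolen administrator password alone was not enough to read client email.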

  3. Equifax (Reported: September 2017 | Breach Happened: July 2017)

If you asked a regular Joe whether they could recall any major data breach from 2017, chances are they would cite this one. Equifax, one of the top three US credit agencies, suffered a breach affecting nearly 143 million consumers. Given the sensitivity of the stolen data and the number of people affected, this breach has been considered "one of the worst data breaches ever." The stolen data included Social Security numbers, driver's license numbers, full names, addresses, dates of birth, and credit card numbers.

In response to the breach, Equifax set up a website called equifaxsecurity2017.com to which it directed consumers who wanted to know if their data had been stolen. Some users reported that the website did not work. And many were angry to find out that in order to use the website, they would have to agree to an arbitration clause stating that they would waive their rights to a class-action lawsuit. Some users even tried entering fake names and fake Social Security numbers, and the website's response—informing them they "may be affected by the breach"—increased skepticism about the website's validity.

The bigger your organization, the more information you have. If the information you have is sensitive, you will become a bigger target for data breaches. Equifax holds a great volume of highly sensitive information. This should lead to a corresponding increase in security measures, but clearly there is a gap between what should be done and reality. Learn from Equifax and be honest when assessing your existing security measures. Is there a gap between where they are and where they ought to be?

  4. Yahoo! Update (Reported: December 2016 | Updated: October 9, 2017 | Breach Happened: ~2013)

Yahoo! had already reported this breach back in December 2016, revealing that "one billion user accounts were compromised" in a 2013 hack. It turns out they underestimated the impact of the breach in the original report. Former CEO Marissa Mayer subsequently told Congress that the 2013 data breach had affected all three billion of the company's user accounts. In other words, every single Yahoo! user account across popular services such as email, Tumblr, Fantasy, and Flickr suffered from the breach. Even after extensive investigations, the culprits' identities are still unknown.

Yahoo! is a classic case of a company with so many interdependent services that the complexity gives hackers opportunities to exploit. Notice how Yahoo! is still unable to identify the culprit? That speaks volumes to the challenges facing companies with a wide range of systems. You cannot plug the loopholes that you don't know exist. In other words, you need to start by knowing exactly how your internal IT systems work together.

  5. Uber (Self-Reported: November 21, 2017 | Breach Happened: ~2016)

On November 21, Uber CEO Dara Khosrowshahi published a post revealing that he had become aware of a late-2016 incident in which "two individuals outside the company had inappropriately accessed user data stored on a third-party cloud-based service" that Uber uses. This breach affected 57 million users and further damaged Uber's already faltering brand. The stolen information included names, email addresses, and phone numbers.  Approximately 600,000 of the 57 million affected users were drivers who had their license numbers accessed by hackers.

What made things worse was that Uber paid the hackers $100,000 to destroy the stolen data. As a result, Uber got even more bad press. Uber is an ambitious startup chasing high growth, one that emphasizes scale and agility above everything else. In hindsight, it clearly needed to emphasize security as well. With consumers becoming more data-aware, security-conscious companies can gain an edge on their competitors.

Conclusion: Manage Data Security? Get Clarity on Your Company's Systems First

Given the complexity of enterprise IT systems, hackers can now find more loopholes to get past your company's security measures. Therefore, having a clear big picture on how your systems work together is a security priority. We've seen how even the biggest and most security-conscious firms (remember Deloitte?) can fall prey to data breaches, precisely because their complexity makes it much harder for them to identify and prevent hacks. With that in mind, consider an enterprise-level dashboard that can show you that big picture vision that will help both your security and your productivity.

As the saying often attributed to Peter Drucker goes, "you can't manage what you can't measure." Most organizations have some way to measure security readiness. But do they have a way to measure, and make visible, how their systems work together? Knowing how your systems work together makes you better prepared to identify the root causes of potential hacks. If you can plug the gaps before they can be exploited, you can reach zero outages and zero breaches.

Author TJ Simmons

TJ Simmons started his own developer firm five years ago, building solutions for professionals in telecoms and the finance industry who were overwhelmed by too many Excel spreadsheets. He’s now proficient with the automation of document generation and data extraction from varied sources.

Smelly Test Environments

Smelly Environments

Kent Beck, the creator of Extreme Programming (XP), popularized the concept of "smelly code". Given that IT Environments seem to be a significant challenge for so many organizations, it got us thinking:

What might "Smelly IT Environments" smell of?

In a nutshell, here is our list of the top eight signs that your Test Environments smell like a kipper.

  1. Use of Excel spreadsheets to manage & track your Test Environments
  2. A CMDB that only ever focuses on the Hardware Components
  3. Too many Test Environment System Instances or Components (causing overspend)
  4. Too few Test Environment System Instances or Components (causing contention)
  5. Inability to identify current and future Test environment demand or usage (no test environment bookings)
  6. Lack of End to End Test Data Integrity (causing testing to fail and grind to a halt)
  7. Inconsistent & Slow IT Environment operations (heroics and manual intervention)
  8. Manual information gathering & reporting (typically the use of PowerPoint)



Ref: Edited from Original Source smelly-environments
Full Scaled versus Scaled Down

The Art of the Scaled-Down Performance Test Environment

The Challenge

A challenge organizations always face is the decision whether to fund the cost of a production-sized test environment or to look for more cost-effective alternatives.

A decision that can become somewhat "heated" between Quality Practitioners and Corporate "Realists".

By way of "distilling" the argument, I thought I'd summarize the upsides and downsides of "Fully Scaled" versus "Scaled Down".

Full Scaled versus Scaled Down

Full v Scaled Table

Full Sized Test Environment

  • Pro: Resembles production
  • Pro: Allows for production loads
  • Pro: Same code (similar config)
  • Pro: Production-like insights
  • Con: Cost prohibitive.
  • Con: Takes too long to provision.
  • Con: Even a full-sized environment won't be 100% production-like. There are too many subtle differences, such as network componentry.

Scaled Down Test Environment

  • Pro: Faster to set up
  • Pro: Much cheaper
  • Pro: Same code (different config)
  • Pro: Some insights
  • Con: Unable to exercise and/or detect all issues.
  • Con: Test data volumes have to be compromised as well.
  • Con: Assumes that the application and its components scale "relatively" linearly (vertically & horizontally). This assumption is often very wrong, resulting in skewed results.

Best Practice Tips

Of the two alternatives, organizations will typically gravitate to the latter due to budget constraints. With this in mind, here are four best-practice tips that you can apply to reduce your risks and ensure positive, qualitative outcomes when building a "Performance Test Environment".

  • Acknowledge the difference – And ensure risk is understood by all decision makers.
  • Keep the test environment consistent – Consistency will allow you to baseline & trend.
  • Performance Model – Bridge the gap & supplement your results with modelling & extrapolation. Understand application behaviour differences as you scale vertically or horizontally.
  • Leverage Cloud – Use a public cloud to build a temporary Performance Test Environment.
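On the "Performance Model" tip: the most common trap is extrapolating linearly from a scaled-down run. A small Python sketch using the Universal Scalability Law shows why; the contention and crosstalk coefficients here are purely illustrative, not measured values from any real system:

```python
def usl_throughput(n: float, lam: float = 100.0,
                   sigma: float = 0.05, kappa: float = 0.001) -> float:
    """Universal Scalability Law: lam = single-node throughput,
    sigma = contention penalty, kappa = coherency (crosstalk) penalty."""
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Naive linear extrapolation from a 1-node scaled-down test to 32 nodes...
linear_estimate = usl_throughput(1) * 32   # 3200.0
# ...versus what contention and crosstalk actually allow at 32 nodes
modelled = usl_throughput(32)              # ~903
```

Even small contention and coherency penalties compound with scale, which is why a model (however rough) beats straight-line extrapolation from a scaled-down environment.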

Summary

It is risky to recommend a "scaled down environment", as it is always a compromise, and "when things go wrong" they often go wrong in "fantastic" style. However, the reality is that most organizations can't afford to spend millions of dollars supporting a single application or project. As such, scaling down may be the only feasible option, and if that is the case, take into consideration the tips specified here.

Independent of your choice, remember that no matter how production-like your Test Environments are, there are always differences, and you should be prepared to continually understand these deltas and refine accordingly. Ultimately it is as much an art as it is a science.

About the Author

Jane Temov (author) is a Senior Environment Architect at Enov8. She has over 15 years of experience working across the IT industry in Europe & APAC and specializes in Technical Testing, Test Environment Management, Test Data and Release Operations.

WQR Test Environments

World Quality Report and Test Environment Management 2018

A review

TEM (DOT) COM is pleased to announce the 9th Edition of the World Quality Report from Capgemini, Sogeti, and Micro Focus. The 2018 report continues to represent and promote the world of Test Environment Management, with a whole chapter dedicated to "Test Data Management and Test Environment Management".

This year's report captures responses from over 1,600 executives across 32 countries.

Here are TEMDOTCOM's five favourite statistics:

  • 46% of respondents identified "a lack of appropriate test environments and data" as the biggest challenge facing Agile development.
  • 41% of respondents identified a "lack of facilities to book and manage test environments".
  • 48% of respondents identified "an inability to manage excess test environment demand".
  • 47% of respondents identified "a lack of visibility of test environment availability".
  • 38% of respondents identified "defects due to environment misconfiguration".

For more details on other statistics across Test Environments & Test Data, or across other Quality & DevOps topics, download a complete copy of the report here: https://www.capgemini.com/service/world-quality-report-2017-18/

Our Summary of WQR findings

While organizations continue to invest significantly in testing & QA, it is still apparent, based on this year's findings and those of previous years (see the 2016 and 2017 editions), that most organizations continue to struggle with the more complex aspects of Test Operations (Test Environment & Test Data Management).

On the upside, however, the release of the report does indicate an ongoing elevation of awareness, which will hopefully result in changed behaviour, improved WQR trends in future editions, and ultimately investment in tooling and better enterprise outcomes.

CICD Test Environment Management

Top 20 Continuous Integration Tools in 2017

A key consideration when building Test Environment Management solutions is to both understand and promote the DevOps philosophy of Continuous Integration and Continuous Delivery (CICD):
a practice that promotes the rapid merging of developer artifacts (which may span application software, data, and infrastructure) and ensures continual, automated build, test, packaging, and deployment.
Key benefits include:

  • Early identification of defects.
  • Developer/engineering change is managed continually (as opposed to a high-risk, big-bang approach).
  • A constantly available development & test environment with the current build.
  • Promotion of automated testing, i.e. unit & shakedown (reducing manual intervention).
  • Immediate feedback on issues that have been introduced.
  • Promotion of consistent & simplified packaging.
  • Promotion of repeatable (stateless) deployment methods.
  • Accelerated time to market.
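The "immediate feedback" benefit above comes from the fail-fast structure of a CI pipeline: once one stage fails, nothing downstream runs. A toy sketch in Python, where the stage names and callables are hypothetical stand-ins for real build, test, and deploy commands:

```python
def run_pipeline(stages):
    """Run (name, callable) stages in order; stop at the first failure."""
    results = []
    for name, stage in stages:
        ok = stage()
        results.append((name, ok))
        if not ok:  # fail fast: never package or deploy a broken build
            break
    return results

stages = [
    ("build", lambda: True),
    ("unit-test", lambda: False),  # a failing test halts the pipeline here
    ("package", lambda: True),
    ("deploy", lambda: True),
]
results = run_pipeline(stages)
```

Because the packaging and deploy stages never run after the failed test, the developer gets a defect report within minutes of the commit, rather than after a broken build reaches a shared environment.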
In today's vast software market there is a plethora of CICD & TEM tools to consider and choose from, some of which perform one specific task (e.g. system deployment, such as Atlassian Bamboo) and others which help govern the complete IT life-cycle across the enterprise (promoting SAFe, or "Agile at Scale", objectives; for example, Enov8 Environment Manager).
Looking for ideas on the best tools to help you in the complex world of CICD (Continuous Integration and Continuous Delivery)? Then visit Guru99's latest Top 20 list for CICD.

The 3 Ms of Test Environment Management

Why do we need Test Environment Management Solutions?

Test Environment Management can be defined as Managing, Monitoring, and Maneuvering an organization's test environments in line with its fast-changing IT requirements.
These three Ms are the pillars of any successful test environment management solution.

In today's fast-moving digital world, we need vibrant test environment management practices that help organizations increase performance, improve analytics, and gain better control through a combination of automation and real-time decision making.


History of TEM solutions

Before Test Environment Management solutions came onto the market, test environments were managed manually by Test Environment teams, predominantly using Excel sheets. When somebody needed an environment, or information related to it, they would refer to spreadsheets or a set of applications to work out the answers to their problems. This always ended up being a time-consuming exercise.
These were the most common challenges that arose from an entirely manual test environment management process:

  • Time Consuming – It took a lot of manual effort to maintain & collate the information required by stakeholders or end users.
  • People Dependent – Most of the information was maintained by a group of people or teams, so getting information always depended on a particular person.
  • Lack of Visibility – Teams working across locations, or on various projects, didn't have much insight into the changes or development happening in applications within the same organization. This sometimes resulted in production failures.
  • Multiple Tool Sets – Most organizations have different tools to manage different sets of information and activities, like SharePoint for document management, CMDB tools, etc. So everybody needs access to, or knowledge of, these tools to get the right information.

So, what should a good test environment management solution look like?

Here are some of the key features that should be available in any solution for it to be successful:

  • Environment Modelling – The solution should provide the capability to manage information, or refer to different sources of information, so that it can be used as a one-stop shop for all knowledge needs and be the source of truth for the organization.
  • Planning & Scheduling – The solution should provide the information and visibility required for better planning & coordination with ease. It should surface key information on contentions and risks that can impact a software release or project at the click of a button.
  • Project Demand Awareness – The solution should help the organization understand demand for test environments, allowing it to take a pro-active approach to planning the delivery of IT.
  • Service Management – The solution should have built-in workflows that can be used to streamline operations and services.
  • Release Operations – The solution should help standardize build, deploy, and test activities while promoting existing application automation toolsets through seamless integration.
  • Centralized Status Accounting & Reporting – By moving away from manual spreadsheets, emails, and other makeshift methods, the solution should provide the organization with effective reporting and dashboards on various aspects of environment management, including activity, performance, usage & availability.
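As a concrete illustration of the "Planning & Scheduling" point, here is a minimal sketch of contention detection over environment bookings. The data model (environment, project, start, end tuples) and the sample projects are hypothetical; a real solution would work from a shared booking calendar:

```python
from datetime import date

def find_contentions(bookings):
    """bookings: list of (env, project, start, end) tuples.
    Flags consecutive bookings that overlap on the same environment."""
    conflicts = []
    by_env = {}
    for booking in bookings:
        by_env.setdefault(booking[0], []).append(booking)
    for env, env_bookings in by_env.items():
        env_bookings.sort(key=lambda b: b[2])      # sort by start date
        for a, b in zip(env_bookings, env_bookings[1:]):
            if b[2] <= a[3]:                       # next starts before previous ends
                conflicts.append((env, a[1], b[1]))
    return conflicts

bookings = [
    ("SIT-1", "Project Apollo", date(2018, 3, 1), date(2018, 3, 14)),
    ("SIT-1", "Project Gemini", date(2018, 3, 10), date(2018, 3, 20)),  # overlaps Apollo
    ("SIT-2", "Project Mercury", date(2018, 3, 1), date(2018, 3, 7)),
]
```

Surfacing these overlaps before a release window opens is exactly the kind of "contention at the click of a button" visibility that spreadsheets struggle to provide.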

In summary, the solution should act as an umbrella over all your IT frameworks and tools, allowing the organization to leverage the best outcome from its existing capabilities and promoting transparency, control, and productivity.

Badly cooked Environments

The DevOps Chef

A good insight from Enov8, drawn from a recent customer meeting.



No, you can't automate your end-to-end Test Environment creation if you:

  1. Don't know what your systems look like, and
  2. Don't know the operational steps.

 
Tip! Learn to walk before you run. Get the basics right first.
 
 
 

Our Site Release

Test Environment Management (dot) com is pleased to announce the release of its new website. The site will be used to collate key Test Environment Management & TestOps information and news.