Data Environments Evil Twin

Data: Environments' Evil Twin – Top Breaches of 2017

Preamble

Data is without doubt the evil twin when it comes to managing your Environments. It is complicated both internally, within an individual database, and at a more holistic level, where data and its inherent relationships span the organization.

This complexity ultimately exposes the organization to all kinds of challenges. Chief among them is the likelihood that unwanted information (for example, Customer Personally Identifiable Information) will appear in, or leak into, the wrong places: your Development, Integration, and Test Environments*, or worse still, the public internet. That is a sub-optimal situation when one considers that 70% of breaches (source: Gartner) are committed internally.

Tip*: Don’t ignore Non-Production, as that’s where projects spend 95% of their time.

Anyhow, here's a post from TJ Simmons on some of the top breaches from last year.

Top 5 Data Breaches of 2017

Back in September 2017, news broke about a group called OurMine hacking Vevo. OurMine got away with 3.12TB of Vevo's data and posted it online. That's when I came across this comment on Gizmodo:

[Screenshot: a Gizmodo reader's comment on the Vevo hack]

That comment says it all. Every other day, it seems like another company is on the receiving end of a data breach. Most security experts will tell you that the data breaches we know of so far are merely the tip of the iceberg. One expert noted in 2015 that we should expect larger hacks to keep happening. And larger hacks have kept happening, leading to bigger and messier data breaches.

The following is a compilation of the worst data breaches of 2017. Companies usually discover data breaches long after the breach itself happened, so I will organize these breaches in chronological order based on when each breach was made known, not on the date it actually occurred. Just like when I selected the top IT outages of 2017, I selected the following five breaches based on their impact on the customers and on the affected businesses' reputations.

  1. Xbox 360 ISO and PSP ISO (Reported: Feb. 1, 2017 | Breach Happened: ~2015)

The forum websites Xbox 360 ISO and PSP ISO host illegal video game download files. They also house sensitive user information such as email IDs and hashed passwords. According to HaveIBeenPwned, a website that helps users check if their personal data has been compromised, Xbox 360 ISO and PSP ISO had a combined 2.5 million compromised user accounts. The attacks happened way back in September 2015 and no one discovered them until February 2017. The compromised information consisted of stolen email IDs, IP addresses of the users, and salted MD5 password hashes.

The biggest takeaway for consumers: avoid shady websites like Xbox 360 ISO. Trusting websites that host illegal material with your personal information is dangerous. On the plus side, at least both websites hashed their users' passwords. If your website is holding onto users' passwords, please implement the most basic of security measures by hashing those passwords.
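To make that concrete, here is a minimal sketch of salted password hashing using only Python's standard library. The iteration count and salt size are illustrative assumptions rather than anything from the breach reports; purpose-built schemes such as bcrypt, scrypt, or Argon2 are even better choices where available.

    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000  # illustrative work factor; tune to your hardware

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a per-user salt."""
        salt = os.urandom(16)  # a unique random salt defeats rainbow tables
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(digest, expected)  # constant-time comparison

    salt, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, stored)

Unlike the fast, ageing MD5 used by those forums, PBKDF2's configurable work factor makes large-scale cracking of a stolen hash dump far more expensive.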

  2. Deloitte (Reported: September 2017 | Breach Happened: March 2017)

In the same year that Gartner named Deloitte "number one in security consulting for the fifth consecutive year," Deloitte faced an embarrassing data breach. Deloitte, a multinational professional services firm, saw its clients' confidential emails exposed when a hacker gained unauthorized access to its company email system. The affected clients included blue-chip companies, well-known firms, and US government departments. What caused the hack? Apparently, the main administrator account required a single password and Deloitte had not instituted two-step verification for that account.

The Deloitte hack is a great example of how even the most security-conscious firms can make security missteps. We need to learn from Deloitte by identifying and eliminating all possible loopholes in our own companies' IT setups, the sooner the better.
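As an illustration of the missing control, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238), the mechanism behind most two-step verification apps. The shared secret below is a hypothetical example, not anything from the Deloitte incident.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """Derive the current one-time code from a shared Base32 secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period               # 30-second time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
        number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(number % 10 ** digits).zfill(digits)

    # An admin login that demands a password *and* this rotating code is far
    # harder to compromise than the single-password account described above.
    print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical shared secret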

  3. Equifax (Reported: September 2017 | Breach Happened: July 2017)

If you asked a regular Joe whether they could recall any major data breach from 2017, chances are they would cite this one. Equifax, one of the top three US credit agencies, suffered a breach affecting nearly 143 million consumers. Given the sensitivity of the stolen data and the number of people affected, this breach has been called "one of the worst data breaches ever." The stolen data included Social Security numbers, driver's license numbers, full names, addresses, dates of birth, and credit card numbers.

In response to the breach, Equifax set up a website called equifaxsecurity2017.com to which it directed consumers who wanted to know if their data had been stolen. Some users reported that the website did not work. And many were angry to find out that in order to use the website, they would have to agree to an arbitration clause stating that they would waive their rights to a class-action lawsuit. Some users even tried entering fake names and fake Social Security numbers, and the website's response—informing them they "may be affected by the breach"—increased skepticism about the website's validity.

The bigger your organization, the more information you hold, and if that information is sensitive, you become a bigger target for data breaches. Equifax holds a great volume of highly sensitive information, which should have led to a corresponding increase in security measures; clearly there was a gap between what should have been done and reality. Learn from Equifax and be honest when assessing your existing security measures: is there a gap between where they are and where they ought to be?

  4. Yahoo! Update (Reported: December 2016 | Updated: October 9, 2017 | Breach Happened: ~2013)

Yahoo! had already reported this breach back in December 2016, revealing that "one billion user accounts were compromised" in a 2013 hack. It turns out the original report underestimated the impact of the breach. Former CEO Marissa Mayer subsequently told Congress that the 2013 data breach had affected all three billion of the company's user accounts. In other words, every single Yahoo! user account from popular services such as email, Tumblr, Fantasy, and Flickr suffered from the breach. Even after extensive investigations, the culprits' identities remain unknown.

Yahoo! is a classic case of a company with so many interdependent services that the complexity gives hackers opportunities to exploit. Notice how Yahoo! is still unable to identify the culprit? That speaks volumes to the challenges facing companies with a wide range of systems. You cannot plug the loopholes that you don't know exist. In other words, you need to start by knowing exactly how your internal IT systems work together.
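To make that point concrete, here is a minimal sketch of mapping system dependencies and computing the "blast radius" of a compromised component. The service names and the dependency map are entirely hypothetical; the point is that you cannot answer "what else is exposed?" without such a map.

    # Hypothetical dependency map: each service -> services it can reach.
    DEPS = {
        "mail": ["auth", "storage"],
        "tumblr": ["auth", "cdn", "storage"],
        "fantasy": ["auth", "stats"],
        "flickr": ["auth", "cdn", "storage"],
        "auth": ["user-db"],
        "stats": ["user-db"],
    }

    def blast_radius(entry: str) -> set[str]:
        """Everything reachable if `entry` is compromised (depth-first walk)."""
        seen, stack = set(), [entry]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(DEPS.get(node, []))
        return seen

    # If a shared service such as "auth" falls, every system behind it is in
    # scope -- one intrusion can touch every user-facing product at once.
    print(sorted(blast_radius("auth")))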

  5. Uber (Self-Reported: November 21, 2017 | Breach Happened: ~2016)

On November 21, Uber CEO Dara Khosrowshahi published a post revealing that he had become aware of a late-2016 incident in which "two individuals outside the company had inappropriately accessed user data stored on a third-party cloud-based service" that Uber uses. This breach affected 57 million users and further damaged Uber's already faltering brand. The stolen information included names, email addresses, and phone numbers.  Approximately 600,000 of the 57 million affected users were drivers who had their license numbers accessed by hackers.

What made things worse was that Uber paid the hackers $100,000 to destroy the stolen data, which earned the company even more bad press. Uber is an ambitious startup chasing high growth, one that emphasizes scale and agility above everything else. In hindsight, it clearly needed to emphasize security as well. With consumers becoming more data-aware, security-conscious companies can gain an edge over their competitors.

Conclusion: Want to Manage Data Security? Get Clarity on Your Company's Systems First

Given the complexity of enterprise IT systems, hackers can now find more loopholes to get past your company's security measures. That makes having a clear big picture of how your systems work together a security priority. We've seen how even the biggest and most security-conscious firms (remember Deloitte?) can fall prey to data breaches, precisely because their complexity makes it much harder to identify and prevent hacks. With that in mind, consider an enterprise-level dashboard that can show you that big picture and help both your security and your productivity.

As the saying often attributed to Peter Drucker goes, "you can't manage what you can't measure." Most organizations have some way to measure security readiness. But do they have a way to measure, and make visible, how their systems work together? Knowing how your systems work together makes you better prepared to identify the root causes of potential hacks. If you can plug the gaps before they are exploited, you can get much closer to zero outages and zero breaches.

Author: TJ Simmons

TJ Simmons started his own developer firm five years ago, building solutions for professionals in telecoms and the finance industry who were overwhelmed by too many Excel spreadsheets. He’s now proficient with the automation of document generation and data extraction from varied sources.

Smelly Test Environments

Smelly Environments

Kent Beck, the inventor of Extreme Programming (XP), once wrote about the concept of "smelly code". Given that IT Environments seem to be a significant challenge for so many organizations, it got us thinking: what might "Smelly IT Environments" smell of?

In a nutshell, here is our top 8 list of things that indicate your Test Environments smell like a kipper.

  1. Use of Excel spreadsheets to manage & track your Test Environments
  2. A CMDB that only ever focuses on the Hardware Components
  3. Too many Test Environment System Instances or Components (causing overspend)
  4. Too few Test Environment System Instances or Components (causing contention)
  5. Inability to identify current and future Test environment demand or usage (no test environment bookings)
  6. Lack of End to End Test Data Integrity (causing testing to fail and grind to a halt)
  7. Inconsistent & Slow IT Environment operations (heroics and manual intervention)
  8. Manual information gathering & reporting (typically the use of PowerPoint)

Ref: Edited from Original Source smelly-environments

Full Scaled versus Scaled Down

The Art of the Scaled-Down Performance Test Environment

The Challenge

A challenge organizations always face is whether to fund the cost of a production-sized test environment or to look for more cost-effective alternatives.

It is a decision that can become somewhat "heated" between Quality Practitioners and Corporate "Realists".

By way of "distilling" the argument, I thought I'd summarize the upsides and downsides of "Fully Scaled" versus "Scaled Down".

Full Sized Test Environment

  Pros:
  • Resembles production
  • Allows for production loads
  • Same code (similar config)
  • Production-like insights

  Cons:
  • Cost prohibitive
  • Takes too long to provision
  • Even a full-sized environment won't be 100% production-like; there are too many subtle differences, such as network componentry

Scaled Down Test Environment

  Pros:
  • Faster to set up
  • Much cheaper
  • Same code (different config)
  • Some insights

  Cons:
  • Unable to exercise and/or detect all issues
  • Test data volumes have to be compromised as well
  • Assumes the application and its components scale "relatively" linearly, both vertically & horizontally; this assumption is often very wrong, resulting in skewed results

Best Practice Tips

Despite the two alternatives, organizations will typically gravitate to the latter due to budget constraints. With this in mind, here are four best-practice tips that you might apply to reduce your risks and ensure positive & qualitative outcomes when building a "Performance Test Environment".

  • Acknowledge the difference – And ensure risk is understood by all decision makers.
  • Keep the test environment consistent – Consistency will allow you to baseline & trend.
  • Performance Model – Bridge the gap & supplement your results with modelling & extrapolation, understanding how application behaviour changes as you scale vertically or horizontally (see the sketch after this list).
  • Leverage Cloud – Use a public cloud to build a temporary Performance Test Environment.
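To illustrate the modelling tip above, here is a minimal sketch of extrapolating scaled-down results to production size. The throughput figures, cluster sizes, and the straight-line model are hypothetical assumptions; as the comparison above warns, real applications rarely scale linearly, so treat any extrapolation as an estimate to validate, not a guarantee.

    import numpy as np

    # Hypothetical measurements from the scaled-down environment:
    # cluster size (nodes) vs. peak throughput (requests/sec) observed.
    nodes = np.array([1, 2, 3, 4])
    throughput = np.array([950, 1800, 2500, 3100])

    # Fit a straight line -- the "linear scaling" assumption.
    slope, intercept = np.polyfit(nodes, throughput, 1)

    # Extrapolate to a hypothetical 16-node production cluster.
    production_nodes = 16
    estimate = slope * production_nodes + intercept
    print(f"Naive linear estimate at {production_nodes} nodes: {estimate:.0f} req/s")

    # The measured points already show diminishing returns per node
    # (950 -> 1800 -> 2500 -> 3100), so expect the real production figure
    # to fall short of the straight-line estimate.

Comparing such estimates against each fresh baseline run (tip two) shows how quickly the linear assumption drifts from reality.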

Summary

It is dangerous to recommend a "scaled down environment", as it is always a compromise, and "when things go wrong" they often go wrong in "fantastic" style. However, the reality is that most organizations can't afford to spend millions of dollars supporting a single application or project. As such, scaling down may be the only feasible option; if that is the case, take the tips above into consideration.

Whichever option you choose, remember that no matter how "production-like" your Test Environments are, there are always differences, and you should be prepared to continually understand these deltas and refine accordingly. Ultimately it is as much an Art as it is a Science.

About the Author

Jane Temov is a Senior Environment Architect at Enov8. She has over 15 years of experience working across the IT industry in Europe and APAC, and specializes in Technical Testing, Test Environment Management, Test Data, and Release Operations.