
Backups – the apparently forgotten craft

At the dawn of my career, I worked in two different old-school computer mainframe operations organizations. We spent a considerable amount of time (I’m estimating 20%) doing backups. Sure, computers were a lot slower then, and we had a lot less data.

We did backups for a reason: things happen. All kinds of things, like hardware failures, software bugs, accidents, mistakes, small and large disasters, and more. I can recall numerous times when we had to recover individual files, complete databases, and ground-up (“bare metal”) recoveries to get things going again.

We didn’t wait for these scenarios to occur to see whether we could do any of these types of restores. We practiced, regularly. In one mainframe shop early in my career, we completely restored our OS every Sunday night. Okay, this was mainly a storage defragmentation measure. However, we were still doing a bare metal restoration, precisely what we would do if our data center burned down and we had to recover our data and applications on a different system.
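By way of illustration, here is a minimal sketch of how that same discipline can be automated today: restore the latest backup into scratch space on a schedule and verify every file against a checksum manifest, long before you need the restore in anger. The paths, the manifest file, and the plain copy standing in for the real restore step are all assumptions for the sketch, not how any particular backup product works.

```python
import hashlib
import json
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def practice_restore(backup_dir: Path, scratch_dir: Path) -> bool:
    """Restore a backup into scratch space and verify each file against the
    checksum manifest written at backup time (manifest.json is an assumption)."""
    if scratch_dir.exists():
        shutil.rmtree(scratch_dir)
    shutil.copytree(backup_dir, scratch_dir)  # stand-in for your real restore tool
    manifest = json.loads((scratch_dir / "manifest.json").read_text())
    ok = True
    for relative_path, expected in manifest.items():
        restored = scratch_dir / relative_path
        if not restored.exists() or sha256(restored) != expected:
            print(f"RESTORE CHECK FAILED: {relative_path}")
            ok = False
    print("Restore test passed." if ok else "Restore test FAILED.")
    return ok

# Hypothetical paths; schedule this weekly, the way we once practiced
# bare metal restores every Sunday night.
practice_restore(Path("/backups/latest"), Path("/tmp/restore-test"))
```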

Was this exciting work? Certainly not.

Interesting? Not in the least.

Essential? Absolutely.

So what am I getting at here? Am I merely reminiscing about the good old days? Hardly.

I’m talking about ransomware. At times, it’s difficult for me to sympathize with the organizations that are victims of ransomware. It’s hard for me to rationalize why an organization would even remotely consider paying a ransom, particularly when the FBI reported that only about half of organizations that paid were able to decrypt their data (sorry, I cannot find the link to that advisory; I’ll keep looking and update this article when I find it).

A survey by Kaspersky revealed some findings that shocked me:

  • 37 percent of respondents were unable to accurately define ransomware, let alone understand the damage it can deliver.
  • Of those survey respondents who suffered a ransomware attack, 40 percent said they would not know the immediate steps to take in response.

I’m amazed by these results. Do IT organizations no longer understand IT operations fundamentals that have been around for decades? I hate to sound harsh, but if this is the case, organizations deserve the consequences they experience when ransomware (or human error, or software bugs, etc.) strikes.

That said, I am acutely aware that it can be difficult to find good IT help these days. However, if an organization is crippled by ransomware, it has already gone “all-in” with information technology but has neglected to implement common safeguards like data backup.


Blameshifting is not a good defense strategy

Today, in my newsfeed, two stories about breaches caught my attention.

In the first story, a class-action lawsuit is filed against Waste Management for failing to protect PII in the trash.

In the second story, a South African port operator declares force majeure as a result of a security breach, in essence asserting that nothing could have been done to prevent it.

In both of these situations, the victim organizations attempt to shift the blame for the failure to protect sensitive information onto others. Those of us in the profession have seen victim companies do this in the past, and it generally does not work out well.

Back to the Waste Management story. Here, I argue that when an organization places something in a dumpster, it should take steps to ensure that no recoverable information is discarded. Unless the business arrangement with Waste Management is more akin to secure document shredding by companies like Shred-It, I assert that the garbage company has no obligation to protect anything of value placed in the trash.

On to the South African port operator. Declaring force majeure as a result of a security breach is an interesting tactic. In my opinion (note that I am not a lawyer and have had no formal legal education), this strategy will backfire. There are numerous examples of organizations that have successfully defended themselves against attacks like this. Granted, these attacks are a global plague, but defense – while challenging – is absolutely possible.

The Breaches Will Continue

As I write this, it’s been one day since word of the latest LinkedIn breach hit the news. To summarize, about 92% of LinkedIn users’ information was leaked via LinkedIn’s API. LinkedIn officially denies this is a breach, characterizing it instead as a data scrape that violated the API’s terms of use. Interesting twist of terms. This reminds me of a former President who explained, “It depends upon what the meaning of ‘is’ is.”

During my six years as a strategic cybersecurity consultant, I learned that most organizations do not take cybersecurity seriously. Breaches are things that happen to other companies, those with larger and more valuable troves of data.

Organizations, up to and including boards of directors, are locked in normalcy bias. No breach has occurred (that they are aware of), and therefore no breach will occur in the future. Normalcy bias is also why most citizens fail to prepare for emergencies such as extended power outages, fires, floods, hurricanes, tornadoes, identity theft, serious illness, and so many other calamities. I’d be lying if I said that I’m immune to this: while we’re well prepared for some types of events, our preparedness could be far better in certain areas.

A fellow security leader once told me, “Cybersecurity is not important until it is.” Like in our personal lives, we don’t implement the safeguards until after being bitten. Whether it’s security cameras, better locks, bars on windows, bear spray, or better cyber defenses, it’s our nature to believe we’re not a target and that such safeguards are unnecessary. Until they are.

FPRM is the new TPRM

The recent Accellion-related breaches (a recent example here) are shining a light not just on third-party risk management (TPRM), but also on fourth-party risk management (FPRM).

In a healthy TPRM program, when we bring on a new service provider, we assess its security (and perhaps privacy) program to see whether its posture is something we can live with. I see a new set of questions we should be asking our third parties (a rough tracking sketch follows the list), including:

  • What third-party service providers do you send our data to?
  • What service providers do you use to facilitate data transfer and other aspects of your service?

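As a rough sketch of how the answers could be tracked (the data model and vendor names below are hypothetical, not features of any particular TPRM product), a simple register of each third party’s own subprocessors makes it easy to flag fourth parties that handle our data but have never been assessed:

```python
from dataclasses import dataclass, field

@dataclass
class Vendor:
    name: str
    assessed: bool = False
    receives_our_data: bool = False
    subprocessors: list["Vendor"] = field(default_factory=list)  # our fourth parties

def unassessed_fourth_parties(third_parties: list[Vendor]) -> list[str]:
    """List fourth parties that handle our data but have never been assessed."""
    findings = []
    for third in third_parties:
        for fourth in third.subprocessors:
            if fourth.receives_our_data and not fourth.assessed:
                findings.append(f"{third.name} -> {fourth.name}")
    return findings

# Hypothetical example: a file-transfer appliance vendor sitting behind a vendor we did assess.
transfer_appliance = Vendor("FileTransferCo", assessed=False, receives_our_data=True)
payroll = Vendor("PayrollCo", assessed=True, receives_our_data=True,
                 subprocessors=[transfer_appliance])

print(unassessed_fourth_parties([payroll]))  # ['PayrollCo -> FileTransferCo']
```
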
TPRM managers – these recent incidents should be sending us back into our methodologies to ensure we don’t have blind spots.

That is all.

Prior password hygiene comes home to roost

This week I received a notice from https://haveibeenpwned.com/ suggesting that my user account from last.fm had been compromised. According to Have I Been Pwned, the breach was fairly significant: email addresses, passwords, usernames, and website activity were among the compromised data.

Wow. Last.fm. I hadn’t even thought of that service in years. A quick check at Wikipedia shows they are still in business; I had simply forgotten about last.fm, probably because SomaFM.com and Pandora had garnered my music listening attention.

I looked in my password vault to see what my password was.  I found there was no entry for last.fm. This is especially troubling, since there is a possibility that the password I used for last.fm is used elsewhere (more on that in a minute).  I still have one more password vault to check, but I don’t have physical access to that until tomorrow. Hopefully I’ll find an entry.

In any event, I’ve changed my password at last.fm.  But not knowing what my prior password was is going to gnaw at me for a while.

Occurrences like this are another reason why we should all use unique, hard-to-guess passwords for each website. Then, if any one site is compromised and that compromise reveals your password, you can be confident that no other sites are affected.
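For illustration, here is a minimal sketch (in Python, using only the standard library’s secrets module) of generating a distinct, hard-to-guess password per site, which is essentially what a password manager does for you with storage and autofill on top. The length and character set below are arbitrary assumptions, not recommendations drawn from any standard.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per site, so one site's breach can't expose another's.
sites = ["last.fm", "example.com"]
vault = {site: generate_password() for site in sites}

for site, password in vault.items():
    print(f"{site}: {password}")
```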

Security: Not a Priority for Retail Organizations

Several years ago, Visa announced a “liability shift” wherein merchants would be directly liable for credit card fraud on magstripe card transactions. The deadline for this came and went in October 2015, and many merchants still didn’t have chip reader terminals. But to be fair to retailers, most of the credit/debit cards in my wallet are magstripe only, so it’s not ONLY retailers who are dragging their feet.

My employment and consulting background over the past dozen years has shown me plainly that retail organizations want to have as little to do with security as possible. Many, in fact, resist even being compliant with required standards like PCI DSS. For any of you unfamiliar with security and compliance: in our industry, it is well understood that compliance does not equal security – not even close.

I saw an article today that says it all. A key statement read, “There is a report that over the holidays several retailers disabled the EMV (Chip and Pin) functionality of their card readers. The reason for this? They did not want to deal with the extra time it takes for a transaction. With a standard card swipe (mag-swipe) you are ready to put in your pin and pay in about three seconds. With EMV this is extended to roughly 10 seconds.” Based on my personal and professional experience with several retail organizations, I am not surprised by this. Most retailers just don’t want to have to do security at all. You, shoppers, are the ones who pay the price for it.

IT Lacks Engineering Discipline and Rigor

Every week we read the news about new, spectacular security breaches. This has been going on for years, and sometimes I wonder if there are any organizations left that have not been breached.

Why are breaches occurring at such a clip? Through decades of experience in IT and data security, I believe I have at least a part of the answer. But first, I want to shift our focus to a different discipline, that of civil engineering.

Civil engineers design and build bridges, buildings, tunnels, and dams, as well as many other things. Civil engineers who design these and other structures have college degrees and hold a Professional Engineer license. In their design work, they carefully examine every component, calculate the forces that will act upon it, and size it to withstand expected forces, with a generous margin for error to cover unexpected circumstances. Their designs undergo reviews before their plans can be called complete. Inspectors carefully examine and approve plans, and they examine every phase of site preparation and construction. The finished product is inspected before it may be used. Any defects found along the way, from drawings to final inspection, result in a halt in the project and changes in design or implementation. The result: remarkably reliable and long-lasting structures that, when maintained properly, provide decades of dependable use. This practice has been in use for a century or two and has held up under scrutiny. We rarely hear of failures of bridges, dams, and so on, because the system of qualifying and licensing designers and builders, together with design and construction inspections, works. It’s about quality and reliability, and it shows.

Information technology is nothing like civil engineering. Very few organizations employ formal design with design review, or inspections of components during the development of networks, systems, and applications. The result: systems that lack proper functionality, resilience, and security. I will explore this further.

When organizations set out to implement new IT systems – whether networks, operating systems, database management systems, or applications – they do so with little formality of design, and rarely with any level of design or implementation review. The result is “brittle” IT systems that barely work. In over thirty years of IT, this is the norm I have observed in over a dozen organizations across several industries, including banking and financial services.

In case you think I’m pontificating from my ivory tower, I’m among the guilty here. Most of my IT career has been in organizations with some ITIL processes like change management, but utterly lacking in the level of engineering rigor seen in civil engineering and other engineering disciplines.  Is it any wonder, then, when we hear news of IT project failures and breaches?

Some of you will argue that IT does not require the same level of discipline as civil or aeronautical engineering, mostly because lives are not directly on the line as they are with bridges and airplanes. Fine. But be prepared to accept productivity losses from code defects and unscheduled downtime, as well as security breaches. If security and reliability are not part of the design, then the resulting product will be secure and reliable only by accident, not by intention.

The security breaches continue

As of Tuesday, September 2, 2014, Home Depot was the latest merchant to announce a potential security breach.

These days, this usually means intruders have stolen credit card numbers from its POS (point of sale) systems. The details have yet to be revealed.

If there is any silver lining for Home Depot, it’s the likelihood that another large merchant will soon announce its own breach. But one thing that’s going to be interesting with Home Depot is how they handle the breach, and whether their CEO, CIO, and CISO/CSO (if they have one) manage to keep their jobs. Recall that Target’s CEO and CIO lost their jobs over the late 2013 Target breach.

Merchants are in trouble. Aging technologies, some related to the continued use of magnetic stripe credit cards, are making it easier for intruders to steal credit card numbers from merchant POS systems. Chip-and-PIN cards are coming (they’ve been in Europe for years), but they will not make breaches like this a thing of the past; rather, organized crime groups, which have made a lot of money from recent break-ins, are developing more advanced technologies like the memory-scraping malware allegedly used in the Target breach. You can be sure that criminal organizations will continue to improve their malware.

A promising development is the practice of encrypting card numbers in the hardware of the card reader, instead of in the POS system software. But even this is not wholly secure: companies that manufacture this hardware will themselves be attacked by intruders hoping to steal the secrets of this encryption and exploit them. In case this sounds like science fiction, remember the RSA breach, which was very similar.
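To illustrate the idea only (this is not any vendor’s implementation; real card readers use payment-industry key management such as DUKPT inside tamper-resistant hardware), here is a hypothetical Python sketch in which the “reader” encrypts the card number before the POS software ever touches it, so memory-scraping malware on the POS finds nothing usable. It assumes the third-party cryptography package is installed.

```python
from cryptography.fernet import Fernet

# Key provisioned into the reader at manufacture/key-injection time
# (hypothetical; in practice keys never live in POS software).
reader_key = Fernet.generate_key()
reader = Fernet(reader_key)

def swipe(pan: str) -> bytes:
    """The 'reader': encrypts the card number before handing it to the POS."""
    return reader.encrypt(pan.encode())

def pos_handle_transaction(ciphertext: bytes) -> None:
    """The 'POS': only ever sees ciphertext, so scraping its memory yields
    nothing that can be resold or replayed."""
    print("POS forwarding opaque blob to processor:", ciphertext[:20], b"...")

pos_handle_transaction(swipe("4111111111111111"))
```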

The cat-and-mouse game continues.

Internal Network Access: We’re Doing It Wrong

A fundamental design flaw in network design and access management gives malware an open door into organizations.

Run the information technology clock back to the early 1980s, when universities and businesses began implementing local area networks. We connected ThinNet or ThickNet cabling to our servers and workstations and built the first local area networks, using a number of framing technologies – primarily Ethernet.

By design, Ethernet is a shared-medium technology, which means that all stations on a local area network are able to communicate freely with one another. Whether devices called “hubs” were used or stations were strung together like Christmas tree lights, the result was the same: a completely open network with no access restrictions at the network level.

Fast forward a few years, when network switches began to replace hubs. Networks were a little more efficient, but the access model was unchanged – and remains so to this day. The bottom line:

Every workstation has the ability to communicate with every other workstation on all protocols.

This is wrong. This principle of open internal networks goes against the grain of the most important access control principle: deny access except when explicitly required. With today’s internal networks, there is no denial at all!

What I’m not talking about here is the junction between workstation networks and data center networks. Many organizations have introduced access control, primarily in the form of firewalls, and less often in the form of user-level authentication, so that internal data centers and other server networks are no longer a part of the open workstation network. That represents real progress, although many organizations have not yet made this step. But this is not the central point of this article, so let’s get back to it.

There are two reasons why today’s internal networks should not be wide open like most are now. The first is that an open network facilitates internal resource sharing, something most organizations prohibit by policy. For instance, users can set up file shares and share their directly connected printers with other users. This is not a great idea because those workstations become non-sanctioned resources, contributing to the shadow-IT problem.

The second, and more serious, objection to open internal networks is that they facilitate the lateral movement of malware and intruders. For fifteen years or more, tens of thousands of organizations have been compromised by malware that self-propagates through internal networks. Worms such as Code Red, Nimda, Slammer, and Blaster scan internal networks to find further opportunities to infect internal systems. Attackers who successfully install RATs (remote access Trojans) on victim computers can scan the local network to enumerate hosts and select additional targets. Today’s internal networks do nothing to stop these techniques.

The model of wide-open access needs to be inverted, so that the following rules of network access are implemented:

  1. Workstations have no network access to one another.
  2. Workstations have access ONLY to servers and services as required.

This should be the new default; this precisely follows the access control principle of deny all except that which is specifically required.
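As a minimal sketch of what deny-by-default looks like (hypothetical addresses and rules, and a toy policy checker in Python rather than a real switch ACL or host firewall), every flow is refused unless an explicit allow rule covers it:

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow rules: (source network, destination network, destination port)
ALLOW_RULES = [
    (ip_network("10.10.0.0/16"), ip_network("10.20.5.10/32"), 443),  # workstations -> intranet app
    (ip_network("10.10.0.0/16"), ip_network("10.20.5.53/32"), 53),   # workstations -> DNS
]

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: permit a flow only if an explicit rule matches."""
    for src_net, dst_net, allowed_port in ALLOW_RULES:
        if ip_address(src) in src_net and ip_address(dst) in dst_net and port == allowed_port:
            return True
    return False

# Workstation-to-server traffic that is explicitly required: allowed.
print(is_allowed("10.10.3.7", "10.20.5.10", 443))  # True
# Workstation-to-workstation traffic: denied by default.
print(is_allowed("10.10.3.7", "10.10.4.9", 445))   # False
```

Wherever the policy is enforced, the important part is the ending: anything not explicitly required falls through to deny.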

Twenty years ago, this would have meant that all workstation traffic would need to traverse firewalls that would make pass or no-pass decisions. However, in my opinion, network switches themselves are the right place to enact this type of access control.

Why wait for a security breach to improve security?

Neiman Marcus is the victim of a security breach. Neiman Marcus provided a statement to journalist Brian Krebs:

Neiman Marcus was informed by our credit card processor in mid-December of potentially unauthorised payment card activity that occurred following customer purchases at our Neiman Marcus Group stores.

We informed federal law enforcement agencies and are working actively with the U.S. Secret Service, the payment brands, our credit card processor, a leading investigations, intelligence and risk management firm, and a leading forensic firm to investigate the situation. On January 1st, the forensics firm discovered evidence that the company was the victim of a criminal cyber-security intrusion and that some customers’ cards were possibly compromised as a result.

We have begun to contain the intrusion and have taken significant steps to further enhance information security.

The security of our customers’ information is always a priority and we sincerely regret any inconvenience. We are taking steps, where possible, to notify customers whose cards we know were used fraudulently after making a purchase at our store.

I want to focus on one of Neiman Marcus’ statements:

We have … taken significant steps to further enhance information security.

Why do companies wait for a disaster to occur before making improvements that could have prevented the incident – saving the organization and its customers untold hours of lost productivity? Had Neiman Marcus taken these steps earlier,  the breach might not have occurred.  Or so we think.

Why do organizations wait until a security incident occurs before taking more aggressive steps to protect information?

  1. They don’t think it will happen to them. Often, an organization eyes a peer that suffered a breach and thinks, “their security and operations are sloppy and they had it coming.” But alas, those in an organization who think their own security and operations are not sloppy are probably not familiar with their security and operations. In most organizations, security and systems are just barely good enough to get by. That’s human nature.
  2. Security costs too much. To them I say, “If you think prevention is expensive, have you priced incident response lately?”
  3. We’ll fix things later. Sure – only if someone is holding it over your head (like a payment processor pushing a merchant or service provider towards PCI compliance). That particular form of “later” never comes. Kicking the can down the road doesn’t solve the problem.

It is human nature to believe that another’s misfortune can’t happen to us. Until it does.