A new way of thinking about security incident prevention and response, called Assumption of Breach, is changing how security professionals approach their work. Prior to assumption of breach, the popular mindset among security professionals was to focus on preventing security breaches from occurring. With assumption of breach, security professionals adopt the mindset that one or more breaches may already have occurred in their organizations, whether those breaches have been discovered or not.
In my opinion, this is a more realistic philosophy than prior ways of thinking. Adversaries wield advanced tools and techniques, and are often able to compromise even networks with advanced defenses. Assumption of breach also requires humility on the part of security managers and executives, who might otherwise believe that their networks are impenetrable.
– excerpt from CISSP Guide to Security Essentials, 2nd edition
For more information on the topic of Assumption of Breach:
http://searchsecurity.techtarget.com/tip/Assumption-of-breach-How-a-new-mindset-can-help-protect-critical-data (free registration required)
I am visiting Reno, Nevada after quite a long absence. I’m here to speak at a professional event on the topic of human factors and data security (in other words, why we continue to build insecure systems).
My IT career started here in Reno, with on-campus jobs in computing (operations and software development), and later in local government, banking, and casino gaming. Each job built on the last, and I gained positions with greater responsibility, creativity, and rewards of various sorts.
I buried my young son in Reno – it seems like many lifetimes ago. He was my first stop. Time is a great healer – you’ll have to trust me on this one, if you have recently suffered a big loss.
I looked up a couple of long-time friends, but waited until the last minute. They’re probably busy with their own lives today.
Done with my coffee stop and time to check in to my hotel. My talk is tonight, and then I’m back on the road tomorrow with other stops in the Pacific Northwest.
Business continuity and disaster recovery planning professionals rely on well-known metrics that are used to drive planning of emergency operations procedures and continuity of operations procedures. These metrics are:
- Maximum Tolerable Downtime (MTD) – this is a time value, determined by the organization, that represents the greatest period of time that it can tolerate the outage of a critical process or system without sustaining permanent damage to its ongoing viability. The units of measure are typically days, but can be smaller (hours, minutes) or larger (weeks, months).
- Recovery Point Objective (RPO) – this is a time value that represents the maximum potential data loss in a disaster situation. For example, if an organization backs up data for a key business process once per day, the RPO would be 24 hours. This should not be confused with recovery time objective.
- Recovery Time Objective (RTO) – this is a time value that represents the maximum period of time that a business process or system would be incapacitated in the event of a disaster. This is largely independent of recovery point objective, which is dependent on facilities that replicate key business data to another location, preserving it in case the primary location suffers a disaster that damages business data.
- Recovery Consistency Objective (RCO) – expressed as a percentage, this represents the maximum loss of data consistency during a disaster. In complex, distributed systems, it may not be possible to perfectly synchronize all business records. When a disaster occurs, often there is some inconsistency found on a recovery site where some data is “fresher” than other data. Different organizations and industries will have varying tolerances for data consistency in a disaster situation.
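These metrics are related in a simple planning check: a plan is only viable if the system can be recovered (RTO) before the outage becomes intolerable (MTD). The sketch below illustrates that relationship; the field names, units, and sample values are my own illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class ContinuityMetrics:
    """Recovery targets for one business process, all in hours."""
    mtd_hours: float   # Maximum Tolerable Downtime
    rto_hours: float   # Recovery Time Objective
    rpo_hours: float   # Recovery Point Objective (maximum data loss)

    def is_consistent(self) -> bool:
        # The plan is only viable if recovery completes (RTO)
        # before the outage becomes intolerable (MTD).
        return self.rto_hours <= self.mtd_hours

# Example: daily backups imply a 24-hour RPO.
payroll = ContinuityMetrics(mtd_hours=72, rto_hours=48, rpo_hours=24)
print(payroll.is_consistent())  # True: a 48-hour recovery fits a 72-hour MTD
```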
In my research on the topic of business continuity planning and disaster recovery planning, I have not come across a standard metric that represents the capacity for a recovery system to process business transactions, as compared to the primary system. In professional dealings I have encountered this topic many times.
I therefore propose a new metric to establish and communicate a recovery objective that represents the capacity of a recovery system:
- Recovery Capacity Objective (RCapO) – expressed as a percentage, this represents the capacity of a recovery process or system as compared to the primary process or system.
Arguments for this metric:
- Awareness. The question of recovery system capacity is not consistently addressed within organizations or communicated to the users of a process or system.
- Consistency. A standard metric gives organizations a common language for specifying and comparing recovery system capacity.
- Planning. Users can reasonably anticipate business conditions should a process or system suffer a disaster that triggers emergency response procedures.
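As proposed, RCapO is a simple ratio. A minimal sketch of the calculation follows; the function name and the use of transactions per second as the capacity unit are my own assumptions for illustration.

```python
def recovery_capacity_objective(recovery_tps: float, primary_tps: float) -> float:
    """RCapO: recovery-system capacity as a percentage of primary capacity.

    Capacity is measured here in transactions per second (TPS), but any
    comparable throughput unit works as long as both sides use it.
    """
    if primary_tps <= 0:
        raise ValueError("primary capacity must be positive")
    return 100.0 * recovery_tps / primary_tps

# Example: a recovery site that processes 600 TPS versus 1,000 TPS at the
# primary site yields an RCapO of 60%.
print(recovery_capacity_objective(600, 1000))  # 60.0
```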
Neiman Marcus is the victim of a security breach. Neiman Marcus provided a statement to journalist Brian Krebs:
Neiman Marcus was informed by our credit card processor in mid-December of potentially unauthorised payment card activity that occurred following customer purchases at our Neiman Marcus Group stores.
We informed federal law enforcement agencies and are working actively with the U.S. Secret Service, the payment brands, our credit card processor, a leading investigations, intelligence and risk management firm, and a leading forensic firm to investigate the situation. On January 1st, the forensics firm discovered evidence that the company was the victim of a criminal cyber-security intrusion and that some customers’ cards were possibly compromised as a result.
We have begun to contain the intrusion and have taken significant steps to further enhance information security.
The security of our customers’ information is always a priority and we sincerely regret any inconvenience. We are taking steps, where possible, to notify customers whose cards we know were used fraudulently after making a purchase at our store.
I want to focus on one of Neiman Marcus’ statements:
We have … taken significant steps to further enhance information security.
Why do companies wait for a disaster to occur before making improvements that could have prevented the incident – saving the organization and its customers untold hours of lost productivity? Had Neiman Marcus taken these steps earlier, the breach might not have occurred. Or so we think.
Why do organizations wait until a security incident occurs before taking more aggressive steps to protect information?
- They don’t think it will happen to them. Often, an organization eyes a peer that suffered a breach and thinks, their security and operations are sloppy and they had it coming. But those in an organization who believe their own security and operations are not sloppy are probably not familiar with them. In most organizations, security and systems are just barely good enough to get by. That’s human nature.
- Security costs too much. To them I say, “If you think prevention is expensive, have you priced incident response lately?”
- We’ll fix things later. Sure – only if someone is holding it over your head (like a payment processor pushing a merchant or service provider towards PCI compliance). That particular form of “later” never comes. Kicking the can down the road doesn’t solve the problem.
It is human nature to believe that another’s misfortunes can’t happen to us. Until they do.
At the time of this writing, the Target breach is in the news, and the estimated number of affected customers has jumped from 40 million to as high as 110 million.
More recently, we’re now hearing about a breach of Neiman Marcus.
Of course, another retailer will be the next victim. It is not so important to know who that will be, but why.
Retailers are like herds of gazelles on the African plain, and cybercriminals are the lions who devour them.
As lions stalk their prey, sometimes they choose their victim early and target them. At other times, lions run into the herd and find a target of opportunity: one that is a little slower than the rest, or one that makes a mistake and becomes more vulnerable. The slow, sick ones are easy targets, but the healthy, fatter ones are more rewarding targets.
As long as there are lions and gazelles, there will always be victims.
As long as there are retailers that store, process, or transmit valuable data, there will always be cybercriminals that attempt to steal that data.
The information security profession, and cryptography in particular, has passed into a new era in which credible evidence has surfaced revealing that several world governments have played a role in the deliberate weakening of cryptosystems, to facilitate domestic and international espionage. Prior to these revelations, information security professionals could place their trust in national standards bodies, major encryption product vendors, and government organizations. This trust has been broken and will not be easily mended.
A significant challenge in both public and private sectors will be the establishment of new ways to measure the validity and integrity of cryptosystems. Or perhaps the answer will lie in novel uses of cryptography that make the compromise of a cryptosystem more difficult than before. The collective discussion on this topic will run its course over several years, resulting in the development of new validation platforms as well as improved application of cryptosystems.
– excerpt from the cryptography chapter of a college textbook still in development
Computer systems, databases, and storage and retrieval systems contain information that has some monetary or intrinsic value. After all, the organization that has acquired and set up the system has expended valuable resources to establish and operate the system. After undergoing this effort, one would think that the organization would wish to control who can access the information that it has collected and stored.
Access controls are used to control access to information and functions. In simple terms, the steps undertaken are something like this:
- Reliably identify the subject (e.g., the person, program, or system)
- Find out what object (e.g., information or function) the subject wishes to access
- Determine whether the subject is allowed to access the object
- Permit (or deny) the subject’s access to the object
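The steps above can be sketched in a few lines of code. This sketch assumes the subject has already been reliably identified (step 1); the names and the access control list structure below are illustrative assumptions, not a real product’s API.

```python
# (subject, object) -> set of permitted actions; a toy access control list.
ACL = {
    ("alice", "payroll_db"): {"read"},
    ("bob", "payroll_db"): {"read", "write"},
}

def access_request(subject: str, obj: str, action: str) -> bool:
    """Given an identified subject and a requested object and action,
    consult the access rules and permit or deny the request."""
    permitted = ACL.get((subject, obj), set())
    return action in permitted

print(access_request("alice", "payroll_db", "read"))   # True: rule permits it
print(access_request("alice", "payroll_db", "write"))  # False: no write rule
```

A real system replaces the dictionary lookup with policy engines, directory services, and audit logging, but the decision at its core is the same: match the subject-object pair against a rule and permit or deny.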
The actual practice of access control is far more complex than these four steps. This is due primarily to the high-speed, automated, complex, and distributed nature of information systems. Even in simple environments, information often exists in many forms and locations, and yet these systems must somehow interact and quickly retrieve and render the desired information, without violating any access rules that are in place. These same systems must also be able to quickly distinguish “friendly” accesses from hostile and unfriendly attempts to access—or even alter—this same information.
The success of an access control system is completely dependent upon the effectiveness of the business processes that support it. User access provisioning, review, and revocation are key activities that ensure only authorized persons may have access to information and functions.
– excerpt from an upcoming textbook on information systems security