Category Archives: Risks

What I Was Doing On 9/11/2001

In 2001, I was the security strategist for a national wireless telecommunications company. I usually awoke early to read the news online, and on September 11 I was in my home office shortly after 5:00am Pacific Time. I was perusing corporate e-mail and browsing the news when I saw a story about a plane crashing into a building in New York.

I had a television in the home office, and I reached over to turn it on. I tuned to CNN and watched as smoke poured from one of the two towers while two commentators speculated about what could be happening. As I watched, the second airliner emerged from the background and crashed into the second tower.

Like many, I thought I was watching a video loop of the first crash, but soon realized I was watching live TV.

I e-mailed and IM’d members of our national security team to alert them to these developments. Before 6am Pacific Time, we had our national emergency conference bridge up and running (and it would stay up all day). Very soon we understood the gravity of the situation, and wondered what would happen next. We were a nation under attack and needed to take steps to protect our business. Within minutes we had initiated a nationwide lockdown (I cannot divulge details on what that means), and over the next several hours we took more steps to protect the company.


Since I was a teenager, I have had a particular interest in World War Two. My father was a bombardier instructor, and his business partner and best friend was a highly decorated air ace.


We are under attack and we are at war, I thought to myself early that morning, and while I don’t remember specifics of our national conference bridge, I’m certain that I or someone else on the bridge said as much. We all believed that, like the sneak attack on Pearl Harbor on December 7, 1941, the 9/11 attacks could be the opening salvos of a much larger plan. Thankfully that was not the case. But in the moment, there was no way to know for sure.

For many days afterward, I, and probably a lot of Americans, expected more attacks to come. The fact that they didn’t was both a surprise and a relief.

Will 2016 Be The Year Of The Board?

This year has exploded out of the gate, starting on January 4 (the first business day of the year) with a flurry of activity. Sure, some of this is just new budget money becoming available. However, I’m seeing a lot of organizations in my part of the world (California, Oregon, Washington, Idaho, Montana, Alberta, British Columbia, and Alaska) asking for help with communicating to executive management and the board of directors.

It’s about time.

Really, though, this makes sense.  Boards of directors aren’t interested in fads in business management. They rely upon their tried-and-true methods of managing businesses through board meetings, audit and risk committees, and meetings with executives. Until recently, board members perceived information security as a tactical matter not requiring their attention. However, with so many organizations suffering from colossal breaches, board members are starting to ask questions, which is a step in the right direction.

Let me say this again: board members asking questions is a big sign of progress. And it mostly doesn’t matter what those questions are. It’s a sign they are thinking about information security, perhaps for the first time. And they’re bold enough to ask, even if they fear they are asking stupid questions.

The National Association of Corporate Directors (NACD) has an excellent publication on board-level attention to information security, called the Cyber Risk Oversight Handbook. Last I checked, a soft copy is free. Whether you are a board member or an infosec staffer, I highly recommend adding it to your reading list in early 2016.

In air travel and data security, there are no guarantees of absolute safety

The recent tragic GermanWings crash has illustrated an important point: even the best designed safety systems can be defeated in scenarios where a trusted individual decides to go rogue.

In the case of the GermanWings crash, the co-pilot was able to lock the pilot out of the cockpit. The cockpit door locking mechanism is designed to enable a trusted individual inside the cockpit to prevent an unwanted person from entering.

Similar safeguards exist in information systems. However, these safeguards only work when those at the controls act in good faith. If they go rogue, there is little, if anything, that can be done to slow or stop their actions. Any administrator with responsibilities and privileges for maintaining software, operating systems, databases, or networks has near-absolute control over those objects. If such an administrator decides to go rogue, at best the security mechanisms will record the malevolent actions, just as the cockpit voice recorder documented the pilot’s attempts to re-enter the cockpit, as well as the co-pilot’s breathing, indicating he was still alive.

Remember that technology – even protective controls – cannot know the intent of the operator. Technology, the amplifier of a person’s will, blindly obeys.

Don’t let it happen to you

This is a time of year when we reflect on our personal and professional lives, and think about the coming years and what we want to accomplish. I’ve been thinking about this over the past couple of days… yesterday, an important news story about the 2013 Target security breach was published. The article states that Judge Paul A. Magnuson of the Minnesota District Court has ruled that Target was negligent in the massive 2013 holiday shopping season data breach. As such, banks and other financial institutions can pursue compensation via class-action lawsuits. Judge Magnuson said, “Although the third-party hackers’ activities caused harm, Target played a key role in allowing the harm to occur.” I have provided a link to the article at the end of this message.

Clearly, this is really bad news for Target. This legal ruling may have a chilling effect on other merchant and retail organizations.

I don’t want you to experience what Target is going through. I changed jobs at the beginning of 2014 to help as many organizations as possible avoid major breaches that could cause irreparable damage. If you have a security supplier or service provider that is helping you, great. If you fear that your organization may be in the news someday because you know your security is deficient (or you just don’t know), we can help in many ways.

I hope you have a joyous holiday season, and that you start 2015 without living in fear.

Internal Network Access: We’re Doing It Wrong

A fundamental flaw in network design and access management gives malware an open door into organizations.

Run the information technology clock back to the early 1980s, when universities and businesses began implementing local area networks. We connected ThinNet or ThickNet cabling to our servers and workstations and built the first local area networks, using a number of framing technologies – primarily Ethernet.

By design, Ethernet is a shared medium technology, which means that all stations on a local area network are able to communicate freely with one another. Whether devices called “hubs” were used, or if stations were strung together like Christmas tree lights, the result was the same: a completely open network with no access restrictions at the network level.

Fast forward a few years, when network switches began to replace hubs. Networks were a little more efficient, but the access model was unchanged – and remains so to this day. The bottom line:

Every workstation has the ability to communicate with every other workstation on all protocols.

This is wrong. Open internal networks go against the grain of the most important access control principle: deny access except where explicitly required. On today’s internal networks, there is no denial at all!

What I’m not talking about here is the junction between workstation networks and data center networks. Many organizations have introduced access control, primarily in the form of firewalls, and less often in the form of user-level authentication, so that internal data centers and other server networks are no longer a part of the open workstation network. That represents real progress, although many organizations have not yet made this step. But this is not the central point of this article, so let’s get back to it.

There are two reasons why today’s internal networks should not be wide open like most are now. The first is that openness facilitates unsanctioned internal resource sharing. Most organizations have policies that prohibit individual workstations from being used to share resources with others, yet on an open network users can set up file shares and share their directly-connected printers with other users. These workstations then contribute to the Shadow-IT problem by becoming non-sanctioned resources.

The second, and more serious, reason is that open networks facilitate the lateral movement of malware and intruders. For fifteen years or more, tens of thousands of organizations have been compromised by malware that self-propagates through internal networks. Worms such as Code Red, Nimda, Slammer, and Blaster scan internal networks to find further systems to infect. Attackers who successfully install RATs (remote access Trojans) on victim computers can scan local networks to enumerate internal hosts and select additional targets. Today’s internal networks do nothing to stop these techniques.

The model of wide-open access needs to be inverted, so that the following rules of network access are implemented:

  1. Workstations have no network access with each other.
  2. Workstations have access ONLY to servers and services as required.

This should be the new default; this precisely follows the access control principle of deny all except that which is specifically required.

Twenty years ago, this would have meant routing all workstation traffic through firewalls that would make pass or no-pass decisions. Today, however, in my opinion, network switches themselves are the right place to enforce this type of access control.
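
The default-deny model described above can be sketched in a few lines of code. The rule set, host names, and function below are hypothetical, purely for illustration: the point is that a flow is permitted only if it matches an explicit allow rule, and everything else is dropped.

```python
# Hypothetical default-deny access policy. Any flow that does not match
# an explicit allow rule is denied -- including all workstation-to-
# workstation traffic, on every protocol.

ALLOW_RULES = [
    # (source network, destination host, destination port)
    ("workstations", "file-server", 445),   # SMB to the sanctioned file server
    ("workstations", "print-server", 631),  # IPP to the print server
    ("workstations", "web-proxy", 8080),    # outbound web via the proxy
]

def is_allowed(src_net: str, dst_host: str, dst_port: int) -> bool:
    """Return True only for explicitly permitted flows (default deny)."""
    return (src_net, dst_host, dst_port) in ALLOW_RULES

# Traffic to a sanctioned server matches a rule and passes:
print(is_allowed("workstations", "file-server", 445))    # True
# Workstation-to-workstation SMB matches no rule, so lateral movement
# is blocked by default:
print(is_allowed("workstations", "workstation-42", 445))  # False
```

Whether this policy lives in a firewall, a switch ACL, or an 802.1X-style enforcement point, the decision logic is the same: the deny is implicit and universal, and every allow is deliberate.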

Recovery Capacity Objective: a new metric for BCP / DRP

Business continuity and disaster recovery planning professionals rely on well-known metrics that are used to drive planning of emergency operations procedures and continuity of operations procedures. These metrics are:

  • Maximum Tolerable Downtime (MTD) – this is a time value that represents the greatest period of time that an organization can tolerate the outage of a critical process or system without sustaining permanent damage to its ongoing viability. The units of measure are typically days, but can be smaller (hours, minutes) or larger (weeks, months).
  • Recovery Point Objective (RPO) – this is a time value that represents the maximum potential data loss in a disaster situation. For example, if an organization backs up data for a key business process once per day, the RPO would be 24 hours. This should not be confused with recovery time objective.
  • Recovery Time Objective (RTO) – this is a time value that represents the maximum period of time that a business process or system would be incapacitated in the event of a disaster.  This is largely independent of recovery point objective, which is dependent on facilities that replicate key business data to another location, preserving it in case the primary location suffers a disaster that damages business data.
  • Recovery Consistency Objective (RCO) – expressed as a percentage, this represents the maximum loss of data consistency during a disaster. In complex, distributed systems, it may not be possible to perfectly synchronize all business records. When a disaster occurs, often there is some inconsistency found on a recovery site where some data is “fresher” than other data. Different organizations and industries will have varying tolerances for data consistency in a disaster situation.
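
For illustration, the four metrics above could be captured in a simple structure. The class, field names, and the sanity check that RTO should not exceed MTD are my own additions (the RTO-versus-MTD check is a common planning convention, not something stated in the definitions above):

```python
from dataclasses import dataclass

@dataclass
class RecoveryObjectives:
    mtd_hours: float   # Maximum Tolerable Downtime
    rpo_hours: float   # Recovery Point Objective (maximum data loss window)
    rto_hours: float   # Recovery Time Objective (maximum time to restore)
    rco_pct: float     # Recovery Consistency Objective (0-100 percent)

    def validate(self) -> list[str]:
        """Flag internally inconsistent recovery objectives."""
        problems = []
        if self.rto_hours > self.mtd_hours:
            problems.append("RTO exceeds MTD: the plan cannot meet the tolerance")
        if not 0 <= self.rco_pct <= 100:
            problems.append("RCO must be a percentage between 0 and 100")
        return problems

# Daily backups (RPO 24h), 8-hour recovery, 3-day tolerance, 95% consistency:
objs = RecoveryObjectives(mtd_hours=72, rpo_hours=24, rto_hours=8, rco_pct=95)
print(objs.validate())  # [] -- the objectives are internally consistent
```

A set of objectives where recovery takes longer than the business can tolerate (say, RTO of 8 hours against an MTD of 4) would be flagged by the same check.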

In my research on business continuity planning and disaster recovery planning, I have not come across a standard metric that represents the capacity of a recovery system to process business transactions, as compared to the primary system. Yet in professional dealings I have encountered this question many times.

I therefore propose a new metric, used to establish and communicate a recovery objective that represents the capacity of a recovery system:

  • Recovery Capacity Objective (RCapO) – expressed as a percentage, this represents the capacity of a recovery process or system as compared to the primary process or system.

Arguments for this metric:

  • Awareness. The question of recovery system capacity is not consistently addressed within an organization or to the users of a process or system.
  • Consistency. The adoption of a standard metric for recovery system capacity will allow organizations to define, communicate, and compare this objective in a uniform way.
  • Planning. The users of a process or system can reasonably anticipate business conditions should a business process or system suffer a disaster that results in the implementation of emergency response procedures.
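
As defined above, RCapO is a simple ratio. A minimal sketch of its computation follows; the function name and the choice of transactions per second as the capacity unit are my own, and any consistent throughput measure would work:

```python
def recovery_capacity_objective(recovery_tps: float, primary_tps: float) -> float:
    """RCapO: recovery-site capacity as a percentage of primary capacity.

    Capacity is expressed here in transactions per second (TPS), but any
    consistent unit of throughput may be substituted.
    """
    if primary_tps <= 0:
        raise ValueError("primary capacity must be positive")
    return 100.0 * recovery_tps / primary_tps

# A recovery site sized at 600 TPS against a 1,000 TPS primary:
print(recovery_capacity_objective(600, 1000))  # 60.0
```

An RCapO of 60% tells the users of the system, before any disaster occurs, that they should plan for emergency operations at a little more than half of normal capacity.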

Why wait for a security breach to improve security?

Neiman Marcus is the victim of a security breach. Neiman Marcus provided a statement to journalist Brian Krebs:

Neiman Marcus was informed by our credit card processor in mid-December of potentially unauthorised payment card activity that occurred following customer purchases at our Neiman Marcus Group stores.

We informed federal law enforcement agencies and are working actively with the U.S. Secret Service, the payment brands, our credit card processor, a leading investigations, intelligence and risk management firm, and a leading forensic firm to investigate the situation. On January 1st, the forensics firm discovered evidence that the company was the victim of a criminal cyber-security intrusion and that some customers’ cards were possibly compromised as a result.

We have begun to contain the intrusion and have taken significant steps to further enhance information security.

The security of our customers’ information is always a priority and we sincerely regret any inconvenience. We are taking steps, where possible, to notify customers whose cards we know were used fraudulently after making a purchase at our store.

I want to focus on one of Neiman Marcus’ statements:

We have … taken significant steps to further enhance information security.

Why do companies wait for a disaster to occur before making improvements that could have prevented the incident – saving the organization and its customers untold hours of lost productivity? Had Neiman Marcus taken these steps earlier, the breach might not have occurred. Or so we think.

Why do organizations wait until a security incident occurs before taking more aggressive steps to protect information?

  1. They don’t think it will happen to them. Often, an organization eyes a peer that suffered a breach and thinks, their security and operations are sloppy and they had it coming. But alas, those in an organization who think their security and operations are not sloppy are probably not familiar with their security and operations. In most organizations, security and systems are just barely good enough to get by. That’s human nature.
  2. Security costs too much. To them I say, “If you think prevention is expensive, have you priced incident response lately?”
  3. We’ll fix things later. Sure – only if someone is holding it over your head (like a payment processor pushing a merchant or service provider towards PCI compliance). That particular form of “later” never comes. Kicking the can down the road doesn’t solve the problem.

It is human nature to believe that another’s misfortunes can’t happen to us. Until they do.