iOS Apps That Improve My Professional Life

I rarely write publicly about the apps I use on my iPhone and iPad, but there are a few I have been raving about to colleagues and clients lately.

1. Sleep Cycle

This is a unique alarm clock that tracks my sleep quality. It uses the iPhone’s motion sensors to detect whether I’m tossing and turning or sleeping more soundly. But my favorite feature is the intelligent wake-up alarm that gently rouses me when my sleep is shallower. How it works: say I want my alarm to go off at 6:00am; Sleep Cycle will monitor my sleep “depth” for the 20 minutes (configurable) prior to 6:00am and awaken me when I’m almost awake anyway. The result: since I started using Sleep Cycle three months ago, I have not once been shocked out of a deep sleep by my alarm. Instead, I’m nudged awake, and as the first thing I experience each morning, it makes for a better day.
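For the technically curious, here is a rough Python sketch of how an intelligent wake-up window might work. This is purely illustrative; the window, threshold, and should_wake function are my own inventions, not Sleep Cycle’s actual algorithm.

    # Hypothetical sketch of an intelligent wake-up window; not Sleep Cycle's
    # actual algorithm. Values and thresholds are invented for illustration.
    WAKE_WINDOW_MIN = 20       # configurable window before the alarm time
    SHALLOW_THRESHOLD = 0.3    # 0.0 = deep sleep, 1.0 = fully awake

    def should_wake(minutes_until_alarm: int, sleep_shallowness: float) -> bool:
        """Fire early if we're inside the window and sleep is already shallow;
        never fire later than the set alarm time."""
        if minutes_until_alarm <= 0:
            return True                       # never sleep past the set time
        in_window = minutes_until_alarm <= WAKE_WINDOW_MIN
        return in_window and sleep_shallowness >= SHALLOW_THRESHOLD

    # 12 minutes before 6:00am, motion data suggests near-waking sleep:
    print(should_wake(12, 0.45))  # True: nudge the user awake now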

2. Waze

This is the navigation app that sort of “crowdsources” traffic conditions and hazards. Like Google Maps, it speaks turn-by-turn directions to my destination. But here’s where it’s different: Waze is aware of the speeds of other drivers on my route, and if traffic suddenly slows (on the interstate as well as on city streets and country roads), Waze will navigate me around it.

Another cool feature: if there is a hazard ahead (accident, vehicle in road, vehicle on shoulder, debris), Waze will warn me with remarkable accuracy. How this works: as you travel your route, if you see one of these hazards, two or three taps will mark it and warn the drivers behind you. When the hazard is gone, you just tap “not there,” and drivers behind you will no longer be warned.
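To make the reporting mechanic concrete, here is a small Python sketch of how crowdsourced hazard reports could be modeled. The data model and the still_active rule are hypothetical, not Waze’s actual implementation.

    from dataclasses import dataclass

    @dataclass
    class Hazard:
        kind: str               # "accident", "vehicle on shoulder", "debris", ...
        location: tuple         # (latitude, longitude)
        confirmations: int = 1  # drivers who tapped to report or confirm it
        not_there: int = 0      # drivers who tapped "not there"

        def still_active(self) -> bool:
            # Stop warning drivers once "not there" reports outweigh confirmations.
            return self.not_there < self.confirmations

    hazard = Hazard("debris", (47.61, -122.33))
    hazard.confirmations += 2      # two more drivers confirm it
    hazard.not_there += 3          # later, the road has been cleared
    print(hazard.still_active())   # False: drivers behind are no longer warned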

Another nice feature for me is that the turn-by-turn directions can be transmitted via Bluetooth to my so-equipped motorcycle helmet, with my iPhone in my jacket pocket. I get turn-by-turn directions in audio only, so I can still find my way to a new client location when I’m motorcycling in nice weather.

3. Uber

Perfect for my business travel. I really don’t need to say any more.

4. Urbanspoon

As a frequent business traveler, I’m often trying new restaurants and looking for something new for internal or client dinners. Urbanspoon lets you filter by cuisine, distance, rating, or cost, or just randomly select someplace nearby if you are feeling adventurous.

The security breaches continue

On Tuesday, September 2, 2014, Home Depot became the latest merchant to announce a potential security breach.

These days, this usually means intruders have stolen credit card numbers from its point-of-sale (POS) systems. The details have yet to be revealed.

If there is any silver lining for Home Depot, it’s that another large merchant will probably soon announce its own breach, shifting the spotlight. One thing that will be interesting to watch is how Home Depot handles the breach, and whether its CEO, CIO, and CISO/CSO (if it has one) manage to keep their jobs. Recall that Target’s CEO and CIO lost their jobs over the late-2013 Target breach.

Merchants are in trouble. Aging technologies, some related to the continued use of magnetic-stripe credit cards, are making it easier for intruders to steal credit card numbers from merchant POS systems. Chip-and-PIN cards are coming (they’ve been used in Europe for years), but they will not make breaches like this a thing of the past; rather, criminal organizations, which have made a lot of money from recent break-ins, are developing more advanced tools such as the memory-scraping malware that was allegedly used in the Target breach. You can be sure that this malware will continue to improve.

A promising development is the practice of encrypting card numbers in the hardware of the card reader, instead of in the POS system software. But even this is not wholly secure: the companies that manufacture this hardware will themselves be attacked, in the hope that intruders can steal the secrets of this encryption and exploit them. If this sounds like science fiction, remember the RSA breach, which followed a very similar pattern.
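To illustrate why reader-level encryption helps, here is a minimal Python sketch. The Fernet cipher from the cryptography package stands in for the reader’s hardware encryption, and the function names are invented; real readers use payment-industry key management schemes, not this.

    from cryptography.fernet import Fernet

    # In a real deployment, this key lives in the reader's tamper-resistant
    # hardware and the matching key is held by the payment processor.
    reader_key = Fernet.generate_key()

    def card_reader_swipe(pan: str) -> bytes:
        """The reader encrypts the card number (PAN) at the moment of capture."""
        return Fernet(reader_key).encrypt(pan.encode())

    def pos_software(encrypted_pan: bytes) -> bytes:
        """The POS only ever handles ciphertext, so memory-scraping malware
        running on the POS finds nothing directly usable."""
        return encrypted_pan  # forwarded to the processor as-is

    ciphertext = pos_software(card_reader_swipe("4111111111111111"))
    print(ciphertext[:16], b"...")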

The cat-and-mouse game continues.

Internal Network Access: We’re Doing It Wrong

A fundamental flaw in network design and access management gives malware an open door into organizations.

Run the information technology clock back to the early 1980s, when universities and businesses began implementing local area networks. We connected ThinNet or ThickNet cabling to our servers and workstations and built the first local area networks, using a number of framing technologies – primarily Ethernet.

By design, Ethernet is a shared medium technology, which means that all stations on a local area network are able to communicate freely with one another. Whether devices called “hubs” were used, or if stations were strung together like Christmas tree lights, the result was the same: a completely open network with no access restrictions at the network level.

Fast forward a few years, when network switches began to replace hubs. Networks were a little more efficient, but the access model was unchanged – and remains so to this day. The bottom line:

Every workstation has the ability to communicate with every other workstation on all protocols.

This is wrong. Open internal networks go against the grain of the most important access control principle: deny access except when explicitly required. On today’s internal networks, there is no denial at all!

To be clear, I’m not talking about the junction between workstation networks and data center networks. Many organizations have introduced access control there, primarily in the form of firewalls and, less often, user-level authentication, so that internal data centers and other server networks are no longer part of the open workstation network. That represents real progress, although many organizations have not yet taken even this step. But it is not the central point of this article, so let’s get back to it.

There are two reasons why today’s internal networks should not be wide open the way most are now. The first is that open networks facilitate unsanctioned internal resource sharing. Most organizations have policies that prohibit individual workstations from sharing resources with others, yet on an open network users can set up file shares and share their directly connected printers anyway. This is a bad idea mainly because those workstations become non-sanctioned resources, contributing to the shadow IT problem.

The second, and more serious, reason is that open networks facilitate the lateral movement of malware and intruders. For fifteen years or more, tens of thousands of organizations have been compromised by malware that self-propagates through internal networks. Worms such as Code Red, Nimda, Slammer, and Blaster scan internal networks to find other systems to infect. Attackers who successfully install remote access Trojans (RATs) on victim computers can likewise scan and enumerate internal networks to select additional targets. Today’s internal networks do nothing to stop these techniques.

The model of wide-open access needs to be inverted, so that the following rules of network access are implemented:

  1. Workstations have no network access to one another.
  2. Workstations have access ONLY to servers and services as required.

This should be the new default; it precisely follows the access control principle of denying everything except that which is explicitly required.

Twenty years ago, this would have meant that all workstation traffic had to traverse firewalls that would make pass-or-block decisions. Today, however, in my opinion, network switches themselves are the right place to enforce this type of access control.
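In spirit, the switch would evaluate every flow against an allow list and deny everything else, including workstation-to-workstation traffic. Here is a minimal Python sketch of that decision logic, with hypothetical hosts, roles, and ports; real enforcement would live in switch ACLs, not application code.

    # Explicitly permitted flows: (source role, destination host, destination port)
    ALLOWED_FLOWS = {
        ("workstation", "mail-server", 443),
        ("workstation", "file-server", 445),
        ("workstation", "print-server", 631),
    }

    def permit(src_role: str, dst_host: str, dst_port: int) -> bool:
        """Default deny: a flow passes only if it is explicitly required."""
        return (src_role, dst_host, dst_port) in ALLOWED_FLOWS

    print(permit("workstation", "file-server", 445))  # True: explicitly allowed
    print(permit("workstation", "workstation", 445))  # False: peer access denied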

Assumption of Breach

A new philosophy of security incident prevention and response, called Assumption of Breach, is leading security professionals to think differently about security incidents. Previously, the popular mindset among security professionals was to prevent security breaches from occurring. With assumption of breach, security professionals adopt the mindset that one or more breaches may already have occurred in their organizations, whether or not those breaches have been discovered.

In my opinion, this is a more realistic philosophy than prior ways of thinking. Adversaries wield advanced tools and techniques, and are often able to compromise networks with even advanced defenses. Assumption of breach also requires humility on the part of security managers and executives, who might otherwise believe that their networks are impenetrable.

– excerpt from CISSP Guide to Security Essentials, 2nd edition

For more information on the topic of Assumption of Breach: (free registration required)


Reno, Nevada

I am visiting Reno, Nevada after quite a long absence. I’m here to speak at a professional event on the topic of human factors and data security (in other words, why we continue to build insecure systems).

My IT career started here in Reno, with on-campus jobs in computing (operations and software development), and later in local government, banking, and casino gaming. Each job built on the last, and I gained positions with greater responsibility, creativity, and rewards of various sorts.

I buried my young son in Reno – it seems like many lifetimes ago. His grave was my first stop. Time is a great healer – you’ll have to trust me on this one if you have recently suffered a big loss.

I looked up a couple of long-time friends, but waited until the last minute. They’re probably busy with their own lives today.

Done with my coffee stop and time to check in to my hotel. My talk is tonight, and then I’m back on the road tomorrow with other stops in the Pacific Northwest.

Recovery Capacity Objective: a new metric for BCP / DRP

Business continuity and disaster recovery planning professionals rely on well-known metrics to drive the planning of emergency operations procedures and continuity of operations procedures. These metrics are:

  • Maximum Tolerable Downtime (MTD) – this is an arbitrary time value that represents the greatest period of time that an organization is able to tolerate the outage of a critical process or system without sustaining permanent damage to the organization’s ongoing viability. The units of measure are typically days, but can be smaller (hours, minutes) or larger (weeks, months).
  • Recovery Point Objective (RPO) – this is a time value that represents the maximum potential data loss in a disaster situation. For example, if an organization backs up data for a key business process once per day, the RPO would be 24 hours (see the worked example after this list). This should not be confused with recovery time objective.
  • Recovery Time Objective (RTO) – this is a time value that represents the maximum period of time that a business process or system would be incapacitated in the event of a disaster.  This is largely independent of recovery point objective, which is dependent on facilities that replicate key business data to another location, preserving it in case the primary location suffers a disaster that damages business data.
  • Recovery Consistency Objective (RCO) – expressed as a percentage, this represents the maximum loss of data consistency during a disaster. In complex, distributed systems, it may not be possible to perfectly synchronize all business records. When a disaster occurs, often there is some inconsistency found on a recovery site where some data is “fresher” than other data. Different organizations and industries will have varying tolerances for data consistency in a disaster situation.
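To see how these metrics fit together, here is a brief worked example in Python with hypothetical figures. The one hard relationship worth encoding is that RTO must not exceed MTD, or the plan cannot meet the business need:

    from dataclasses import dataclass

    @dataclass
    class RecoveryObjectives:
        mtd_hours: float    # Maximum Tolerable Downtime
        rto_hours: float    # Recovery Time Objective
        rpo_hours: float    # Recovery Point Objective (worst-case data loss)
        rco_percent: float  # Recovery Consistency Objective

        def sanity_check(self) -> None:
            # A plan whose RTO exceeds MTD fails by definition.
            assert self.rto_hours <= self.mtd_hours, "RTO exceeds MTD"

    # Backups run once per day, so the worst-case data loss (RPO) is 24 hours.
    objectives = RecoveryObjectives(mtd_hours=72, rto_hours=48,
                                    rpo_hours=24, rco_percent=95)
    objectives.sanity_check()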

In my research on the topic of business continuity planning and disaster recovery planning, I have not come across a standard metric that represents the capacity of a recovery system to process business transactions, as compared to the primary system. Yet in professional dealings I have encountered this question many times.

I therefore propose a new metric to establish and communicate a recovery objective that represents the capacity of a recovery system:

  • Recovery Capacity Objective (RCapO) – expressed as a percentage, this represents the capacity of a recovery process or system as compared to the primary process or system.
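A quick, hypothetical calculation shows how the metric would be expressed:

    # Hypothetical capacities, measured in transactions per second.
    primary_tps = 1000   # what the primary system handles at peak
    recovery_tps = 400   # what the recovery system can sustain

    rcap_o = recovery_tps / primary_tps * 100
    print(f"RCapO = {rcap_o:.0f}%")  # 40%: users should expect reduced throughput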

Arguments for this metric:

  • Awareness. The question of recovery system capacity is not consistently addressed within organizations or communicated to the users of a process or system; a named metric puts it on the table.
  • Consistency. A standard metric allows recovery capacity to be expressed, compared, and communicated the same way across organizations and industries.
  • Planning. The users of a process or system can reasonably anticipate business conditions should a disaster force the activation of emergency response procedures.

Why wait for a security breach to improve security?

Neiman Marcus is the victim of a security breach. The company provided this statement to journalist Brian Krebs:

Neiman Marcus was informed by our credit card processor in mid-December of potentially unauthorised payment card activity that occurred following customer purchases at our Neiman Marcus Group stores.

We informed federal law enforcement agencies and are working actively with the U.S. Secret Service, the payment brands, our credit card processor, a leading investigations, intelligence and risk management firm, and a leading forensic firm to investigate the situation. On January 1st, the forensics firm discovered evidence that the company was the victim of a criminal cyber-security intrusion and that some customers’ cards were possibly compromised as a result.

We have begun to contain the intrusion and have taken significant steps to further enhance information security.

The security of our customers’ information is always a priority and we sincerely regret any inconvenience. We are taking steps, where possible, to notify customers whose cards we know were used fraudulently after making a purchase at our store.

I want to focus on one of Neiman Marcus’ statements:

We have … taken significant steps to further enhance information security.

Why do companies wait for a disaster to occur before making improvements that could have prevented the incident – saving the organization and its customers untold hours of lost productivity? Had Neiman Marcus taken these steps earlier, the breach might not have occurred. Or so we think.

Why do organizations wait until a security incident occurs before taking more aggressive steps to protect information?

  1. They don’t think it will happen to them. Often, an organization eyes a peer that suffered a breach and thinks, their security and operations are sloppy and they had it coming. But those in an organization who believe their own security and operations are not sloppy are probably just unfamiliar with them. In most organizations, security and systems are just barely good enough to get by. That’s human nature.
  2. Security costs too much. To them I say, “If you think prevention is expensive, have you priced incident response lately?”
  3. We’ll fix things later. Sure – only if someone is holding it over your head (like a payment processor pushing a merchant or service provider towards PCI compliance). That particular form of “later” never comes. Kicking the can down the road doesn’t solve the problem.

It is human nature to believe that another’s misfortune can’t happen to us. Until it does.