Category Archives: Risks

Clean out your contact lists

I recently watched a Rob Braxman video on the security of encrypted messaging apps like Signal and WhatsApp. In it, Rob pointed out that many apps access our contact lists and build webs of association. Even though the cryptography protecting message contents is generally effective, law enforcement and intelligence agencies may still be able to learn the identities of a person’s connections.

Let’s dig deeper.

If a law enforcement agency considers you a person of interest, they may discover that you use encrypted messaging apps like Signal. While law enforcement will not be able to easily view the contents of your conversations, they will be able to see with whom you are conversing.

Image courtesy Aussie Broadband

Also, the appearance of using an encrypted messaging app could suggest that you have something to hide.

Let’s look at this from a different perspective. Consider an active law enforcement investigation focusing on a particular person. If you are in the person’s contact list, and if that person is known to be communicating with you on an encrypted service, then you may become another person of interest in the investigation.

I watched Rob’s video twice, and then I recalled something I often see in Signal: when someone in my contact list installs Signal, I get a notification that the contact is using the app. Frequently, I don’t even recognize the contact’s name, and I dismiss the notification. This has happened dozens of times this year.

Then it hit me: I have been collecting contacts for decades, and they’re stored in multiple services (primarily Yahoo and Google). In previous jobs, I’ve had associations with numerous clients, partners, vendors, co-workers, and other associates, resulting in an accumulation of thousands of contacts, most of whom I barely know.

Last week, I found it difficult to rationalize keeping all of these contacts, so I purged them. In Google alone, I had well over one thousand contacts. After spending time last weekend deleting extraneous contacts, I’m down to about three hundred, and I might go back through them and remove many more.
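If you want to attempt the same purge, both Google and Yahoo let you export contacts to a CSV file, which makes a first-pass review easy to script. Here is a minimal sketch in Python; the file name and column headings are assumptions, so check the header row of your own export:

    import csv

    # Flag exported contacts with no organization and no notes -- in my
    # experience, a decent sign you no longer remember who they are.
    # "contacts_export.csv" and the column names are hypothetical examples.
    with open("contacts_export.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            name = (row.get("Name") or "").strip()
            org = (row.get("Organization Name") or "").strip()
            notes = (row.get("Notes") or "").strip()
            if not org and not notes:
                print(f"Review for deletion: {name or '(no name)'}")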

Encrypted apps and your association with contacts are not the only risks of maintaining a long contact list. Another issue is this: if someone breaks into any of the services where I keep contacts, I don’t want those people subjected to joe job attacks and other abuse made possible through contact harvesting.

Until recently, I didn’t consider my accumulated contacts a liability, but I do now.

In my day job, my responsibilities include leading numerous programs, including data governance, which encompasses data classification and data retention. And, having been a QSA for many years, the concepts of data-as-asset and data-as-liability are clear to me. For instance, retaining credit card data after a transaction has been completed may provide value to an organization, but it also presents a liability: if that stored card data is compromised, the consequences may significantly outweigh the benefit. Somehow, I didn’t apply this concept to personal contact data. Thanks again to Rob Braxman for nudging me to realize that contact data can be just as toxic as other forms of sensitive information.

Postscript: think about this in another way: would you want others you worked with in the past to remove you from their contact lists?

Backups – the apparently forgotten craft

At the dawn of my career, I worked in two old-school mainframe computer operations organizations. We spent a considerable amount of time (I’m estimating 20%) doing backups. Sure, computers were a lot slower then, and we had a lot less data.

We did backups for a reason: things happen. All kinds of things, like hardware failures, software bugs, accidents, mistakes, small and large disasters, and more. I can recall numerous times when we had to recover individual files, complete databases, and ground-up (“bare metal”) recoveries to get things going again.

We didn’t wait for these scenarios to occur to find out whether we could perform any of these types of restores. We practiced, regularly. In one mainframe shop early in my career, we completely restored our OS every Sunday night. Admittedly, this was primarily a storage defragmentation measure. Still, it was a bare metal restoration, precisely what we would do if our data center burned down and we had to recover our data and applications on a different system.

Was this exciting work? Certainly not.

Interesting? Not in the least.

Essential? Absolutely.
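And it’s a discipline that is easy to carry forward today. As a minimal sketch of a modern restore drill (the paths are hypothetical placeholders, not a prescription), you can hash a freshly restored copy and compare it against the live data:

    import hashlib
    import pathlib

    def tree_digest(root: str) -> str:
        """Hash every file under root (relative path + contents) into one digest."""
        h = hashlib.sha256()
        base = pathlib.Path(root)
        for p in sorted(base.rglob("*")):
            if p.is_file():
                h.update(str(p.relative_to(base)).encode())
                h.update(p.read_bytes())
        return h.hexdigest()

    # Hypothetical paths: live data, and a copy just restored from backup media.
    if tree_digest("/srv/app/data") == tree_digest("/tmp/restore-drill/data"):
        print("Restore drill passed: restored data matches the live copy")
    else:
        print("Restore drill FAILED: fix this before you need the backup for real")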

So what am I getting at here? Am I merely reminiscing about the good old days? Hardly.

I’m talking about ransomware. At times, it’s difficult for me to sympathize with organizations that are victims of ransomware. It’s hard for me to rationalize why an organization would even remotely consider paying a ransom, particularly when the FBI reported that only about half of organizations that paid were able to decrypt their data. (Sorry, I cannot find the link to that advisory; I’ll keep looking and update this article when I find it.)

A survey by Kaspersky revealed some findings that shocked me:

  • 37 percent of respondents were unable to accurately define ransomware, let alone understand the damage it can deliver.
  • Of those survey respondents who suffered a ransomware attack, 40 percent said they would not know the immediate steps to take in response.

I’m amazed by these results. Do IT organizations no longer understand IT operations fundamentals that have been around for decades? I hate to sound harsh, but if this is the case, organizations deserve the consequences they experience when ransomware (or human error, or software bugs, etc.) strikes.

That said… I am acutely aware that it can be difficult to find good IT help these days. However, if an organization is crippled by ransomware, it has already gone “all-in” on information technology but neglected to implement common safeguards like data backup.

(image courtesy recordnations.com)

CISOs are not risk owners

Many organizations have implicitly adopted the mistaken notion that the chief information security officer (CISO) is the de facto risk owner for all identified cyber risk matters.

In a properly run risk management program, risk owners are business unit leaders and department heads who own the business activity where a risk has been identified. For instance, if a risk is identified regarding the long-term storage of full credit card numbers in an e-commerce environment, the risk owner would be the executive who runs the e-commerce function. That executive would decide to mitigate, avoid, transfer, or accept that risk.

The role of the CISO is to operate the risk management program and facilitate discussions and risk treatment decisions, but not to make those decisions. A CISO can be considered a risk facilitator, but not a risk owner.

Even when embraced and practiced, this concept does not always stop an organization from sacking the CISO should a breach occur. A dismissal might even be appropriate, for example, if the risk management program that the CISO operated was not performing as expected.

— excerpt from CRISC Certified in Risk and Information Systems Control™ All-In-One Exam Guide, 2nd edition.

In terms of cybersecurity and ransomware, most organizations are anti-vaxxers

Prologue: There are many opinions and points of view with regard to the origin and nature of COVID, the response to the pandemic (or “plandemic” if you prefer), and vaccinations. I’m not here to express any opinion, but I will borrow from these events as I briefly use vaccination as a metaphor. And thanks to my former colleague Jason Popp for coining the phrase that I’m borrowing.

In a comment to a LinkedIn post about ransomware, Jason said, “If ransomware is a pandemic, then most organizations are anti-vaxxers.”

Brilliant.

I’ll state this another way: the tools and techniques for ransomware prevention have been around for decades. Decades. By and large, organizations hit with ransomware are not employing these techniques effectively, if at all. Implicitly, most organizations choose not to employ the safeguards that would prevent most ransomware attacks.

Why? Good question. Perhaps it’s normalcy bias. Or that cybersecurity is too expensive, or inconvenient to users, or that it’s too hard to find good cybersecurity people. Or that cybersecurity is a distraction from the organization’s mission (and ransomware isn’t?).

Ransomware presents several challenges. First, many companies that pay ransoms still don’t get their data back. And, more recently, the U.S. Treasury Department’s Office of Foreign Assets Control (OFAC) has advised that paying ransoms to sanctioned cybercriminals may violate OFAC regulations.

The solution? Perform or commission a risk assessment. Hire cybersecurity professionals who know how to fix deficiencies and manage effective security governance, operations, and response.

Or, just stop using computers.

The Fifth Option in Risk Treatment

For decades, risk management frameworks have cited the same four risk treatment options: accept, mitigate, transfer, and avoid. There is, however, a fifth option that some organizations select: ignore the risk.

Ignoring a risk is a choice, although not a wise one. Ignoring a risk means doing nothing about it – not even making a decision about it. It amounts to little more than pretending the risk does not exist. It’s off the books.

Organizations without risk management programs may be implicitly ignoring all risks, or many of them at least. Organizations might also be practicing informal and maybe even reckless risk management – risk management by gut feel. Without a systematic framework for identifying risks, many are likely to go undiscovered. This could also be considered ignoring risks through the implicit refusal to identify them and treat them properly.

— excerpt from an upcoming book on risk management

What I Was Doing On 9/11/2001

In 2001, I was the security strategist for a national wireless telecommunications company. I usually awoke early to read the news online, and on September 11 I was in my home office shortly after 5:00 a.m. Pacific Time. I was perusing corporate e-mail and browsing the news when I saw a story about a plane crashing into a building in New York.

I had a television in the home office, and I reached over to turn it on. I tuned to CNN and watched as smoke poured from one of the two towers in the background, as two commentators droned on about what this could be about. While watching this I saw the second airliner emerge from the background and crash into the second tower.

Like many, I thought I was watching a video loop of the first crash, but soon realized I was watching live TV.

I e-mailed and IM’d members of our national security team to make them aware of these developments. Before 6:00 a.m. Pacific Time, we had our national emergency conference bridge up and running (and it would stay up all day). Very soon we understood the gravity of the situation, and wondered what would happen next. We were a nation under attack and needed to take steps to protect our business. Within minutes we had initiated a nationwide lockdown (I cannot divulge details on what that means), and over the next several hours we took more steps to protect the company.

——–

Since I was a teenager, I have had a particular interest in World War Two. My father was a bombardier instructor, and his business partner and best friend was a highly decorated air ace.

——–

We are under attack and we are at war, I thought to myself early that morning, and while I don’t remember specifics about our national conference bridge, I’m certain that I or someone else on the bridge said as much.  Like the sneak attack on Pearl Harbor on December 7, 1941, we all believed that the 9/11 attacks could have been the opening salvos of a much larger plan. Thankfully that was not the case. But in the moment, there was no way to know for sure.

For many days, I, and probably a lot of Americans, expected more attacks. The fact that they didn’t come was both a surprise and a relief.

Will 2016 Be The Year Of The Board?

This year has exploded out of the gate, starting on Jan 4 (the first business day of the year) with a flurry of activity. Sure, some of this is just new budget money that is available. However, I’m seeing a lot of organizations in my part of the world (California, Oregon, Washington, Idaho, Montana, Alberta, British Columbia, and Alaska) asking for help on the topic of communicating to executive management and the board of directors.

It’s about time.

Really, though, this makes sense.  Boards of directors aren’t interested in fads in business management. They rely upon their tried-and-true methods of managing businesses through board meetings, audit and risk committees, and meetings with executives. Until recently, board members perceived information security as a tactical matter not requiring their attention. However, with so many organizations suffering from colossal breaches, board members are starting to ask questions, which is a step in the right direction.

Let me say this again. Board members’ asking questions is a big sign of progress. And it doesn’t matter, mostly, what those questions are. It’s a sign they are thinking about information security, perhaps for the first time. And they’re bold enough to ask questions, even if they fear they are asking stupid questions.

The National Association of Corporate Directors (NACD) has an excellent publication on board oversight of information security, called the Cyber Risk Oversight Handbook. Last I checked, a soft copy is free. Whether you are a board member or an infosec staffer, I highly recommend this for your reading list in early 2016.

In air travel and data security, there are no guarantees of absolute safety

The recent tragic GermanWings crash has illustrated an important point: even the best designed safety systems can be defeated in scenarios where a trusted individual decides to go rogue.

In the case of the GermanWings crash, the co-pilot was able to lock the pilot out of the cockpit. The cockpit door locking mechanism is designed to enable a trusted individual inside the cockpit to prevent an unwanted person from entering.

Similar safeguards exist in information systems. However, these safeguards only work when those at the controls are competent and trustworthy. If they go rogue, there is little, if anything, that can be done to slow or stop their actions. Any administrator with responsibilities and privileges for maintaining software, operating systems, databases, or networks has near-absolute control over those objects. If they decide to go rogue, at best the security mechanisms will record their malevolent actions, just as the cockpit voice recorder documented the pilot’s attempts to re-enter the cockpit, as well as the co-pilot’s breathing, indicating he was still alive.

Remember that technology – even protective controls – cannot know the intent of the operator. Technology, the amplifier of a person’s will, blindly obeys.

Don’t let it happen to you

This is the time of year when we reflect on our personal and professional lives and think about what we want to accomplish in the years ahead. I’ve been thinking about this over the past couple of days. Yesterday, an important news story about the 2013 Target security breach was published. The article states that Judge Paul A. Magnuson of the Minnesota District Court has ruled that Target was negligent in the massive 2013 holiday shopping season data breach. As such, banks and other financial institutions can pursue compensation via class-action lawsuits. Judge Magnuson said, “Although the third-party hackers’ activities caused harm, Target played a key role in allowing the harm to occur.” I have provided a link to the article at the end of this message.

Clearly, this is really bad news for Target. This legal ruling may have a chilling effect on other merchant and retail organizations.

I don’t want you to experience what Target is going through. I changed jobs at the beginning of 2014 to help as many organizations as possible avoid major breaches that could cause irreparable damage. If you have a security supplier and service provider that is helping you, great. If you fear that your organization may be in the news someday because you know your security is deficient (or you just don’t know), we can help in many ways.

I hope you have a joyous holiday season, and that you start 2015 without living in fear.

http://www.infosecurity-magazine.com/news/target-ruled-negligent-in-holiday/

Internal Network Access: We’re Doing It Wrong

A fundamental design flaw in network design and access management gives malware an open door into organizations.

Run the information technology clock back to the early 1980s, when universities and businesses began implementing local area networks. We connected ThinNet or ThickNet cabling to our servers and workstations and built the first local area networks, using a number of framing technologies – primarily Ethernet.

By design, Ethernet is a shared medium technology, which means that all stations on a local area network are able to communicate freely with one another. Whether devices called “hubs” were used, or if stations were strung together like Christmas tree lights, the result was the same: a completely open network with no access restrictions at the network level.

Fast forward a few years, when network switches began to replace hubs. Networks were a little more efficient, but the access model was unchanged – and remains so to this day. The bottom line:

Every workstation has the ability to communicate with every other workstation on all protocols.

This is wrong. Open internal networks go against the grain of the most important access control principle: deny access except when explicitly required. With today’s internal networks, there is no denial at all!

What I’m not talking about here is the junction between workstation networks and data center networks. Many organizations have introduced access control, primarily in the form of firewalls, and less often in the form of user-level authentication, so that internal data centers and other server networks are no longer a part of the open workstation network. That represents real progress, although many organizations have not yet made this step. But this is not the central point of this article, so let’s get back to it.

There are two reasons why today’s internal networks should not be wide open like most are now. The first reason is that it facilitates internal resource sharing. Most organizations have policy that prohibits individual workstations from being used to share resources with others. For instance, users can set up file shares and also share their directly-connected printers to other users. The main reason this is not a great idea is that these internal workstations contribute to the Shadow-IT problem by becoming non-sanctioned resources.

The second, and more serious, objection to open internal networks is that they facilitate the lateral movement of malware and intruders. For fifteen years or more, tens of thousands of organizations have been compromised by malware that self-propagates through internal networks. Worms such as Code Red, Nimda, Slammer, and Blaster scan internal networks to find other opportunities to infect internal systems. Attackers who successfully install RATs (remote access Trojans) on victim computers can scan local networks to enumerate internal hosts and select additional targets. Today’s internal networks are doing nothing to stop these techniques.

The model of wide-open access needs to be inverted, so that the following rules of network access are implemented:

  1. Workstations have no network access with each other.
  2. Workstations have access ONLY to servers and services as required.

This should be the new default; this precisely follows the access control principle of deny all except that which is specifically required.

Twenty years ago, this would have meant that all workstation traffic would need to traverse firewalls making pass or no-pass decisions. However, in my opinion, network switches themselves are the right place to enforce this type of access control.
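To illustrate the inverted model, here is a minimal sketch in Python (the address ranges and allow-list entries are illustrative assumptions) of the per-flow decision a switch or other enforcement point would make under rules 1 and 2:

    import ipaddress

    WORKSTATIONS = ipaddress.ip_network("10.20.0.0/16")  # example workstation range

    # Rule 2: workstations may reach ONLY explicitly required servers/services.
    ALLOWED_SERVICES = {
        (ipaddress.ip_network("10.50.1.10/32"), 443),  # e.g., intranet web server
        (ipaddress.ip_network("10.50.1.20/32"), 445),  # e.g., sanctioned file server
    }

    def permit(src: str, dst: str, dport: int) -> bool:
        s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
        # Rule 1: no workstation-to-workstation traffic, on any protocol.
        if s in WORKSTATIONS and d in WORKSTATIONS:
            return False
        # Default deny: permit only flows on the explicit allow-list.
        return any(d in net and dport == port for net, port in ALLOWED_SERVICES)

    print(permit("10.20.3.7", "10.20.9.9", 445))    # False: lateral movement blocked
    print(permit("10.20.3.7", "10.50.1.10", 443))   # True: explicitly allowed service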

Recovery Capacity Objective: a new metric for BCP / DRP

Business continuity and disaster recovery planning professionals rely on well-known metrics that are used to drive planning of emergency operations procedures and continuity of operations procedures. These metrics are:

  • Maximum Tolerable Downtime (MTD) – this is an arbitrary time value that represents the greatest period of time that an organization is able to tolerate the outage of a critical process or system without sustaining permanent damage to the organization’s ongoing viability. The units of measure are typically days, but can be smaller (hours, minutes) or larger (weeks, months).
  • Recovery Point Objective (RPO) – this is a time value that represents the maximum potential data loss in a disaster situation. For example, if an organization backs up data for a key business process once per day, the RPO would be 24 hours. This should not be confused with recovery time objective.
  • Recovery Time Objective (RTO) – this is a time value that represents the maximum period of time that a business process or system would be incapacitated in the event of a disaster.  This is largely independent of recovery point objective, which is dependent on facilities that replicate key business data to another location, preserving it in case the primary location suffers a disaster that damages business data.
  • Recovery Consistency Objective (RCO) – expressed as a percentage, this represents the maximum loss of data consistency during a disaster. In complex, distributed systems, it may not be possible to perfectly synchronize all business records. When a disaster occurs, often there is some inconsistency found on a recovery site where some data is “fresher” than other data. Different organizations and industries will have varying tolerances for data consistency in a disaster situation.

In my research on the topic of business continuity planning and disaster recovery planning, I have not come across a standard metric that represents the capacity of a recovery system to process business transactions, as compared to the primary system. Yet in professional dealings I have encountered this topic many times.

I propose a new metric to establish and communicate a recovery objective that represents the capacity of a recovery system:

  • Recovery Capacity Objective (RCapO) – expressed as a percentage, this represents the capacity of a recovery process or system as compared to the primary process or system.
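As a simple worked example (all figures here are hypothetical), RCapO is simply the recovery system’s throughput expressed as a percentage of the primary system’s throughput, which can then be compared against a stated objective:

    # Hypothetical figures: the primary site processes 2,000 transactions per
    # minute; the recovery site is sized for 900.
    primary_tpm = 2000
    recovery_tpm = 900

    rcapo = recovery_tpm / primary_tpm * 100
    print(f"Recovery capacity: {rcapo:.0f}% of primary")  # 45%

    target_rcapo = 60  # example stated objective, in percent
    if rcapo < target_rcapo:
        print("Recovery capacity falls short of the stated RCapO")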

Arguments for this metric:

  • Awareness. The question of recovery system capacity is not consistently addressed within an organization or to the users of a process or system.
  • Consistency. A standard metric for recovery system capacity will allow organizations and industries to address and compare it uniformly.
  • Planning. The users of a process or system can reasonably anticipate business conditions should a business process or system suffer a disaster that results in the implementation of emergency response procedures.

Why wait for a security breach to improve security?

Neiman Marcus is the victim of a security breach. Neiman Marcus provided a statement to journalist Brian Krebs:

Neiman Marcus was informed by our credit card processor in mid-December of potentially unauthorised payment card activity that occurred following customer purchases at our Neiman Marcus Group stores.

We informed federal law enforcement agencies and are working actively with the U.S. Secret Service, the payment brands, our credit card processor, a leading investigations, intelligence and risk management firm, and a leading forensic firm to investigate the situation. On January 1st, the forensics firm discovered evidence that the company was the victim of a criminal cyber-security intrusion and that some customers’ cards were possibly compromised as a result.

We have begun to contain the intrusion and have taken significant steps to further enhance information security.

The security of our customers’ information is always a priority and we sincerely regret any inconvenience. We are taking steps, where possible, to notify customers whose cards we know were used fraudulently after making a purchase at our store.

I want to focus on one of Neiman Marcus’ statements:

We have … taken significant steps to further enhance information security.

Why do companies wait for a disaster to occur before making improvements that could have prevented the incident – saving the organization and its customers untold hours of lost productivity? Had Neiman Marcus taken these steps earlier,  the breach might not have occurred.  Or so we think.

Why do organizations wait until a security incident occurs before taking more aggressive steps to protect information?

  1. They don’t think it will happen to them. Often, an organization eyes a peer that suffered a breach and thinks, their security and operations are sloppy and they had it coming. But alas, those in an organization who think their security and operations are not sloppy are probably not familiar with their security and operations. In most organizations, security and systems are just barely good enough to get by. That’s human nature.
  2. Security costs too much. To them I say, “If you think prevention is expensive, have you priced incident response lately?”
  3. We’ll fix things later. Sure – only if someone is holding it over your head (like a payment processor pushing a merchant or service provider towards PCI compliance). That particular form of “later” never comes. Kicking the can down the road doesn’t solve the problem.

It is human nature to believe that another’s misfortune can’t happen to us. Until it does.

Why there will always be security breaches

At the time of this writing, the Target breach is in the news, and its reported magnitude has jumped from 40 million to as high as 110 million affected customers.

More recently, we’re now hearing about a breach of Neiman Marcus.

Of course, another retailer will be the next victim.  It is not so important to know who that will be, but why.

Retailers are like herds of gazelles on the African plain, and cybercriminals are the lions who devour them.

As lions stalk their prey, sometimes they choose their victim early and target them. At other times, lions run into the herd and find a target of opportunity: one that is a little slower than the rest, or one that makes a mistake and becomes more vulnerable. The slow, sick ones are easy targets, but the healthy, fatter ones are more rewarding targets.

As long as there are lions and gazelles, there will always be victims.

As long as there are retailers that store, process, or transmit valuable data, there will always be cybercriminals that attempt to steal that data.

Protect your Black Friday and Cyber Monday shopping with a quick PC tune-up

Before embarking on online shopping trips, it’s worth the few minutes required to make sure your computer does not enable the theft of your identity.

Tens of thousands of people will have their identities stolen in the next few weeks because malware helped steal valuable information such as credit card numbers, online user IDs, and passwords. A few minutes of work will go a long way toward preventing this.

That, or you can do nothing, and potentially have to take days off of work to cancel credit cards, write letters, get credit monitoring, and get back to where you are right now with perhaps forty hours’ work.

It’s up to you.

Ready?

1. On your PC, connect to http://update.microsoft.com/. Go through the steps required to check that all necessary security patches are installed.

Note: If you are able to connect to Internet sites but are unable to successfully install updates at update.microsoft.com, your PC may already be compromised. If so, it is important that you seek professional help immediately to rid your computer of malware. Delays may be very costly in the long run.

2. To eliminate the need to periodically visit update.microsoft.com, confirm that Automatic Updates are properly set. Use one of the following links for detailed instructions (all are Microsoft articles that open in a new window):

Windows XP | Windows Vista | Windows 7 | Windows 8 (automatic updates are turned on by default)

Note: If you are unable to successfully turn on Automatic Updates, your PC may already be compromised. If so, it is important that you seek professional help immediately to rid your computer of malware. Delays may be very costly in the long run.

3. Ensure that your PC has working anti-virus software. If you know how to find it, make sure that it has downloaded updates in the last few days. Try doing an update now – your anti-virus software should be able to successfully connect and check for new updates. If your Internet connection is working but your anti-virus software is unable to check for updates, it is likely that your PC is already compromised.

Note: if any of the following conditions are true, it is important that you seek professional help immediately to make sure your computer is protected from malware.

a. You cannot find your anti-virus program

b. Your anti-virus program cannot successfully check for updates

c. Your anti-virus program does not seem to be working properly

Several free anti-virus programs are worthy of consideration: AVG, Avast, ZoneAlarm Free Antivirus + Firewall, and Panda Cloud Anti-Virus. I cannot stress enough the need for every PC user to have a healthy, working, properly configured anti-virus program on their computer at all times.

LinkedIn’s “Intro” So Toxic It Could Dramatically Change BYOD

LinkedIn’s new “Intro” iOS app directs all e-mail sent or received on an iOS device through LinkedIn’s servers.

Yes, you’ve got that right.

Even so-called “secure” e-mail.

Even corporate e-mail.

Has LinkedIn been acquired by the NSA?  Sorry, bad joke, poor taste – but I couldn’t resist. It crossed my mind.

BYOD implications

So what’s this to do with BYOD?

Many organizations are still sitting on the sidelines with regard to BYOD. They are passively permitting their employees to use iOS devices (and Android and Windows phones, too) to send and receive corporate e-mail, mostly on unmanaged, personally owned devices. This means that organizations that presently permit their employees to send and receive e-mail on personally owned iOS devices are at risk of all of that e-mail being read (and retained) by LinkedIn, via every employee who downloads and installs the LinkedIn “Intro” app.

LinkedIn talks about this as “doing the impossible.”  I’d prefer to call it “doing the unthinkable.”

Organizations without MDM (mobile device management) are, for the most part, powerless to prevent this.

Every cloud has a silver lining.

This move by LinkedIn may finally get a lot of organizations off the fence in terms of BYOD, but employees might not be happy. Organizations’ legal departments are going to be having aneurysms right and left when they learn about this, and they may insist that corporate IT establish immediate control over personally owned mobile devices to block the LinkedIn Intro app.

Corporate legal departments usually get their way on important legal matters. This is one of those situations. When Legal realizes that LinkedIn Intro could destroy attorney-client privilege, Legal may march straight to the CIO and demand immediate cessation. That is, once you peel the Legal team off the ceiling.

Nothing like a crisis and reckless abandon by a formerly trusted service provider to get people moving.

This article does a good job of explaining the evils of LinkedIn Intro.

My respect for LinkedIn could not be lower if they had publicly admitted they were sending your content to the government.

Invisible Adversaries

“On the Internet, adversaries can’t be seen. Everyone knows they’re out there because the airwaves are full of news about new breaches and break-ins. There are adversaries and victims, but connecting the dots in between is often difficult. Adversaries are willing to take great risks because catching them, or even knowing who they are, is difficult.”

— excerpt from an upcoming book on zero-day threats

Challenges to endpoint security

Managing security on endpoint systems, even in mature organizations, is especially difficult, in part because the shape of the attack surface is different on every individual endpoint. Users often have the ability to change at least some of their endpoints’ security settings, as well as install software programs and browser plug-ins. The only safe conclusion that a security manager can arrive at is that many endpoints in their organizations are easily exploitable.

— excerpt from an upcoming book on stopping zero-day exploits

A perspective on endpoint security

I’m preparing a webinar on endpoint security and was thinking about the problem while on a flight to Chicago. Consider it this way: in an organization with 10,000 employees, you’ve got your servers in the data center, managed by IT. But you’ve also got 10,000 more machines with the same – or greater – complexity than those servers. These machines also have access to sensitive data and often store it themselves.

But there’s more. Those 10,000 machines are managed by non-technical people who have the same system-level privileges as trained and certified system engineers. And not only that, but those 10,000 machines are not in your data center but out of your physical control, often operating in external environments away from really important network security controls such as firewalls, data leakage prevention, command & control detection, and intrusion prevention systems.  These are our endpoint systems. Is it any wonder we are living in the era of colossal security breaches?

This is insanity, but there is a way out.

New Christmas computer, part 3: data backup

You’re a few days into your new computer and you are getting used to how it works. You’ve probably figured out where all of the controls are located, so by now you’ve been able to personalize it (wallpaper, colors, mouse/touchpad behavior, and so on) and make it “yours.”

If you save data locally on your computer – whether it’s photos, documents, or data belonging to a local application – the longer you keep that data only on your computer, the greater your risk of losing it should something go wrong.

There are a lot of ways to lose your data, and if your data is at all important to you, then you should sit up and pay attention. Sooner or later, you’ll find some or all of your data has gone missing. A few of the ways this can happen are:

  • Stolen computer
  • Hard drive failure (this can happen to SSDs too – you’re not immune)
  • Operating system or application malfunction
  • User error

Rather than describe specific tools or services, I’ll describe the general methods that can be used to back up your data, as well as some principles that will help you make good decisions about how to back up your data.

Backing up your data means simply this: making a copy of it, from time to time, for safekeeping, in case something goes wrong.

Methods

Available methods for backing up your data include:

  • Copying to a thumb drive, external hard drive, or a CD or DVD (see the sketch after this list)
  • Copying over your network to another home (or office) computer
  • Time Machine, if you use a Mac
  • Copying to a cloud-based storage provider such as Mozy, Box.net, Dropbox, or iBackup.
  • Copying to backup media such as magnetic tape
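The first method in the list above is the easiest to automate. As a minimal sketch (the folder and drive names are placeholders for your own), a timestamped copy to an external drive looks like this in Python:

    import shutil
    from datetime import datetime
    from pathlib import Path

    # Placeholder paths: your documents folder and a mounted external drive.
    source = Path.home() / "Documents"
    dest = Path("/Volumes/BackupDrive") / f"Documents-{datetime.now():%Y-%m-%d}"

    shutil.copytree(source, dest)  # makes a dated copy you can restore from later
    print(f"Backed up {source} to {dest}")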

Location

Let’s talk about the location of your backup data. This matters a lot, and you have some choices to make here. The two main choices for location are:

  • Near you and your computer. When you keep your backup data close by, you will be able to conveniently recover any data that you might lose for most any reason: accidental deletion, updates you wish you didn’t make, hardware failures in your computer, or software bugs that helped to corrupt your files. Do be aware, though, that if you only keep your backup data near your computer, then certain events such as fires, floods, or theft may result in your computer and your backup data being lost. For this reason you will want to consider keeping your backup data away from you and your computer.
  • Far away from you and your computer. When you keep your backup data far away from you, it may be slightly less convenient to recover data, but the main advantage is that most types of disasters that may befall you (fire, flood, theft) are not likely to affect both your original data and your backup data.

One thing to keep in mind is that you don’t have to make an either-or decision about where to keep your backup data. There is no reason why you cannot do both: keep backup data nearby, and keep backup data far away. This will help to protect you from events like accidental erasure as well as disasters like fires or floods.

Out of your control

One important matter to keep in mind is this: if you are considering any of those cloud-based data storage services, then you have to understand the risk of another type of data loss: a compromise of the security used by the storage service, which could lead to your data being exposed to others. Many cloud-based data storage services describe mechanisms such as encryption that they use to protect customer data. But what may seem like solid protection may instead be only window dressing. Depending on the sensitivity of the data you are considering keeping with a cloud-based storage service, you might consider encrypting it yourself before copying it to the storage provider, or you might consider using a different storage provider. Unless you are an expert in data security, you may want to consult with a security expert who can better evaluate the effectiveness of a storage provider’s claims of safety and security.
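For readers comfortable with a little scripting, here is a minimal sketch of do-it-yourself encryption before upload, using Python’s cryptography package. The file names are examples, and real use requires careful key management, since losing the key means losing the data:

    from cryptography.fernet import Fernet  # pip install cryptography

    # Generate a key ONCE and store it safely, apart from the backup itself;
    # anyone holding the key can read your data, and without it neither can you.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    with open("tax-records.pdf", "rb") as f:        # example file to protect
        ciphertext = fernet.encrypt(f.read())

    with open("tax-records.pdf.enc", "wb") as f:    # upload this file instead
        f.write(ciphertext)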

My own methods

Being Mac users, we use Time Machine for automatic incremental backups that occur on an hourly, daily, and weekly basis. I rotate two different external hard drives for my Time Machine backups, and usually keep the other hard drive in a safe or take it to work. I also use an Internet based data backup service and regularly back up my most important current information to the service (such as book manuscripts I am currently working on).

Part 2: anti-virus

New Christmas computer, part 2: anti-virus

You are savoring your new PC and visiting your usual haunts: Facebook, Netflix, Hulu, and more.

But if this new PC does not have anti-virus, a firewall, and other precautions, the glitter will soon be gone, and you’ll be left wondering whether the problems you’re having in 2013 are related to that new PC.

New machines are a good time to develop new habits. Sure, there’s a little trouble now, but you’ll save hours of grief later. Think of it as the moment required to fasten the seat belt in your car – a bit of discomfort compared to the pain and expense of injuries incurred in even a minor crash if you weren’t wearing it. Minor decisions now can have major consequences later.

Habit #2: Install and configure anti-virus

While many new computers come with anti-virus software, often it’s a limited “trial” version from one of the popular brands such as Symantec, McAfee, or Trend Micro. If you don’t mind shelling out $40 or more for a year (or more) of anti-virus protection, go ahead and do so now before you forget. Granted, most of these trial versions are aggressively “in your face” about converting your trial version into a full purchased version.  Caution: if you get into the habit of dismissing the “your trial version is about to run out!” messages, you run the risk of turning a blind eye when your trial anti-virus is no longer protecting you.  Better do it now!

If your computer did not come with anti-virus software, I suggest you make that the first order of business. There are many reputable brands of anti-virus available today, online or from computer and electronics stores. For basic protection against viruses (and Trojans, worms, key loggers, etc.), all of the main brands of anti-virus are very similar.

My personal preferences for anti-virus programs (in order) are:

  1. Kaspersky
  2. Sophos
  3. AVG
  4. Norton
  5. McAfee
  6. Panda
  7. Trend Micro

Note: if selecting, installing, and configuring anti-virus seems to be beyond your ability, consult with the store where you purchased your computer, or contact a trusted advisor who is knowledgeable on the topic.

Key configuration points when using anti-virus:

  • “Real time” scanning – the anti-virus program examines activity on your computer continuously and blocks any malware that attempts to install itself.
  • Signature updates – the anti-virus program should check at least once each day for new updates, to block the latest viruses from infecting your computer.
  • Periodic whole disk scans – it is a good idea to scan your hard drive at least once a week. If you keep your computer on all the time, schedule the scan to take place when you are not using the computer, as a scan can slow down your computer.
  • Safe Internet usage – many anti-virus programs contain a feature that will try to warn you or steer you away from sites that are known to be harmful.

Many anti-virus programs also come with a firewall and other tools. Some of these may be useful as well – consult your computer retailer or a trusted advisor to see what’s right for you.

Part 1: password security

Part 3: data backup