Category Archives: Quotes and Excerpts

Trust, But Verify

Doveryay, no proveryay is the Russian phrase behind “Trust, But Verify.” I often heard it (in English) spoken by, and about, President Ronald Reagan in the 1980s, referring to U.S.–Soviet nuclear disarmament treaties. That Reagan turned this rhyming Russian phrase back on the Russians was probably lost on most Americans. It certainly was on me.

In the cybersecurity, privacy, and information systems audit industries, we use this phrase often to convey the need for quality.

I say “quality” here for a reason: security and privacy are really business quality issues. Security- and privacy-related defects in business processes and information systems are quality defects.

Trust, but verify appears in the opening paragraph of Chapter 3 of CIPM Certified Information Privacy Manager All-In-One Exam Guide, to be published in May 2021. The draft manuscript is complete; my colleague, J Clark, has completed his technical review. What’s left is copy editing (about half done), page layout (not started), and proofing (not started). Lots of steps. The excerpt:

“Trust but verify is a Russian proverb that is commonly used by privacy and cybersecurity industry professionals. The complexity of information processing and management, which includes layers of underlying business processes and information systems, invites seemingly minor changes that can bring disastrous consequences.”

Controlling Access to Information and Functions

Computer systems, databases, and storage and retrieval systems contain information that has some monetary or intrinsic value. After all, the organization that acquired and set up the system has expended valuable resources to establish and operate it. Having undergone this effort, one would think that the organization would wish to control who can access the information that it has collected and stored.

Access controls are used to control access to information and functions. In simplistic terms, the steps undertaken are something like this:

  1. Reliably identify the subject (e.g., the person, program, or system)
  2. Find out what object (e.g., information or function) the subject wishes to access
  3. Determine whether the subject is allowed to access the object
  4. Permit (or deny) the subject’s access to the object
  5. Repeat
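As a rough illustration, the five steps above can be sketched in a few lines of Python, using a hypothetical in-memory rule table (the subject and object names are invented for the example):

```python
# Hypothetical in-memory rule table; subjects and objects are invented.
ACCESS_RULES = {
    ("alice", "payroll_db"): "permit",
    ("bob",   "payroll_db"): "deny",
}

def check_access(subject, obj):
    # Steps 1-3: the subject and object are identified, then the rule
    # table decides; unknown subject/object pairs default to deny.
    decision = ACCESS_RULES.get((subject, obj), "deny")
    # Step 4: permit or deny.
    return decision == "permit"

# Step 5 is simply running the check again for the next request.
print(check_access("alice", "payroll_db"))    # True
print(check_access("mallory", "payroll_db"))  # False
```

A real access control system would of course authenticate the subject first; the default-deny behavior for unknown pairs is the one design choice worth copying from even this toy version.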

The actual practice of access control is far more complex than these five steps. This is due primarily to the high-speed, automated, complex, and distributed nature of information systems. Even in simple environments, information often exists in many forms and locations, and yet these systems must somehow interact and quickly retrieve and render the desired information, without violating any access rules that are in place. These same systems must also be able to quickly distinguish “friendly” accesses from hostile and unfriendly attempts to access—or even alter—this same information.

The success of an access control system is completely dependent upon the effectiveness of the business processes that support it. User access provisioning, review, and revocation are key activities that ensure only authorized persons may have access to information and functions.

— excerpt from an upcoming textbook on information systems security

Demystifying UTM and NGF

You may be here to understand the difference between Unified Threat Management (UTM) and Next-Generation Firewalls (NGF).

Here’s the punch line: there really isn’t a difference. UTM and NGF are two marketing terms developed to label the advance of products designed to provide various protective capabilities. The two terms do represent somewhat different points of view; let me explain.

UTM represents products that began to combine previously separate capabilities like anti-virus, anti-spam, web filtering, and so on. This was an answer to the fragmentation of discrete products, each with its own small task.

NGF represents firewall manufacturers who began to realize that they needed to incorporate many other types of threat-prevention capabilities into their firewalls, such as (you guessed it) anti-virus, anti-spam, web filtering, and so on.

UTM and NGF were different a few years ago, but as product makers from both ends filled in functionality, they met in a common middle where there’s no longer any practical difference.

  • sidebar from an upcoming book. Copyright (C) 2012 someone.

Threats

Threats.

Not just hypothetical ideas, but real: spam, malware, botnets, hackers, and organized crime. They want to own your systems, steal your data, and use your systems to attack tomorrow’s victims.

A generation ago, firewalls were enough for this. Today, on their own, they hardly make a difference. Instead, a plethora of defenses is needed to repel the variety of attacks bombarding every corporate network more rapidly than the frenzied spattering of a Geiger counter next to a Chernobyl souvenir.

  • excerpt from an upcoming book (someone owns the copyright, but I can’t tell you who)

Classification of data center reliability

The Telecommunications Industry Association (TIA) released the TIA-942 Telecommunications Infrastructure Standards for Data Centers standard in 2005. The standard describes various aspects of data center design, including reliability. The standard describes four levels of reliability:

  • Tier I – Basic Reliability. Power and cooling distribution are in a single path. There may or may not be a raised floor, UPS, or generator. All maintenance requires downtime.
  • Tier II – Redundant Components. Power is in a single path; there may be redundant components for cooling. Includes raised floor, UPS, and generator. Most maintenance requires downtime.
  • Tier III – Concurrently Maintainable. Includes multiple power and cooling paths, but with only one path active. Includes sufficient capacity to carry the power and cooling load on one path while performing maintenance on the other path. Includes raised floor, UPS, and generator.
  • Tier IV – Fault Tolerant. Includes multiple active power and cooling distribution paths. Includes redundant components, including UPS and generator. Includes raised floor.
Excerpt from CISA All-In-One Study Guide, 2nd edition

Taking a Wider View of Application Security


As a software developer, you have a lot to worry about when writing and testing your code. But if you faithfully use secure coding guidelines from the Open Web Application Security Project (OWASP), test your code with security tools, and conduct peer code reviews, then your application will be secure, giving you worry-free sleep at night.

Wrong.

OK, sorry about that. I put that trap there for you, but I didn’t really expect you to step into it. I want to help you expand your thinking about application security.

Read rest of article here (redirects to softwaremag.com)

Personal integrity

A thought on personal integrity, from my business manager and wife, Rebekah Gregory:

“Simply recognizing a problem does not qualify us to fix it. We should strive for personal integrity rather than cashing in on what is broken. In this way we cultivate genuine trust and become dependable leaders.”

Implementation of audit recommendations

The purpose of internal and external audits is to identify potential opportunities for making improvements in control objectives and control activities. The handoff point between the completion of the audit and the auditee’s assumption of control is the portion of the audit report that contains findings and recommendations. These recommendations are the actions that the auditor believes the auditee should perform to improve the control environment.

Implementation of audit recommendations is the responsibility of the auditee. However, there is some sense of shared responsibility with the auditor, as the auditor seeks to understand the auditee’s business so that the auditor can develop recommendations that can reasonably be undertaken and completed. In a productive auditor-auditee relationship, the auditor will develop recommendations using the fullest possible understanding of the auditee’s business environment, capabilities, and limitations, in essence saying, “here are my recommendations to you for reducing risk and improving controls.” And the auditee, having worked with the auditor to understand the auditor’s methodology and conclusions, and having been understood in turn, will accept the recommendations and take full responsibility for them, in essence saying, “I accept your recommendations and will implement them.” This is the spirit and intent of the auditor-auditee partnership.

– from CISA Certified Information Systems Auditor All-In-One Study Guide – the last words written into the draft manuscript, completed a few hours after the last of the Fourth of July fireworks had burst in the night sky

Residual risk

Residual risk is like the dirt on the floor that cannot be picked up by the broom and dustpan. Rather than pursue residual risk down to the last iota, it is swept aside and will probably not be noticed.

– From CISA Certified Information Systems Auditor All-In-One Study Guide

Are more federal cybersecurity laws needed?


Someone I know recently sent me a Washington Post article about some proposed U.S. federal regulations on cybersecurity. The article was an attempt at fear-mongering over privacy concerns. As a cybersecurity professional and author of twenty books on cybersecurity and the technology of data communications, I’m qualified to comment on this article.

Federal regulation on cybersecurity is LONG overdue. Today, almost all of the 50 states have enacted cybersecurity laws, each different, most designed to protect the privacy of citizen data, and none of these state laws go nearly far enough to deal with the blatant irresponsibility of many private corporations in protecting citizens’ data. Security breaches (such as the recent Heartland heist of ONE HUNDRED MILLION credit card numbers) continue to occur, in part, because private corporations are not doing enough to protect OUR DATA.

My most recent book on cybersecurity, which is to be published in May, opens in this way:

“If the Internet were a city street, I would not travel it in daylight,” laments a chief information security officer for a prestigious university.

The Internet is critical infrastructure for the world’s commerce. Cybercrime is escalating: once the domain of hackers and script kiddies, it is now the business of cyber-gangs and organized criminal organizations, which have discovered opportunities for extortion, embezzlement, and fraud whose proceeds now surpass income from illegal drug trafficking. Criminals are going for the gold: the information held in information systems that can often be accessed anonymously from the Internet.

The information security industry is barely able to keep up. Cybercriminals and hackers always seem to be one step ahead, and new threats and vulnerabilities crop up at a rate that often exceeds our ability to continue protecting our most vital information and systems. Like other sectors in IT, security planners, analysts, engineers, and operators are expected to do more with less. Cybercriminals have never had it so good.

There are not enough good security professionals to go around. As a profession, information security in all its forms is relatively new. Fifty years ago there were perhaps a dozen information security professionals, and their jobs consisted primarily of making sure the doors were locked and that keys were issued only to personnel who had an established need for access. Today, whole sectors of commerce are doing virtually all of their business online, and other critical infrastructures such as public utilities are controlled online via the Internet. It’s hard to find something that’s not online these days. The rate of growth in the information security profession is falling way behind the rate of growth of critical information and infrastructures going online. This is making it all the more critical for today’s and tomorrow’s information security professionals to have a good understanding of the vast array of principles, practices, technologies, and tactics that are required to protect an organization’s assets.

What I have not mentioned in the book’s opening pages is that cybersecurity laws are inadequate. The security incidents of the recent past (and the short-term future, I fear) are so severe that they may pose a far greater threat to our economy than the worldwide recession.

Case in point: considerable intelligence suggests that the likely culprit for the great Northeast Blackout of 2003 was not electric power system malfunctions but computer hackers sponsored by the People’s Republic of China. I have read some of the intelligence reports myself, and they are highly credible. You can read a lengthy article in the National Journal about the outage here. A few years ago, I attended a confidential briefing by the U.S. Office of Naval Intelligence on state-sponsored Chinese hackers. The briefing described many cyberterrorism activities in detail that I cannot repeat here. I believe that the capabilities of those groups are probably far greater today than they were at the time of the briefing. These groups’ efforts have been so successful partly because U.S. private companies are not required to adequately secure their networks; they are not even required to disclose whether security incidents have occurred (except as required by a patchwork of U.S. state laws).

I do not know whether the specific legislation discussed in the Washington Post article is an attempt to federalize the laws present in many U.S. states, or whether this legislation has a different purpose.

Security standards that are enforceable by the rule of law are badly needed. No, they will not solve all of our cybersecurity problems overnight, but if crafted correctly they can be an important first step. Today we have good standards, but no private company is required to follow them. The result is lax security that leads to the epidemic of cybersecurity incidents, many of which you never hear about.

Security basics: definitions of threat, attack, and vulnerability

Often the terms threat, attack, and vulnerability are interchanged and misused. Each is defined here.

Definition of threat: the expressed potential for the occurrence of a harmful event such as an attack.

Definition of attack: an action taken against a target with the intention of doing harm.

Definition of vulnerability: a weakness that makes targets susceptible to an attack.

Excerpt from CISSP Guide to Security Essentials, chapter 10

Vulnerabilities, threats, and risk in a chess metaphor


Even for security professionals it’s sometimes tricky to properly think about the terms vulnerability, threat, risk, attack, and exploit.  It can be harder yet to describe these concepts to someone who is not a security professional.

In this excerpt from our upcoming book, Biometrics for Dummies, we explain these terms within the metaphor of a game of chess:

“Before we go any further, let’s look at the meaning of the terms threat, vulnerability and risk. Over the years we’ve found these terms to be used interchangeably and incorrectly. As with any industry jargon, these terms are tossed around and used by people who do fully understand their meaning, and by those who think they do — but don’t really.

* Vulnerability: a weakness in a system that may permit an attacker to compromise it.
* Threat: a potential activity that would, if it occurred, harm a system.
* Risk: the potential negative impact if a harmful event were to occur.

The terms vulnerability, threat, and risk can be visualized like this: Imagine a game of chess, where one player has a very weak position, and the other player has a very strong position. The player with the weak position is unable to protect his king — this is a vulnerability. The weak player’s king is vulnerable to attack – a position of high risk. The strong player has powerful pieces (such as a queen, bishops, and rooks) that are in low risk positions to easily capture the weak player’s king — this is a threat.

And while we’re at it, there are some other words we should discuss:

* Attack: the act of carrying out a threat with the intention of harming a system.
* Exploit (verb): the act of carrying out a threat against a specific vulnerability.
* Exploit (noun): a program, tool, or technique that can be used to attack a system.

Using the chess analogy again, the strong player could attack the weak player, exploiting his vulnerability to capture his king. The strong player’s method of attack would be known as his exploit against the weak, high-risk player.”

From Biometrics for Dummies

Shortage of qualified security professionals continues


Security is a topic of great interest to IT professionals, business management, and the general public.  The wide proliferation of private information among organizations in the 1980s led to public outcry and the passage of privacy laws.  The explosion of e-commerce in the 1990s resulted in the theft of hundreds of millions of credit card numbers in thousands of security incidents that continue to this day.  Identity theft has also skyrocketed, largely because many organizations collect and store personal information and do not adequately protect it.  Many countries have passed additional data security laws intended to tighten up security and also require the disclosure of security breaches.  Things have only marginally improved since then.

These developments have led to a severe shortage of qualified information and business security professionals who are able to properly apply security controls required by applicable laws and regulations.  These professionals also need to be able to seek, identify, and mitigate other risks that could negatively affect organizations that collect and use sensitive information.

Introduction to an upcoming academic textbook on business and computer security

The security professional can only protect something that he fully understands.

Excerpt from a book to be published in 2009


Top ten security concepts that business managers need to know


Whether they know it or not, business managers are responsible for information systems, functions, and data within their span of responsibility. In order to effectively manage these assets, business managers need to be familiar with several concepts. Familiarity with these concepts will help business managers better understand the implications of their decisions regarding their employers’ assets.

Some material excerpted from Biometrics For Dummies by Peter Gregory and Mike Simon.

Defense in depth

This concept specifies that two or more controls, ideally of different types, work in combination to protect assets. Each control provides some type of protection by itself, and together they offer greater protection. Examples of defense in depth include:

  • Castle. The ancients understood defense in depth and got it right. A hoard of treasure or a beautiful princess is hidden in the innermost chambers of a castle that is protected by a moat, a moat monster (or possibly just a deterrent control in the form of a “Beware of moat monster” sign), a drawbridge, turrets for archers, high walls that are difficult to climb, inner courtyards with more gates and turrets, hostile terrain, and so forth.
  • E-commerce data. An online merchant protects its valuable transaction data with firewalls, routers with ACLs, intrusion detection systems, system-level access controls, database-level access controls, acceptable use policies, audit logging, and encryption. Notice that some of these controls are preventive, while others are detective, deterrent, and administrative.
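One way to picture defense in depth in code is a chain of independent checks that a request must pass in sequence; the sketch below is purely illustrative (the layer rules, IP addresses, and request fields are all invented):

```python
# Each layer is an independent control; a request is allowed only if
# every layer permits it, so one failed or bypassed control does not
# by itself expose the asset.
LAYERS = [
    lambda r: r["src_ip"] not in {"203.0.113.9"},  # firewall / router ACL
    lambda r: r["authenticated"],                   # system-level access control
    lambda r: r["user"] in {"svc_orders"},          # database-level access control
]

def allowed(request):
    return all(layer(request) for layer in LAYERS)

print(allowed({"src_ip": "198.51.100.7", "authenticated": True, "user": "svc_orders"}))  # True
print(allowed({"src_ip": "203.0.113.9", "authenticated": True, "user": "svc_orders"}))   # False
```

Note that the blocked request fails at the outermost layer even though it would have passed the inner two; each layer covers gaps in the others.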

Least privilege

This concept states that personnel are to have only the permissions and authority that they require in order to perform their stated functions, and no more.

Least privilege applies equally to systems: programs, processes, and other objects should have access to only the data and other systems required. For example, applications should never be run as root or Administrator.
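A small, hedged illustration of the same idea at the file level: a task that only needs to read a file should open it read-only, so that any accidental write fails immediately (the file path and contents here are invented):

```python
# Least privilege applied to file access: request only the access mode
# the task needs.
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "report.txt")
with open(path, "w") as f:  # setup only: create the example file
    f.write("quarterly figures")

with open(path, "r") as f:  # least privilege: read access only
    data = f.read()
    try:
        f.write("oops")     # a write the task should never perform
        blocked = False
    except io.UnsupportedOperation:
        blocked = True      # the narrow access mode refused the write

print(data, blocked)
```

Had the file been opened read-write "just in case," the stray write would have silently corrupted the data instead of failing fast.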

Fail open / Fail closed

Sometimes security controls fail, and when they do, they fail in one of two ways:

  • Fail open. When a control fails open, this means that all events (authorized as well as unauthorized) are permitted. An example of a fail-open situation is the failure of a key-card or biometric-controlled door buzzer, resulting in a door that opens just by pushing on it. A fail-open situation puts protected assets at risk because they can be accessed by any party.
  • Fail closed. In a fail closed situation, all events are blocked, including those that should be allowed. An example of a fail-closed situation is a failure of a key-card or biometric reader that prevents everyone from going through a protected doorway. A fail-closed situation disrupts business operations by preventing subjects from being able to access business assets or information needed to complete tasks.
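The two failure modes can be sketched as a single policy decision; the badge-reader function below is hypothetical:

```python
# A control that fails open admits everyone when it breaks; one that
# fails closed blocks everyone. read_badge is a hypothetical reader.
def door_unlocks(read_badge, fail_mode):
    try:
        return read_badge()
    except Exception:
        # The reader is broken: the configured failure mode decides.
        return fail_mode == "open"

def broken_reader():
    raise RuntimeError("badge reader offline")

print(door_unlocks(broken_reader, "open"))    # True: door opens for anyone
print(door_unlocks(broken_reader, "closed"))  # False: nobody gets through
```

Which mode is right depends on what is being protected: fire exits must fail open for life safety, while a vault door should fail closed.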

Role based access control

In complex systems with many users, it can be tedious and time consuming to administer the access levels and privileges that individual users need. Many complex applications support the use of roles – predefined templates of permissions for common job descriptions. It works like this: an organization defines the permissions for each role, and then assigns personnel to a role based upon their job title.

When changing business conditions require that people in a certain role have different privileges, only the role is changed, which results in the permissions for everyone assigned to the role being changed likewise. This is a far more scalable solution than administering permissions for each user.
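A minimal sketch of role-based access control, with invented role and user names, shows why changing a role scales better than changing individual users:

```python
# Permissions attach to roles; users map to roles. All names invented.
ROLE_PERMISSIONS = {
    "accounts_payable": {"view_invoices", "pay_invoices"},
    "auditor": {"view_invoices"},
}
USER_ROLES = {"carol": "accounts_payable", "dave": "auditor"}

def can(user, permission):
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

# A business change: auditors now also need to export reports.
# One change to the role updates everyone assigned to it.
ROLE_PERMISSIONS["auditor"].add("export_reports")
print(can("dave", "export_reports"))   # True
print(can("carol", "export_reports"))  # False
```

With a thousand auditors, the same one-line role change would still suffice; per-user administration would require a thousand changes.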

Spoofing

Spoofing, in its most basic form, is the act of pretending to be something you’re not. There are many forms of spoofing, including:

  • Phishing. E-mails that pretend to originate from a financial institution or other organization and advise the recipient to “log in” to reset their credentials.
  • Spam. Often the vector for spreading malware (viruses, worms, and Trojan horses), spam messages often claim to be something they’re not.
  • Caller-ID spoofing. Several caller-ID spoofing services allow a caller to change their originating caller-ID number to any number they choose.

CIA

At the most basic level, biometric systems need to be protected in three ways: confidentiality, integrity, and availability. Security professionals use the term CIA to denote these terms. Let’s look at these concepts more closely:

  • Confidentiality. Information must be protected from viewing by unauthorized parties and systems. While in many cases the biometric system is serving as part of the means to protect information in the organization, the biometric-related information itself must be protected from onlookers.
  • Integrity. The integrity of biometric-related devices, systems, and information must be maintained. All of the components in a biometric system must be protected from unauthorized tampering. This includes the biometric devices themselves, as well as the systems and software that make it all work. Any unauthorized modifications to a biometric system may render it ineffective.
  • Availability. The biometric system must be available for use at all times. If some condition or event makes the biometric system unavailable, then the assets that the biometric system protects may themselves be unavailable when they are needed. This could result in disruptions to business operations.

Social engineering

Clever and resourceful intruders know that the easiest way to reach a target is by the path of least resistance. If an intruder is unable to successfully penetrate the technical defenses of a system or facility, he may instead rely on some unwitting employee to help the intruder gain access. Some examples of social engineering include:

  • Tailgating. If an intruder is unable to enter a facility on his own, he can pretend to be an employee who has lost his key card (or index finger or eye) and follow an employee through a secured door. It’s especially effective if the intruder is carrying some heavy object like a computer monitor or box of books – an employee is more apt to help the intruder into the building.
  • Remote access. A clever intruder can make a series of phone calls to various people inside an organization to get all of the pieces necessary to successfully log on to the corporate network. He can get the VPN URL from one employee, a user name from another, and a password reset from the helpdesk if it does not sufficiently validate the identity of the “user.”
  • Loading dock entry. Many reasonably secure facilities have a blind spot when it comes to the loading dock. A good social engineer in a brown shirt and pants can often just walk in the back door with nothing but a clipboard.
  • Road Apple. The attacker leaves removable media lying around somewhere that it will get picked up, say near the door to the lobby. A curious employee picks up the media, takes it inside and plugs it in, whereupon it autoexecutes a Trojan or virus, granting the attacker access. This is especially effective if the media is of some value like a USB stick or an SD card.
  • Dumpster diving. Intruders can go through an organization’s trash in the hopes of finding discarded printouts, memos, and documents that contain enough information to con their way into a system or facility.

Change management

The number one cause of business interruptions, outages, and downtime is not technology, but people – people who make changes without fully understanding the implications of the change. Change management is the formal process of vetting every proposed change in a system prior to making the change. The steps in a change management process typically are:

1. Proposed change. Someone requests a change be made to a system. This change could be something as simple as a configuration change or as complicated as a software or operating system upgrade. The requested change should include:

  a) Description of the change
  b) Business or operational justification for the change
  c) Who will perform the change
  d) When the change will be made
  e) Impact of not doing the change
  f) Risks associated with making the change
  g) Backout plan in case the change is unsuccessful
  h) Anticipated user impact (e.g., downtime while the change is made)
  i) Other systems affected by the change
  j) Test results (ideally from a test environment)

2. Change review. The proposed change is circulated for review among all of the formal participants in the change management process.

3. Change approval. Participants in the process discuss the proposed changes in order to identify any other risks or impacts. Then they can decide whether the change can take place as-planned.

4. Change wrap-up. After the change is made, final records are filed to document the successful implementation of the change.
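The fields listed in step 1 can be captured in a simple record structure; this sketch uses invented field names and is not drawn from any particular change-management product:

```python
# Illustrative change-request record; field names are invented.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    description: str
    justification: str
    performer: str
    scheduled: str
    impact_if_not_done: str = ""
    risks: str = ""
    backout_plan: str = ""
    user_impact: str = ""
    affected_systems: list = field(default_factory=list)
    test_results: str = ""
    status: str = "proposed"  # proposed -> reviewed -> approved -> wrapped up

cr = ChangeRequest(
    description="Upgrade database server to version 12",
    justification="security patches",
    performer="DBA team",
    scheduled="2021-06-01 02:00",
    backout_plan="restore from pre-change snapshot",
)
print(cr.status)  # proposed
```

The point is less the data structure than the discipline: every change carries its justification, risks, and backout plan before anyone touches the system.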

Access management

Organizations accomplish tasks and meet business objectives through people, who directly or indirectly access information systems and assets to manage or perform those tasks. To do so, people require access to the right functions and systems.

One of the leading causes of security breaches is the “inside job” – where someone with access to a system causes deliberate harm. Often this is done by persons who have left the organization but whose access rights are still active.

Access management is a formal discipline and process where all access requests to systems are managed. The process of requesting and granting access should follow these steps:

1. Formal request. An employee’s manager should make a formal access request that states explicitly what systems, functions, or information the employee should be able to access and perform.

2. Review. The system or information owner should review the request.

3. Approval. The system or information owner should approve the request if it is determined that the employee does require this access in order to perform his or her duties.

4. Fulfillment. The access administrator (a different person from the requestor, the manager, the approver, and the user) fulfills the access request.

5. Recordkeeping. The request, review, approval, and fulfillment are all recorded in an official log.

6. Audit. Periodically (every month to every six months), the access rights of all information systems must be reviewed to make sure that all persons who have access are still authorized to do so.

7. Termination. Whenever an employee leaves the organization, all access rights must be terminated within 24 hours – or sooner for highly sensitive applications and data.
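Steps 1 through 5 above can be sketched as a toy workflow; the names and record structure below are invented for illustration:

```python
# Toy access-request workflow; all names are invented.
requests_log = []  # step 5: recordkeeping

def request_access(manager, employee, system):  # step 1: formal request
    req = {"requestor": manager, "user": employee,
           "system": system, "status": "requested"}
    requests_log.append(req)
    return req

def approve(req, owner):                        # steps 2-3: review and approval
    req.update(status="approved", approver=owner)

def fulfill(req, admin):                        # step 4: fulfillment
    # The administrator must be distinct from requestor, approver, and user.
    assert admin not in (req["requestor"], req["user"], req.get("approver"))
    req.update(status="fulfilled", fulfilled_by=admin)

req = request_access("mgr_ann", "emp_bob", "billing")
approve(req, "owner_carla")
fulfill(req, "admin_dan")
print(req["status"])  # fulfilled
```

Because every request passes through the log from the moment it is made, the periodic audit in step 6 has a complete record to review against current access rights.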

The Green Mile

“One thought more than any other keeps me awake at nights… how much longer do I have? We each owe a death; there are no exceptions. But, O God, sometimes, the Green Mile seems so long.”

– from The Green Mile, by Stephen King


Software is data with a soul. Whereas data is inanimate, life can be breathed into software, and it will occupy a hardware body, controlling its movement and behavior.

Disaster recovery isn’t just for dummies


Disaster Recovery is not simply about Katrinas, earthquakes, or 9/11-scale catastrophes. Sometimes, the focus on these monumental events can deter even the most committed IT manager from tackling Disaster Recovery Planning. Disaster Recovery is really about the ability to maintain business as usual – or as close to ‘as usual’ as is feasible and justifiable – whatever gets thrown at IT.

Read entire review

Find out more about the book, IT Disaster Recovery Planning for Dummies

The purpose for a criticality analysis


When the Maximum Tolerable Downtime (MTD), Recovery Point Objectives (RPO), and Recovery Time Objectives (RTO) targets have been established for each process, all of the processes can be compared to each other based upon these criteria. The point of the criticality analysis is to identify which processes in the organization are the most critical, based upon the objective measures that have been identified thus far in the Business Impact Assessment.

– From an upcoming book on data security
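As a hedged sketch, the comparison step described above amounts to sorting processes by their recovery targets; the process names and numbers below are invented:

```python
# Invented processes with BIA recovery targets in hours; lower values
# mean less tolerance for downtime and data loss, hence higher criticality.
processes = [
    {"name": "payroll",     "mtd": 72,  "rto": 48, "rpo": 24},
    {"name": "order entry", "mtd": 4,   "rto": 2,  "rpo": 1},
    {"name": "reporting",   "mtd": 168, "rto": 96, "rpo": 48},
]

ranked = sorted(processes, key=lambda p: (p["mtd"], p["rto"], p["rpo"]))
print([p["name"] for p in ranked])  # ['order entry', 'payroll', 'reporting']
```

Ranking on MTD first reflects that it is the hard ceiling; RTO and RPO break ties among processes with similar tolerance for downtime.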

Security is not a part of the design of the Internet

The Internet was built without a government or master plan. It was also built without security as part of the central design. Our entire infrastructure is vulnerable because security was not designed in from the ground up.

Richard Clarke, National Conference for Security, Infrastructure, and CounterTerrorism, speaking at the Washington, D.C., Summit, April 18, 2000
