While researching a project I’m working on, I found an interesting publication, National Bureau of Standards Special Publication 500-19, Audit and Evaluation of Computer Security. This publication contains most of the principles found in modern infosec management and control frameworks today. Indeed, looking into the details, one finds discussions of static and dynamic evaluation of computer programs and numerous other familiar topics.
Here’s the punchline. This document was published in 1977. Forty-five years ago.
The next time someone complains to you about having to deal with these “new” cybersecurity standards and practices, you can remind them that these standards and practices are older than most living people in the world today.
Historically and collectively, the COVID-19 pandemic was one of the most impactful events in a generation. Entire industries were uprooted, resulting in significant shifts in how and where people live and work. The work-from-home (WFH) phenomenon was wrenching for some, welcome by others, and transformational for all. Workers and companies adjusted and continued to operate as best as they could, and WFH became the new normal for entire industries and professions.
Return to the office (RTO) has been disruptive for companies and workers alike. Management in some organizations has insisted that personnel plan on working in offices part- or full-time. We’ve seen the entire spectrum of compliance and non-compliance, and we’ve seen large organizations order a full- or part-time RTO and then backtrack when employees objected.
Workers are finding the transition from WFH to RTO nearly as disruptive in 2022 as WFH was in 2020. The routines established during WFH have become familiar and comfortable. In many organizations, workers can choose whether to return to the office, continue to work from home, or adopt a hybrid arrangement.
WFH is probably here to stay. During the pandemic lockdown, many organizations began recruiting workers from wider geographic areas who live hundreds and even thousands of miles from workplaces. Organizations have discovered that they can compete for workers across larger areas. Workers have found that they can live almost anywhere and do their jobs effectively in full-time, permanent WFH arrangements.
It’s difficult to know whether a gradual shift back to in-office work will occur, or if work-from-home will be a permanent fixture in today’s workforce. Time will tell.
I’ve hired dozens of analysts and engineers in my career, and have always enjoyed the interview process to get to know prospective team members. And I have been interviewed plenty of times myself, so I’m familiar with the pressure we’re under to be hyper-focused on interviewers’ questions and comments, what is said, what is not said, and body language. Being interviewed is a welcome challenge and an exhilarating experience – although terrifying at times.
I have realized that, when being interviewed, my hyper-focus may not be revealing the real me. So when I interview candidates, I have a favorite question that I like to ask near the end of the interview. For instance, if the candidate’s name is Charles, I would ask,
“Charles, in this meeting, we’ve seen a lot of what I like to call ‘interview Charles,’ who is highly focused on the conversation and exerting a lot of mental energy to be sure the conversation goes well. What I’d like to know is this: how is everyday Charles different from interview Charles?”
Over the years, I’ve seen a wide range of responses. This question has stumped a few candidates, suggesting that they may lack self-awareness. Or they might feel that I’m trying to pierce a sacred veil – to go beyond the persona on display to the real person beneath. But that is precisely the point: interviewees are often nervous, and nervousness shows itself in various ways: they talk too much or too little, or they guard what they may feel are personality flaws so that I see only the highly professional, analytical thinker.
In an interview, we strive to show only our best side. As a result, we verbally redact a great deal about our personality – the human side of us. But that human side is exactly who we want to find and know. After all, there is a real prospect of working with this person every day for perhaps many years. We want to be sure we know who we are hiring and whether we will like working with them.
A mythological explanation of the world states that the flat earth rests on the back of a giant turtle, which itself rests on the back of an even larger turtle. That turtle rests on a still larger turtle, and so on, forever.
Third-party risk management is like the epistemological stack of world turtles: each organization obtains goods and services from yet other organizations, and so on with no apparent end. All organizations are at least partly dependent upon others for goods or services essential for delivering goods or services to their customers.
So, where does it all end? Depending upon the industry and the criticality of individual goods or services, third-party risk management generally vets critical vendors and determines whether those vendors have effective third-party risk management programs of their own.
We’re all in this together.
— excerpt from an upcoming book on information security management
The need for an organization’s information security program to be business-aligned cannot be overstated. A lack of business alignment, if not corrected, could be considered a program’s greatest failing.
For the most part.
It is critical for an information security program to be aligned with the organization’s overall goals and to utilize existing mechanisms such as corporate governance and policy enforcement. However, where there is cultural dysfunction in the organization (for instance, a casual attitude or a checkbox approach toward security or privacy), the program may deliberately position itself out of phase with the organization in order to alter the culture and bring it to a better place. Similarly, rather than be satisfied with low organizational maturity, security program leaders may influence process maturity by example, through the enactment of higher-maturity processes and procedures.
As change agents, security leaders need to thoughtfully understand where alignment is beneficial and where influence is essential.
— excerpt from an upcoming book on information security management
Seattle, WA – April 26, 2022 – Author Peter H. Gregory has announced that his latest book, “The Art of Writing Technical Books,” has just been published. The book is available in paperback and electronic editions worldwide.
Peter H. Gregory is a well-known author of tech books, including certification study guides for the world’s leading professional certifications in information security and privacy. He has authored over fifty books in the past twenty-three years, beginning with “Solaris Security.” He wrote this first book in 1998-1999 amid the dot-com boom, when most servers on the Internet were powered by the Solaris operating system from Sun Microsystems and when Internet security was just becoming a concern.
“I have wanted to write this book for many years,” says Gregory. “I have mentored numerous aspiring authors and helped many get published. But until now, I could only converse with them and answer their questions. Everything I’ve helped others with is captured in this book.”
Gregory has long been passionate about helping aspiring writers break into the publishing profession. He has been instrumental in helping several accomplished professionals publish books for major publishing houses, including Sarah Perrot and Matthew Webster.
About Peter H. Gregory
Peter H. Gregory is a career information security, privacy, and technology professional and a former executive advisor and virtual CISO. He is the author of over fifty books on information security and emerging technology. Visit him at peterhgregory.com.
In past generations, families and businesses stocked up on essentials for that “rainy day” disruption, whatever it was. There was wisdom in that kind of thinking that was overrun in our generation.
Decades of peacetime, economic prosperity, the reliability of supply chains, and the lust for greater profits led to a “just in time” mentality and practice. Instead of stocking up on essentials, we rely on a steady influx of supplies – whatever they are – because we have gotten used to the reliability of the supply chain.
Just-in-time was driven by investors and accountants who found that organizations could eke out a bit more profit through not having unused inventory on the books. This is a trap we made for ourselves because we thought that nothing would ever go wrong.
Normalcy bias is what got us into this mess. And I do say “mess,” because it’s soon going to feel like one:
The global semiconductor shortage is bound to worsen, particularly when China attacks Taiwan, the source of most of the world’s semiconductors. This will result in short supplies and higher prices for everything that contains chips – worse than what we are experiencing presently. We’re about to learn just how dependent we have become on information technology.
The shortage of truck drivers is precipitating the shortage of “everything else” – felt by consumers and businesses. Every one of us has experienced this personally.
There is an acute shortage of fertilizer in the world, due to rising natural gas prices. This means that there will be less food in this harvest year, resulting in food prices skyrocketing.
The resilient supply chains that took decades to build were taken down in years, and will take years to rebuild. But the shortage of everything will make even this a difficult task.
I believe we are about to experience shortages and price hikes like the world has not seen since World War II – but it’s likely to be worse than that, because supply chains are not just local, but global.
We are living in wartime – and this is going to change everything. Too few people, including those in charge, fully understand what this means.
Co-author Lawrence Miller and I completed this latest revision early in 2022. This revision covers the new CISSP Common Body of Knowledge that was updated in 2021. In addition to updates reflecting changes in the CBK, numerous other changes were made, reflecting advances and changes in cybersecurity practices, risks, threats, and regulations.
The publication of this 7th edition is a celebration of TWENTY YEARS of CISSP For Dummies. Larry and I wrote the first edition of CISSP For Dummies in 2002.
CISSP For Dummies is the only CISSP study guide approved by (ISC)², the organization that manages the CISSP certification worldwide. This is a testimony to the quality and completeness that CISSP For Dummies provides to security professionals who aspire to earn this prestigious certification.
This announcement would be incomplete without a grateful shoutout to Clément Dupuis, founder of CCCure, the well-known training organization for technology professionals. Clément passed away in 2021, but not before providing valuable research material to Larry and me as we created this edition of the book.
The COVID-19 pandemic and working from home have wrung the variety out of life for many office workers. Many of us have found ourselves in a Groundhog Day scenario (referring to the movie) where our workdays are a nearly identical blur.
The variety of our days is mostly gone:
Our commute (from the bedroom to the kitchen to the home-office-or-whatever) is the same: we don’t drive different routes, we don’t make any stops, we don’t experience the weather, we don’t see any scenery, and we don’t see any interesting people or things.
Our workday is more regimented: we have rigid schedules, we don’t run into people in the hall, we don’t have those impromptu, unplanned conversations, and we don’t see each other at lunch.
In short, our work lives have become quite dull – the same routine every day, with little prospect for change.
Here’s an observation from eight years of WFH, particularly since 2020 when we were sent home to work remotely for God-knows-how-long: we no longer look each other in the eye. This may seem like a small thing, but it feels important to me: eye contact is the most intimate body language in an office conversation, vital because it keeps us honest and connected. In videoconferencing, we can look into the eyes of someone we’re talking with, but when we do so, they see us looking down (or up, if the webcam is at the bottom of our screen). Or, if we concentrate on looking into the webcam and its tiny green dot, we are not looking into the eyes of the person we are speaking with, even if they think we are. You could argue that the use of a smartphone makes this a little easier, but still: we are looking at a video representation of the person, not at the actual person. The result: we are not connected with our co-workers as we should be. The quality of our connected relationships suffers, as if we’re all holding back a little bit.
I don’t have the answers – I’m not a sociologist but a technologist. My observations are as a layperson who instinctively feels like something important is missing in our work-from-home, long-distance work relationships.
I’m going skiing today with my kids. This time of year, I relish the every-other-Friday mental health break of connecting with people and getting outside.
My first professional job was in the M.I.S. (Management Information Systems – what IT departments used to be called) department at Washoe County, Nevada. I had a variety of responsibilities, including mainframe computer operations, backup tape librarian, programmer, and creator of training materials. I developed software to track the hundreds of 9” magtape reels in the backup tape library in a language called MAPPER.
MAPPER was a bit like Excel and a bit like MS-Access. One could develop a variety of “lists” and even mimic some of the characteristics of a relational database management system. Various input forms, query forms, and reports could also be developed. I maintained some MAPPER-based applications and wrote the tape library system to track the inventory of backup tapes, some of which were on long-term retention as long as seven years.
I was one of the first MAPPER programmers at Washoe County, and was asked to teach a course on MAPPER programming to the other programmers in the department (about ten in all). It was a two-day, all-day course in the M.I.S. training room. I took the programmers through all of the basics and had them develop their own little applications to get some hands-on practice. I taught the course twice, and both sessions went pretty well.
My boss’s boss, Ralph Pratt, was the operations manager at Washoe County M.I.S. Being in my mid-twenties, I considered him an old, crusty dude who was grumpy most of the time, and I considered him mostly unapproachable. I was not much of a relationship builder in those days. Anyway, Ralph called me into his office one day. He told me that he would like me to teach a modified version of the MAPPER course to the Washoe County Commissioners, the elected officials who oversee all county operations. The prospect of teaching this course to the commissioners was exciting and terrifying to me.
Ralph asked me to shut the door to his office. He told me that we would practice what it would be like to teach the course to the commissioners, who were all very non-technical. This was the time before the IBM PC, so the commissioners had little keyboard experience.
Ralph instructed me to begin the first course segment in his office in a role-playing exercise. I began to speak, and in my first or second sentence, Ralph barked, “Stop. You used a technical term – they won’t understand it. Start over.”
I started again, got a bit further, and then, “Stop.” Same reason.
We discussed for a moment. Leave all technical terms behind, Ralph told me.
I tried again and got a bit further.
Ralph stopped me half a dozen times or more. Our session lasted thirty or forty minutes.
In retrospect, this was the most valuable thirty minutes of my entire career.
This was the beginning of what I now call being “bilingual,” in an unconventional sense. When I use the term “bilingual,” I’m referring to the ability to speak to technologists in technical terms, and to speak to businesspeople in non-technical terms. Over many years, I would hone this skill in training and public speaking events, eventually writing numerous books on technology. I needed to explain complex technical concepts in easily-understood terms. I’ve gained the reputation of doing this well.
It all goes back to Ralph Pratt. Thanks, Ralph, and may you rest in peace.
In the cybersecurity industry, there is a mistaken notion that a denial of service (DoS) attack only consists of flooding a target system to render it unavailable for legitimate uses. And while this indeed describes a DoS attack, there are other forms.
There is DoS’s big brother, distributed denial of service (DDoS), in which a large number of systems flood a target system to completely overwhelm it. But on the other end of the scale, a DoS attack can also consist of a single packet, which can be considerably more difficult to detect.
Let’s look at some examples of single packet DoS attacks, both new and old:
Zip bomb (CVE-2019-9674 and others). A specially crafted ZIP archive that expands to exhaust system resources when decompressed. The well-known 42.zip file is only about 42 kilobytes on disk but expands to 4.5 petabytes of uncompressed data.
WinNuke (CVE-1999-0153). This attack on older versions of Windows sends out-of-band data to a target computer on TCP port 139 that contains an Urgent pointer, causing it to crash.
LAND (CVE-1999-0016). This attack sends a spoofed TCP SYN packet with the target host’s IP address as both source and destination. This causes the machine to reply to itself continuously.
Regular expression denial of service (ReDoS) (CVE-2021-23490, CVE-2021-45470, and others). This attacks a target system’s regular expression parser by supplying a regular expression – or input to an existing regular expression – that takes an extremely long time to evaluate due to catastrophic backtracking.
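To make the ReDoS entry concrete, here is a minimal sketch in Python. The pattern `^(a+)+$` and the timing probe are my own illustrative example, not drawn from the CVEs listed above: the nested quantifiers give a backtracking regex engine exponentially many ways to partition a run of characters, so even a short non-matching input can consume enormous CPU time.

```python
import re
import time

# Classic catastrophic-backtracking pattern: the nested quantifiers in (a+)+
# let the engine partition a run of 'a' characters in exponentially many ways.
EVIL_PATTERN = re.compile(r"^(a+)+$")

def probe(n: int) -> float:
    """Time how long the engine takes to reject n 'a's followed by a bad character."""
    attack = "a" * n + "!"               # the trailing '!' guarantees a match failure
    start = time.perf_counter()
    assert EVIL_PATTERN.match(attack) is None
    return time.perf_counter() - start   # grows roughly as 2**n

if __name__ == "__main__":
    for n in (10, 14, 18):
        print(f"n={n:2d}: rejected in {probe(n):.5f} s")
```

Rejection time roughly doubles with each additional character. Common defenses include rewriting patterns to avoid nested quantifiers, imposing evaluation timeouts, and using non-backtracking engines such as Google’s RE2.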
Refer to these sources if you are not familiar with Denial of Service:
I’ve recently cataloged many of the articles I’ve written over the past twenty years and posted them on a new static page on my website, entitled Articles. I’ve managed to preserve most of them by saving them as PDFs. A few of them seem to be gone forever, although I haven’t given up entirely.
In about 33 A.D., Pontius Pilate, the Roman governor of Judea, famously asked, “What is truth?”
This is a question that many ask today, and in the realm of cybersecurity, there are answers. But before I wade into this topic, it’s first appropriate for me to cite a dictionary definition of the word truth:
1: the real facts about something : the things that are true
2: the quality or state of being true
3: a statement or idea that is true or accepted as true
In business, government, education, and military contexts, and when it comes to the information systems that we in cybersecurity are called to protect, the truth is the complete body of information in electronic and other forms, including business records, system and device configuration, documentation, and software.
Software, and the configuration of systems and devices, serve to record and retell the truth (e.g., business transactions, correspondence) and make that information available at a later time or in another form.
It is said that not all truth should be spoken aloud. In the context of information systems, this means that some truths (business records) require protection, as they are considered personal or sensitive. On the business side, organizations have intellectual property of various types, including patents, trademarks, trade secrets, financial records, human resource records, and other operational records. Organizations depend upon the protection and integrity of this information, as much of its existence enables organizations to continue operations in support of their mission and purpose. Much of the responsibility for this protection falls to cybersecurity professionals. However, it is also commonly accepted that all personnel have a part to play, primarily in relying on their professional judgment to ensure that information is handled properly and protected from attackers.
There is considerable information in electronic form about natural persons, and more is being created continuously. Examples include the personal financial records of individuals and other information about persons, including their health, sexual, religious, and political affiliations and preferences. The universal concept of privacy concerns the protection and proper use of such information. The protection part of privacy falls to cybersecurity professionals (and the rest of the workforce, as mentioned earlier) to ensure that truths about individuals are kept confidential. The proper use part of privacy concerns formally established statements (more truths, or in this case, assertions) describing the formal and appropriate uses of personal information.
Cybersecurity professionals’ mission is the protection of the truths as described above.
Professional associations in the cybersecurity industry have codes of ethics and conduct that guide professional behavior. The (ISC)² Code of Ethics includes these statements:
Tell the truth
Take care to be truthful
The ISACA Code of Professional Ethics includes these statements:
Serve in the interest of stakeholders in a lawful manner, while maintaining high standards of conduct and character…
Maintain the privacy and confidentiality of information obtained in the course of their activities unless disclosure is required by legal authority.
The InfraGard Code of Ethics includes these statements:
Serve in the interests of InfraGard and the general public in a diligent, loyal, and honest manner, and will not knowingly be a party to any illegal or improper activities.
Maintain confidentiality, and prevent the use for competitive advantage at the expense of other members, of information obtained in the course of my involvement with InfraGard…
These and other codes of ethics require cybersecurity and privacy professionals to tell the truth, and to protect the truth from unnecessary disclosure and improper use.
Absolute truth does exist. For the cybersecurity professional, we are expected to conduct ourselves with integrity (identifying and telling the truth) and seek to protect business and personal information (truths about organizations and natural persons). That is our mission.
My first website was on the air in 1996, created with a tool I no longer remember (it might have been HoTMetaL), and later with Dreamweaver. That website exists only on Archive.org, where most of it is preserved. While I’m not going to reveal its URL here, it’s possible that the right search terms might unearth it.
From then until now, I have created 458 blog postings on a wide variety of topics, including security, privacy, IT, Windows XP, Miata, and numerous short excerpts from my many published books.
I was active on Twitter for a few years; during that time, I created fewer blog entries. After leaving Twitter in 2019, I’ve resumed my normal pace of one to four entries per week. Similarly, I left Facebook in 2014 and never had an Instagram account. My social media presence is limited to this blog and LinkedIn.
In the late 1990s, as I was pivoting my career from IT architecture and management to cybersecurity, I became a member of some new virtual communities within my employer’s organization. We had a loosely knit virtual security team that consisted of people in numerous departments who were all interested in cybersecurity. Every other Thursday, we joined an audio conference bridge to discuss relevant issues.
In 2001, we had a meeting scheduled with some outsiders – I don’t remember if they were with an outside vendor, or another group in the company (it doesn’t matter now). In the days leading up to this meeting, a few of us expressed concern about this meeting and how it would go. I thought about this and had an idea: before the conference call begins, let’s all open Microsoft NetMeeting so that we can send text messages to each other to discuss and control the verbal discussion.
The meeting backchannel was born.
During the call, there were a few key moments where our backchannel was valuable. In one, someone from the other party said something that was not true. In the backchannel, someone typed something like, “He’s lying! Someone, please refute this now before he changes the subject!” Moments later, one of our team members spoke up and corrected the earlier speaker.
I’ve used backchannels consistently since that time, generally in situations where we are in conference with parties whose level of trust is unknown, and in situations where conflict is likely to arise. There were times when the use of a meeting backchannel was common – practically the default. Sales calls were a great example, particularly when there were many of us on a call, representing many company departments, including product development, operations, security, privacy, and legal. We could help each other rapidly and keep the flow of the conversation moving in the right direction.
Today, backchannels are the norm – in the circles I run in, anyhow. Depending upon the situation, we’ll discuss the backchannel first, but often it’s an unspoken arrangement. When I’m speaking in a meeting, I’ll keep an eye on a window where incoming private messages from another in the meeting might influence what I’m saying – this can be invaluable. In contentious situations where one of my managers is talking, sometimes I’ll drop a quick note such as “You’re doing great!” to give them the added confidence they might need in the moment.
Modern videoconferencing tools such as Zoom and Microsoft Teams include a chat feature, where participants can drop in URLs, images, and notes to supplement what they or others are saying. An advantage that Microsoft Teams has over Zoom is that participants can chat with others who are not in the meeting without switching to another tool. In Zoom, you can only chat with people in the meeting; if you need to chat with someone not in the meeting, you’ve got to use a different tool.
At times, I’ve witnessed (and participated in) what I call a “meeting within a meeting” in which a verbal/video dialogue is taking place, and underneath that, a parallel discussion ensues, sometimes in the same meeting’s chat, but more often in a separate chat channel with a subset of the meeting’s participants. Again, we’re tossing ideas, throwing hints, and encouraging those who are speaking, or about to be.
Generally, we do not acknowledge the presence of meeting backchannels. They are often covert, and knowledge of them could be perceived as individuals colluding to influence a conversation and thus, an outcome. But now and again, I’ll implicitly acknowledge a backchannel: in one recent conversation with several business leaders, one of my managers sent a few words to remind me of something. I verbally acknowledged the assistance: “And, in addition, my colleague Kate has reminded me that we also need to consider….” In this example, I’m describing the assistance that helped everyone on the call.
With modern videoconferencing and chat tools, it’s possible to have several texting channels operating at once. There is a real danger here: being poor multitaskers, we humans need to be mindful of where we are paying attention: as soon as we start reading a chat message, we tune out whoever is speaking audibly in the meeting. This happens quite a lot, actually, as I often hear in a meeting these six words:
Can you please repeat the question?
Backchannels really only work when meetings are virtual. In face-to-face meetings, it’s more difficult to hide the fact that one person is typing text messages to another who is also there in the room. It’s considerably more difficult to covertly guide an in-person conversation, since it’s usually necessary for someone to speak up. For instance, while one person is speaking, another recalls an important point that a third person needs to mention. The listener would have to interject: “I think that Jose has another example to describe here, specifically regarding that travel agent customer we met with last week.” This puts Jose on the spot, and the listener is hoping that Jose will understand and proceed correctly with no other help. This is how meetings used to flow: everyone had to pay close attention, take notes, and know when to speak up to make an important point. Backchannels are becoming a crutch, albeit a useful one.
For some, permanent work-from-home (WFH) status provides additional freedom, including in where we choose to live. While I was consulting as a full-time remote worker, for instance, my family and I took the opportunity to move out of the city and into the country, where we enjoy lower real estate costs, fresh air, freedom from traffic and pollution, and small-town life.
There are, however, limits to where you may choose to live. In many cases, your choice of residence may not only impact your own tax status but also subject your employer to additional legal and financial obligations. The rule of thumb is this: if you stay more than 30 days in a location and work from there, you and your employer may become subject to that jurisdiction’s employment laws as well as its taxes.
If you are contemplating relocating to another state or country, check with your employer and your tax advisor first, so that you will have no surprises later.
While I’ve been a privacy nerd since the early 2000s, lately I’ve found that a few of my long-time practices have been defeating my attempts to fly under the radar. I have little to hide, but I don’t care to reveal to big tech everything that I do online. For this reason, I stopped using the Google Chrome browser many years ago, and I don’t use Google Search either. But something else escaped my scrutiny until recently.
I’ve been using the Google Translate browser extension for years, as it’s handy for – you know – translating websites in other languages into my native language. I recently revisited the terms and conditions for Google Translate and found that I’m revealing far too much of my personal business to Google. Depending upon the settings you select, Google Translate can send your entire browsing history to Google, along with the content of the websites you visit. If you are privacy-conscious like me and have switched to other browsers and search engines but continue to use the Google Translate extension, your privacy efforts may have been wasted.
I was around in the 1990s, and observed and participated in numerous initiatives intended to blunt the effects of Y2K. Yes, Y2K was real, but it was also probably the most hyped phenomenon of the decade. I saw boondoggle projects funded because of an alleged Y2K connection.
Fast forward twenty years. I’ve seen some articles suggesting that the year 2022 also represents a boundary condition that caught some organizations off-guard:
For decades, risk management frameworks have cited the same four risk treatment options: accept, mitigate, transfer, and avoid. There is, however, a fifth option that some organizations select: ignore the risk.
Ignoring a risk situation is a choice, although it is not considered a wise choice. Ignoring a risk means doing nothing about it, not even making a decision about it. It amounts to little more than pretending the risk does not exist. It’s off the books. It is not even added to a risk register for consideration, but it represents a risk situation nonetheless.
In some cases, such as for minimal-risk items, this may be perfectly acceptable. The theft of a paperclip is simply too small a matter for a risk register, and it would probably be wise to leave it off unless there is a specific reason to add it. In other cases, however, listing even minimal risks is critical, because compliance requirements dictate that specific risks be considered in risk evaluations. Developing the right level of detail for a risk register requires experience, listening to an organization’s culture, and striking the right balance.
Organizations without risk management programs may implicitly ignore all risks, or many of them at least. Organizations might also be practicing informal and maybe even reckless risk management—risk management by gut feel. Without a systematic framework for identifying risks, many are likely to go undiscovered. This practice could also be considered as ignoring risks through the implicit refusal to identify them and treat them properly.
Note that ignoring risk, particularly when governance requires that you manage it, is usually a violation of the principles of due diligence and due care. An organization that has a duty to manage risk and simply doesn’t can be charged with willful negligence.
– excerpt from CRISC Certified in Risk and Information Systems Control All-In-One Exam Guide, to be published in early 2022