What if you're doing security wrong?
Maybe information security is about more than just protecting information
The job of the information security organization at your company is to protect the company's information, right? That seems to be the most common definition, with Cisco describing InfoSec as "processes and tools designed and deployed to protect sensitive business information," and similar phrasing from Imperva and others. Even the U.S. Bureau of Labor Statistics says "Information security analysts plan and carry out security measures to protect an organization’s computer networks and systems."
But what if that's wrong? Or if not wrong, perhaps fundamentally incomplete?
Ask yourself a simple question: why does an organization pay money to protect its information? Most people's reflexive answer is something like "because information is valuable" or "because breaches are bad for business". Those answers aren't wrong, but they're too shallow.
How does the company decide how much to pay to protect any asset? It's not based purely on the value (perceived or real) of that asset, though that's a factor. It's ultimately based on the perception of the risk. Risk that the asset disappears, through theft or destruction. Risk that the asset is devalued, through damage or loss of exclusivity. And so on. Essentially, paying to protect an asset only makes sense when it reduces the overall risk to the business or purpose of that organization.
But no organization strives for zero risk, or even the lowest possible risk. So the organization figures out (often informally) how much risk it can tolerate, and that tolerance guides how much of its resources it spends bringing risk into line.
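To make that concrete, consider the classic annualized loss expectancy (ALE) model from risk analysis. This is a minimal sketch with entirely hypothetical figures, not a prescription; real estimates are far noisier:

```python
# Classic annualized loss expectancy: ALE = SLE x ARO, where SLE is the
# single loss expectancy (cost of one incident) and ARO is the annualized
# rate of occurrence. All figures below are hypothetical.

sle = 250_000                 # estimated cost of one incident, in dollars
aro = 0.2                     # estimated incidents per year, without a control
ale_before = sle * aro        # expected annual loss, no control: $50,000/yr

control_cost = 30_000         # annual cost to buy and operate the control
aro_after = 0.05              # estimated incident rate with the control
ale_after = sle * aro_after   # expected annual loss, with control: $12,500/yr

# The control only makes sense if it removes more risk than it costs.
net_benefit = (ale_before - ale_after) - control_cost   # $7,500/yr
print(f"net benefit of control: ${net_benefit:,.0f}/yr")
```

And tolerance is the deciding factor even when the math favors a control: if the organization can comfortably bear $50,000/yr of expected loss here, that $30,000 may be better spent on some other risk entirely.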
Rethinking the goal
I propose that the objective of an information security organization is to manage the risk posed by and to information assets and the systems that handle them, so that it aligns with the risk tolerance of the organization. To the extent that InfoSec protects anything, it should do so because it's the best way to achieve that objective.
This implies a few interesting things right away:
- risk visibility is essential; you can't manage what you can't measure, so to manage organizational risks, an InfoSec team needs line-of-sight to relevant risks. Not just the security-specific risks, but any organizational risks that they may interact with.
- reducing risk isn't always good; once the risk is tolerable, you should accept it. Spending resources to reduce it further is almost always wasteful, and waste is an organizational risk. Much of an InfoSec team's work will still be reduction of risk. But "it reduces risk" is not a sufficient justification for taking an action.
- prioritization of activities should consider efficiency; an identified security risk is risk the organization bears, but so is the cost of remediating or mitigating it, the cost of establishing and operating controls, and so on. The highest-priority actions proposed by InfoSec teams need to be those that most efficiently move the total risk toward the organization's risk tolerance (the first sketch after this list shows that decision logic).
- time is a critical factor; a properly operating InfoSec program should treat "within risk tolerance" as the state of normal operation. Exceeding that risk tolerance is, at minimum, abnormal, and may occasionally be an emergency. MTBF (Mean Time Between Failures) is then measured as the time between events that cause you to leave normal operation, not the time between breaches. Likewise, MTTR (Mean Time To Repair) is the time it takes to return to normal operation overall, not the time it takes to execute specific risk-treatment actions. Managing both MTBF and MTTR is essential to keeping the overall risk within tolerance (the second sketch below computes both from a timeline of excursions).
- controls can't pose more risk than they treat; when implementing controls, InfoSec organizations must consider all of the risks involved in deploying and operating them, including monetary and opportunity costs, risks to essential business functions, and so on. The "safest" option may not produce the best overall risk posture for the organization. An excellent example is deciding whether a control should "fail closed" or "fail open": there are many cases where the risk of the control failing closed exceeds the risk realized should it fail open, and those cases are invisible if the goal is merely protecting information (the third sketch below works one such comparison).
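As a sketch of the prioritization point: rank candidate actions by risk reduced per dollar, treat the cost of acting as risk the organization bears, and stop once residual risk is within tolerance. The actions, figures, and `Action` type below are all hypothetical, and real risk quantification is much messier, but the decision logic looks roughly like this:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk_reduced: float  # expected annual loss avoided, dollars (hypothetical)
    cost: float          # annual cost to implement and operate (hypothetical)

# Hypothetical backlog of proposed risk treatments.
actions = [
    Action("patch legacy VPN", risk_reduced=400_000, cost=50_000),
    Action("rotate stale credentials", risk_reduced=150_000, cost=10_000),
    Action("encrypt archival backups", risk_reduced=30_000, cost=40_000),
]

total_risk = 600_000      # current expected annual loss (hypothetical)
risk_tolerance = 200_000  # what the organization has agreed to bear (hypothetical)

# Greedy pass: most risk reduced per dollar first; stop at tolerance.
for a in sorted(actions, key=lambda a: a.risk_reduced / a.cost, reverse=True):
    if total_risk <= risk_tolerance:
        break  # within tolerance: accept the remaining risk
    total_risk = total_risk - a.risk_reduced + a.cost  # acting is also a cost
    print(f"do: {a.name} -> residual risk ${total_risk:,.0f}")
```

Run it and the third action never executes, for two independent reasons: residual risk is already inside tolerance by then, and encrypting the backups would add net risk anyway ($40k spent to avoid $30k of expected loss). "It reduces risk" wasn't enough.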
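On the time point: if "within tolerance" is the normal operating state, MTBF and MTTR fall straight out of a timeline of when the organization left that state and when it returned. A minimal sketch, with hypothetical dates (and day-level granularity just to keep it short):

```python
from datetime import datetime

# Hypothetical log of excursions: (left normal operation, returned to normal).
excursions = [
    (datetime(2024, 1, 10), datetime(2024, 1, 13)),
    (datetime(2024, 4, 2), datetime(2024, 4, 3)),
    (datetime(2024, 9, 20), datetime(2024, 9, 27)),
]

# MTTR: mean time to return to normal operation, not per-action fix time.
mttr = sum((end - start).days for start, end in excursions) / len(excursions)

# MTBF: mean time between departures from normal operation, not between
# breaches. Measured here between the starts of consecutive excursions.
starts = [start for start, _ in excursions]
gaps = [(later - earlier).days for earlier, later in zip(starts, starts[1:])]
mtbf = sum(gaps) / len(gaps)

print(f"MTTR: {mttr:.1f} days, MTBF: {mtbf:.1f} days")
```

Neither number mentions breaches: an excursion might be an unpatched critical vulnerability, a lapsed control, or an acquisition that hasn't been integrated yet.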
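And for the fail-closed versus fail-open decision, the comparison is just expected loss on each side. The gateway and every number below are hypothetical; the point is that blocking legitimate work is itself quantifiable risk:

```python
# Hypothetical: an authentication gateway that is down ~8 hours/year.
outage_hours_per_year = 8.0

# Fail closed: outages block all business traffic through the gateway.
revenue_lost_per_hour = 20_000                                    # hypothetical
fail_closed_loss = outage_hours_per_year * revenue_lost_per_hour  # $160,000/yr

# Fail open: outages let traffic through unauthenticated.
p_attack_in_window = 0.02  # hypothetical chance, per year, an attacker exploits an outage
breach_cost = 1_000_000    # hypothetical cost of the resulting breach
fail_open_loss = p_attack_in_window * breach_cost                 # $20,000/yr

# "Protect the information" says fail closed; expected loss says otherwise here.
print("fail open" if fail_open_loss < fail_closed_loss else "fail closed")
```

Flip the numbers (say, a cheap-to-pause internal tool guarding crown-jewel data) and fail closed wins. The goal decides which question you even ask.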
An overall focus on developing InfoSec programs to align with organizational risk tolerance — rather than protecting the organization as a "defender" — changes the game. In some ways the shift is subtle; in others, it's fundamental. If you're involved, at any level, in operating an InfoSec program, I encourage you to consider carefully what your job is. Do you just protect information, or do you help the organization succeed by managing its risks effectively and efficiently?