[BreachExchange] Mental Models & Security: Thinking Like a Hacker

Audrey McNeil audrey at riskbasedsecurity.com
Tue Jan 23 10:43:54 EST 2018


https://www.darkreading.com/threat-intelligence/mental-models-and-security-thinking-like-a-hacker/a/d-id/1330780

In the world of information security, people are often told to "think like
a hacker." The problem is, if you think of a hacker within a very narrow
definition (e.g., someone who only breaks Web applications), it leads to a
counterproductive way of thinking and conducting business.

A little knowledge is a dangerous thing, not least because isolated facts
don't stand on their own very well. As legendary investor Charlie Munger
once said:

"Well, the first rule is that you can't really know anything if you just
remember isolated facts and try and bang 'em back. If the facts don't hang
together on a latticework of theory, you don't have them in a usable form.

"You've got to have models in your head. And you've got to array your
experience both vicarious and direct on this latticework of models. ...

"[You've] got to have multiple models because if you just have one or two
that you're using, the nature of human psychology is such that you'll
torture reality so that it fits your models, or at least you'll think it
does. …"

This is worth bearing in mind for security pros.

When we look at the thought process of a (competent) security professional,
it encompasses many mental models. These don't relate exclusively to
hacking or wider technology, but instead cover principles that have broader
applications.

Let's look at some general mental models and their security applications.

1. Inversion
Difficult problems are best solved when they are worked backward.
Researchers are great at inverting systems and technologies to illustrate
what the system architect should have avoided. In other words, it's not
enough to think about all the things that can be done to secure a system;
you should think about all the things that would leave a system insecure.

From a defensive point of view, it means not just thinking about how to
achieve success, but also how failure would be managed.

2. Confirmation Bias
What people wish, they also believe. We see confirmation bias deeply rooted
in applications, systems, and even entire businesses. It's why two auditors
can assess the same system and arrive at vastly different conclusions
regarding its adequacy.

Confirmation bias is extremely dangerous from a defender's perspective, and
it clouds judgment. This is something hackers take advantage of all the
time. People often fall for phishing emails precisely because they believe
they are too clever to fall for one. Reality sets in only once it's too
late.

3. Circle of Competence
Most people have a thing that they're really good at. But if you test them
in something outside of this area, you may find that they're not
well-rounded. Worse, they may even be ignorant of their own ignorance.

When we examine security as a discipline, we realize it's not a monolithic
thing. It consists of countless areas of competence. A social engineer, for
example, has a specific skill set that differs from a researcher with
expertise in remotely gaining access to SCADA systems.

The number of tools in a tool belt isn't important. What's far more
important is knowing the boundaries of one's circle of competence.

Managers building security teams should evaluate the individuals on the
team and map out the department's collective circle of competence. This
also helps identify gaps that must be filled.

4. Occam's Razor
Occam's razor can be summarized like this: "Among competing hypotheses, the
one with the fewest assumptions should be selected."

It's a principle of simplicity that's relevant to security on many levels.
Often hackers will use simple, tried-and-tested methods to compromise a
company's systems: the infected USB drive in the parking lot or the
perfectly crafted spearphishing email that purports to be from the finance
department.

While more complex and advanced attack avenues exist, they are unlikely to
be used against most companies. By applying Occam's razor, attackers can
often compromise targets faster and more cheaply. The same principle can
and should be applied when securing organizations.

5. Second-Order Thinking
Second-order thinking means considering that effects themselves have
effects. It forces you to think long term about which action to take. The
question to ask is, "If I do X, what will happen after that?"

It's easy in the security world to give first-order advice. For example,
keeping up to date with security patches is good advice. But without
second-order thinking, this can lead to poor decisions with unforeseen
consequences. It's vital that security professionals consider all
implications before executing. For example, "What impact will there be on
downstream systems if we upgrade the OS on machine X?"

6. Thought Experiments
A technique popularized by Albert Einstein, the thought experiment is a way
to logically carry out a test in one's own head that would be difficult or
impossible to perform in real life. In security, this is usually used
during "tabletop" exercises or when risk modeling. It can be extremely
effective when used in conjunction with other mental models.

The purpose isn't necessarily to reach a definitive conclusion but to
encourage challenging thoughts and to push people outside of their comfort
zones.

7. Probabilistic Thinking (Bayesian Updating)
The world is dominated by probabilistic outcomes, as distinguished from
deterministic ones. Although we cannot predict the future with great
certainty, we often subconsciously make decisions based on probabilities.
For example, when crossing the road, we believe there's a low risk of being
hit by a car. The risk exists, but if you've looked for traffic, you are
confident that you can cross.

The Bayesian method says that one should consider all prior relevant
probabilities and then incrementally update them as newer information
arrives. This method is especially productive given the fundamentally
nondeterministic world we experience: we must use both prior odds and new
information to arrive at our best decisions.
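As a minimal sketch of what such incremental updating looks like in practice
(every number and evidence type below is hypothetical, invented purely for
illustration; nothing here comes from the article):

```python
# Bayesian updating sketch: estimate the probability that a host is
# compromised, starting from a prior and revising it as each new piece
# of evidence arrives. All figures are made up for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Prior: assume 1% of hosts are compromised at any given time.
p_compromised = 0.01

# Evidence 1: outbound traffic to a known-bad IP address.
# Assume it appears on 70% of compromised hosts but only 5% of clean ones.
p_compromised = bayes_update(p_compromised, 0.70, 0.05)

# Evidence 2: an unsigned binary in a startup folder.
# Assume 60% of compromised hosts show this, versus 2% of clean ones.
p_compromised = bayes_update(p_compromised, 0.60, 0.02)

print(f"Posterior probability of compromise: {p_compromised:.2f}")
```

Note how neither signal is damning on its own, but each one shifts the
odds; after both updates the posterior is far above the 1% prior, which is
the essence of using prior odds plus new information to reach a decision.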

While there may not be a simple answer to what it means to "think like a
hacker," the use of mental models to build frameworks of thought can help
avoid the pitfalls associated with approaching every problem from the same
angle.

I've listed seven mental models here, some of which you may already be
familiar with and others you could try. Please share any of your favorite
security and hacker mental models and problem-solving techniques in the
comments.

