[BreachExchange] Five reasons why organisations don’t detect a cyber breach
Destry Winant
destry at riskbasedsecurity.com
Fri Nov 17 23:14:31 EST 2017
http://www.technologyrecord.com/Article/userid/8228/five-reasons-why-organisations-dont-detect-a-cyber-breach-62021#.Wg96OFWnHrc
We frequently get contacted by organisations after they have
experienced a data breach. All too often the incident comes as a
complete shock, and they only find out because a third party contacts
them. We have compiled our top five reasons why organisations don’t
detect a cyber breach:
Organisations assume it won’t happen to them and consequently they
don’t have a plan
Far too many people assume that they will never experience a
cyberattack. Their assumption is that they aren’t interesting enough,
their data isn’t interesting enough, there are bigger and better
targets or, worst of all, that the security solutions they have in
place make them bulletproof. Many organisations pay lip service to
incident response planning: they download a vanilla incident response
plan from the internet, file it in a folder, and feel good that they
are compliant with the latest and greatest cybersecurity framework.
This has to be one of the biggest failures we see within
organisations. Assuming that cyber breaches won’t happen to you is not
a plan. Organised crime groups target businesses to gain access to
their systems, their data and their intellectual property. In today’s
digital world data has value, and the bad guys know this.
Disorganised threats such as WannaCry and Petya/NotPetya have proven
that any type of system or organisation can be a target. WannaCry was
a ransomware worm that exploited an unpatched SMB vulnerability,
spreading across insecure network segments and infecting any
vulnerable system that was powered on. It is a good example of how
attackers can now monetise attacks against any type of organisation,
whether large or small: if you wanted your data back, your only
choices were to restore from backup or pay the ransom.
Organisations have to plan to be compromised. With borderless
networking encompassing cloud, social and mobile platforms, it is
almost impossible to keep data secure at all times.
Organisations should think about the different types of threats they
may face and build incident response plans to address them. Instead of
simply putting these plans in a folder and forgetting about them,
organisations need to test them, conducting tabletop exercises and
other assurance activities to ensure that the plans are effective and
can be executed at a time of need.
Organisations don’t fully know where their data is
Many organisations think that they know where their data is. However,
one of the biggest differences between data and physical records is
that data can be in more than one place at the same time. It is common
for an organisation to focus on where it thinks the data is,
protecting that location with security controls and processes, while
remaining unaware of where the data has leaked to within the network.
Today’s systems are often virtualised, and their disk images are
routinely snapshotted or replicated across drive arrays and sites.
Additionally, it is not uncommon for systems administrators to take
backups of key system data when they perform upgrades or install new
applications. These backups and snapshots often find their way onto
file shares, users’ machines and even into cloud infrastructure
delivered by some of the internet giants.
Organisations that don’t know where all of their data is located will
struggle to defend it. They will struggle to detect attacks that
target it, and they will be unable to respond when incidents occur.
For an organisation to be able to detect and respond to a cyberattack,
it must have a full and complete picture of where its sensitive data
resides in advance.
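As a purely illustrative example, a simple discovery sweep can help build that picture by finding stray backups and dumps sitting on file shares. The sketch below assumes a hypothetical share path and uses made-up patterns for sensitive content; a real exercise would use the organisation’s own data classification and proper discovery tooling.

    #!/usr/bin/env python3
    """Minimal sketch: walk a file share and flag files that look like stray
    copies of sensitive data (ad-hoc backups, database dumps, exports).
    The share path and the regexes are illustrative assumptions, not a DLP tool."""
    import os
    import re

    # Illustrative patterns only: 16-digit card-like numbers and email addresses.
    PATTERNS = {
        "card-like number": re.compile(rb"\b\d{16}\b"),
        "email address": re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+"),
    }

    # Extensions that often indicate ad-hoc backups or snapshots.
    SUSPECT_EXTENSIONS = (".bak", ".sql", ".dump", ".csv", ".old", ".zip")

    def scan_share(root):
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if not name.lower().endswith(SUSPECT_EXTENSIONS):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as handle:
                        sample = handle.read(1024 * 1024)  # first 1 MB is enough for a hint
                except OSError:
                    continue  # unreadable file: skip; a real tool would log this
                for label, pattern in PATTERNS.items():
                    if pattern.search(sample):
                        print(f"{path}: contains {label}")
                        break

    if __name__ == "__main__":
        scan_share(r"\\fileserver\shared")  # hypothetical share path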
Organisations expect that technology alone will detect a breach
Many organisations have bought IDS, IPS and SIEM technology in the
hope that the technology alone will detect threats within their
network. Although technology can certainly assist an organisation, it
needs to be configured correctly and placed in the right locations to
have any chance of seeing threats as they target the organisation.
Many security vendors build their technology to detect the noisiest
network traffic and are unable to identify malicious traffic that is
disguised as normal internal traffic. SIEM appliances are frequently
over-tuned in an attempt to eliminate false positives, and in doing so
they also miss the essential indicators that could point to a threat
actor being present within your network.
Technology alone isn’t enough to detect and respond. It requires human
intervention and a defined process to act on alerts and to take
counter-action when faced with a possible breach of security. All too
often, alerts are ignored: they are auto-archived or filed away for a
future day, in the hope that a systems administrator will have more
time, more focus and more resources to fully assess them and determine
the next steps.
For organisations to be able to detect cyberattacks, it is not enough
to rely on technology alone. They need a combination of people,
process and technology that is well placed, well trained and aligned
to the current threat landscape.
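To make the process part concrete, here is a minimal, hedged sketch of an alert triage loop: alerts that no human has acknowledged within an agreed time are escalated rather than silently archived. The alert records, the SLA and the escalation step are illustrative assumptions, not a real SIEM integration.

    #!/usr/bin/env python3
    """Minimal sketch of the 'process' half of detection: alerts that are not
    triaged by a human within an agreed time are escalated rather than silently
    archived. Alert source and escalation target are illustrative assumptions."""
    from datetime import datetime, timedelta, timezone

    TRIAGE_SLA = timedelta(hours=4)  # assumed service level for looking at an alert

    # In a real deployment these would come from the SIEM's API or a ticket queue.
    alerts = [
        {"id": 101, "summary": "Admin logon from new host",
         "raised": datetime.now(timezone.utc) - timedelta(hours=6), "acknowledged": False},
        {"id": 102, "summary": "Outbound transfer spike",
         "raised": datetime.now(timezone.utc) - timedelta(hours=1), "acknowledged": False},
    ]

    def escalate(alert):
        # Placeholder: page the on-call analyst, open an incident ticket, etc.
        print(f"ESCALATE alert {alert['id']}: {alert['summary']} "
              f"(unacknowledged for more than {TRIAGE_SLA})")

    def review(alerts):
        now = datetime.now(timezone.utc)
        for alert in alerts:
            if not alert["acknowledged"] and now - alert["raised"] > TRIAGE_SLA:
                escalate(alert)

    if __name__ == "__main__":
        review(alerts)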
The organisation has never done any assurance on its detection capability
Many years ago, when firewalls were first deployed, organisations
trusted that they were secure and assumed that they wouldn’t be
breached. Over months and years, mature organisations recognised that
they should test the security of their firewalls to ensure that they
were filtering traffic in the way expected. Through firewall audits
and external penetration tests, organisations gained assurance that
their firewalls were delivering the value they expected.
In the world of cyber detection, this assurance activity is frequently
lacking. Organisations assume that they will be able to detect an
attack because they have bought technology and deployed it at
strategic vantage points across their networks. All too often this
detection technology, and the people and processes around it,
undergoes no form of assurance activity at all. We think this is
wrong, and we are actively evangelising about the need for change.
Organisations need to understand the threat landscape and conduct
threat modelling to understand the likely attack paths and the
tactics, techniques and procedures that threat actors have been seen
to use. Detection and response assessments should then be conducted to
simulate these threats and determine an organisation’s detection
capability. And of course, this shouldn’t just focus on internal
systems: organisations need confidence that they have the right tools
and processes in place to detect attacks on the cloud services they
consume.
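One lightweight form of assurance is to inject a benign, clearly labelled test indicator and confirm that the detection pipeline actually raises an alert for it. The sketch below assumes a hypothetical alert log path and a simple file-drop indicator; a real exercise would drive the organisation’s own SIEM or EDR tooling and cover far more realistic attack paths.

    #!/usr/bin/env python3
    """Minimal sketch of a detection assurance check: generate a benign,
    clearly labelled test indicator, then confirm that the monitoring pipeline
    raised an alert for it within a timeout. The alert log path and the way the
    indicator is injected are assumptions, not a real product integration."""
    import time
    import uuid
    from pathlib import Path

    ALERT_LOG = Path("/var/log/siem/alerts.log")   # hypothetical alert output
    TIMEOUT_SECONDS = 300

    def inject_test_indicator():
        marker = f"DETECTION-TEST-{uuid.uuid4()}"
        # Illustrative injection: drop a harmless marker file on a monitored host.
        Path(f"/tmp/{marker}.txt").write_text("benign detection assurance marker\n")
        return marker

    def alert_seen(marker):
        if not ALERT_LOG.exists():
            return False
        return marker in ALERT_LOG.read_text(errors="ignore")

    def run_check():
        marker = inject_test_indicator()
        deadline = time.time() + TIMEOUT_SECONDS
        while time.time() < deadline:
            if alert_seen(marker):
                print(f"PASS: alert raised for {marker}")
                return True
            time.sleep(10)
        print(f"FAIL: no alert for {marker} within {TIMEOUT_SECONDS} seconds")
        return False

    if __name__ == "__main__":
        run_check()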
Organisations assume that the threat will look like it's from an external source
Many organisations monitor their external infrastructure effectively
but have poor detection capability internally. The expectation is that
the threat is external and will look like malicious traffic.
The reality today is that many threat actors target people: your
employees and your colleagues. They compromise their machines through
phishing and then move laterally across the network, abusing the
inherent trust you have built into your systems, your services and
your processes. More and more attacks don’t look like external
threats. They look like internal users accessing systems and services
in an abnormal manner. If you are not monitoring your internal
networks, and you have no ability to distinguish normal from abnormal
user behaviour, it will be very hard to detect many of the more common
current threats.
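As an illustration of what distinguishing normal from abnormal can mean in practice, the following sketch builds a per-user baseline of the hosts each user normally logs on to and flags logons outside that baseline. The event format and sample data are assumptions for illustration; real baselining would parse authentication logs and also consider time of day, volume and privilege.

    #!/usr/bin/env python3
    """Minimal sketch of 'normal vs abnormal' user behaviour: build a per-user
    baseline of the hosts they normally log on to, then flag logons to hosts
    outside that baseline as candidates for lateral movement review."""
    from collections import defaultdict

    # Each event: (user, host). In practice these would be parsed from
    # authentication logs (e.g. Windows logon events or VPN/SSO records).
    historical_logons = [
        ("alice", "WKS-ALICE"), ("alice", "WKS-ALICE"), ("alice", "FILESRV01"),
        ("bob", "WKS-BOB"), ("bob", "WKS-BOB"),
    ]

    todays_logons = [
        ("alice", "WKS-ALICE"),
        ("bob", "DC01"),        # bob has never touched a domain controller before
        ("bob", "FILESRV01"),
    ]

    def build_baseline(events):
        baseline = defaultdict(set)
        for user, host in events:
            baseline[user].add(host)
        return baseline

    def flag_anomalies(events, baseline):
        for user, host in events:
            if host not in baseline.get(user, set()):
                print(f"REVIEW: {user} logged on to {host}, not in their baseline")

    if __name__ == "__main__":
        flag_anomalies(todays_logons, build_baseline(historical_logons))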