Cyber resilience – Hardening the last line of defence

If the backup and restoration of data worked reliably, the number of ransomware cases would be lower – making the business model of ransomware less profitable. With key technical and organisational best practices, any backup environment can be hardened. However, some elementary design flaws do not allow for absolute cyber resilience.

In an ideal world, life in IT would be simple. After a ransomware attack encrypts all production systems, the IT teams restore the applications and data from backups, while the forensic experts investigate the attack and identify the weak spot and the vulnerability that was exploited. In this ideal scenario, the blackmail attempt comes to nothing.

But here’s the reality: in the first few days after successfully penetrating a victim’s network, the bad actors typically examine the network structure and the backup system. Their aim is to find weaknesses and undermine this last line of defence. The result: cryptocurrency analytics firm Chainalysis reported that approximately $1.3 billion in ransomware payments were made globally over the past two years – a significant increase on the $152 million recorded in 2019.

With that in mind, every company can enforce technical and organisational best practices to protect itself from some of these weaknesses.

Trivial, but essential – passwords and privileges

The main administrator of the backup infrastructure often holds the broadest access privileges within an organisation because, by definition, the backup tool must be able to reach all important production systems and access their data. If saboteurs manage to compromise this key account, all production data is at risk in one fell swoop. For reasons of convenience, this superuser and his or her privileges and passwords are often managed via central user directories such as Active Directory.

It is important to separate these two worlds, so that these users exist only in the respective backup and disaster recovery environment. Power users should receive profiles that follow the principle of ‘least privilege’: as many access rights as necessary, as few as possible. Access to those accounts should, of course, be secured by multi-factor authentication.
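To illustrate the ‘least privilege’ principle in code, the following Python sketch models backup roles as explicit permission sets and gates every action behind multi-factor authentication. The role names, actions, and the is_allowed helper are hypothetical, not taken from any specific backup product.

```python
# Minimal sketch of least-privilege checks for backup accounts.
# Role and action names are illustrative placeholders.

BACKUP_ROLES = {
    "backup-operator": {"run_backup", "view_jobs"},
    "restore-operator": {"run_restore", "view_jobs"},
    "backup-admin": {"run_backup", "run_restore", "view_jobs", "edit_policy"},
}

def is_allowed(role: str, action: str, mfa_verified: bool) -> bool:
    """Grant an action only if the role includes it and MFA succeeded."""
    if not mfa_verified:  # every privileged action requires multi-factor auth
        return False
    return action in BACKUP_ROLES.get(role, set())

# Example: a restore operator may restore, but not edit retention policies.
assert is_allowed("restore-operator", "run_restore", mfa_verified=True)
assert not is_allowed("restore-operator", "edit_policy", mfa_verified=True)
```

The point of the explicit permission sets is that every right an account holds is visible and auditable, rather than inherited implicitly from a central directory.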

Especially in large environments, many hardware systems are used to store the backup data. Only authorised personnel should have physical access to these systems; otherwise a saboteur could destroy the hardware and thus the data. Modern providers use remote maintenance tools to access the systems securely and, for example, install a new boot image or access the hard disks directly. It should be possible to upload a new image on the fly during operations, including rollback with a simple mouse click, to quickly activate new functions or install bug fixes.

Secure the flow of communication and detect anomalies early

In order to back up data, the backup systems must be able to talk to each other and to the data sources, which requires opening ports on firewalls. To increase the level of security, this traffic should be carried over isolated physical or logical networks.
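One way to verify that nothing beyond the required ports is exposed is a periodic reachability check. The Python sketch below assumes a hypothetical backup host name and allow-list; the ports shown are illustrative, not any vendor’s actual requirements.

```python
import socket

# Hypothetical allow-list: the only ports the backup network should expose.
ALLOWED_PORTS = {443, 9440}            # e.g. HTTPS management, replication
HOST = "backup-node.example.internal"  # placeholder host name

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Flag any commonly probed port that is reachable but not on the allow-list.
for port in (22, 80, 139, 443, 445, 3389, 9440):
    if port_open(HOST, port) and port not in ALLOWED_PORTS:
        print(f"WARNING: unexpected open port {port} on {HOST}")
```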

The IT teams should understand exactly which protocols and services are used across the network for this purpose. For example, some providers use less secure protocol variants such as SNMPv2 for administrative tasks, which should be replaced by SNMPv3. When using the newer version, SHA should definitely be chosen as the authentication algorithm, as it is more secure than MD5, with AES used for encryption.
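As a minimal sketch of such a configuration, the following Python snippet uses the pysnmp library’s synchronous high-level API to create an SNMPv3 user with SHA authentication and AES-128 privacy and runs a test query. The host name and passphrases are placeholders.

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity,
    usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
)

# SNMPv3 user secured with SHA authentication and AES-128 privacy,
# instead of the weaker MD5/DES defaults. Credentials are placeholders.
user = UsmUserData(
    "backup-monitor", "auth-passphrase", "priv-passphrase",
    authProtocol=usmHMACSHAAuthProtocol,
    privProtocol=usmAesCfb128Protocol,
)

# Query the device's sysDescr (1.3.6.1.2.1.1.1.0) as a connectivity test.
error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(), user,
    UdpTransportTarget(("backup-node.example.internal", 161)),
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),
))
print(error_indication or error_status or var_binds[0])
```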

The data should be encrypted between the data source and the backup – during the transfer and at the destination. Those who want to achieve high cyber resilience should also insist on the principle of immutability (immutable backups) being built in. This is critical to help ensure that backup data cannot be changed, encrypted, or deleted, which makes immutable backups one of the best ways to combat ransomware: the original backup remains largely inaccessible to attackers.
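Immutability is typically implemented as WORM (write once, read many) storage. As one concrete illustration – not Cohesity’s specific mechanism – object stores such as Amazon S3 expose this via Object Lock; the boto3 sketch below writes a backup object that cannot be overwritten or deleted until its retention date passes. Bucket, key, and file names are placeholders, and the bucket must have been created with Object Lock enabled.

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

# Write a backup object under a COMPLIANCE-mode lock: until the retention
# date passes, no user (not even the account root) can delete or shorten it.
with open("daily-2024-01-15.bak", "rb") as body:
    s3.put_object(
        Bucket="backup-archive-example",   # placeholder bucket name
        Key="daily/2024-01-15.bak",        # placeholder object key
        Body=body,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc)
        + timedelta(days=30),
    )
```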

Overcome organisational conflicts and isolated legacy tools

Security operations (SecOps) and IT infrastructure teams live in two worlds that are often separated by design. While the SecOps teams want to regulate all access as strictly as possible, the IT infrastructure teams must be allowed to access all important systems for backup.

It is not surprising that many of these teams are not collaborating as effectively as possible to address growing cyber threats, as a recent survey found. Among respondents who believe collaboration between IT and security is weak, nearly half think their organisation is more exposed to cyber threats as a result. For true cyber resilience, these teams must work closely together: the high number of successful attacks proves that attack vectors are changing and that it is not just about defence, but also about backup and recovery.

And this is where it becomes clear that, even with all the best practices in place, basic backup design flaws cannot be fixed unless teams are willing to modernise the infrastructure across the board. Research has found that nearly half of companies globally rely on backup and recovery infrastructure designed in or before 2010 – long before today’s multi-cloud era and the onslaught of sophisticated cyberattacks plaguing enterprises worldwide.

If organisations want to achieve real cyber resilience and successfully recover critical data even during an attack, they will have to modernise their backup and disaster recovery infrastructure and move to approaches such as a next-gen data management platform.

To take the Zero Trust model even further, the data itself should be brought together in a centralised data management platform based on a hyperconverged, scale-out file system. In addition to strict access rules and multi-factor authentication, the platform should generate immutable snapshots that cannot be changed by any external application or unauthorised user.

Next-gen data management platforms such as Cohesity’s also use AI-supported analysis of backup snapshots to identify indications of possible anomalies. These can be passed on to higher-level security tools from vendors such as Cisco or Palo Alto Networks in order to examine the potential incident in more detail.
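The basic idea behind such anomaly detection can be illustrated with a deliberately simple statistical test: ransomware that encrypts data in bulk typically causes an abnormal spike in the volume of changed data between backups. The Python sketch below is a toy z-score check built on that assumption, not the actual model used by any vendor.

```python
import statistics

def change_rate_anomaly(daily_changed_gb: list[float],
                        threshold: float = 3.0) -> bool:
    """Flag the latest backup if its changed-data volume deviates strongly
    from the recent baseline (a crude z-score test for illustration only)."""
    *history, latest = daily_changed_gb
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(latest - mean) / stdev > threshold

# A sudden jump in changed data can indicate mass encryption by ransomware.
baseline = [52, 48, 55, 50, 49, 53, 51]       # GB changed per daily backup
print(change_rate_anomaly(baseline + [400]))  # True: suspicious spike
```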

Any delay in ransomware response and recovery leads to increased downtime and increased data loss. Integrating the two worlds of SecOps and IT can help link data management and data security processes more effectively. It is the key to staying ahead of ransomware attacks and to reinforcing an organisation’s cyber resilience.

Brian Spanswick, Chief Information Security Officer and Head of IT at Cohesity 

For more information, visit: www.cohesity.com
