Inadequate cloud logs are proving a headache for CISOs


The mass adoption of cloud environments has helped businesses transform their operations through scalability and cost efficiency, but it has also put additional strain on security professionals. Research shows that almost two-thirds of security analysts say their attack surface has grown in the past three years, driven by pandemic-triggered digital and cloud investments. Security professionals face an ever-growing ‘spiral of more’: more security alerts to analyse, more rules to set, and more security tools to work with. This all increases workload and makes it harder to identify and respond quickly to security alerts and to manage breaches.

But aside from the growing attack surface, the quality and efficacy of cloud logs themselves are exacerbating the challenges facing over-burdened security analysts. With cloud technology evolving so quickly, the logs provided are still immature, limiting visibility into cloud environments. This makes life more challenging for security practitioners, increasing both their workload and the likelihood of a breach.

Ultimately, organisations need to find a way to improve their visibility into cloud environments to stay secure while maintaining compliance and driving operational efficiency.

Teething problems

In some cases, cloud logs are currently falling short and opening organisations up to security risks. For example, our research team at Vectra recently discovered a new Azure exploit that uses CSV and log injection to target administrators and gain admin privileges. If successful, threat actors could grant themselves access to any resource in the compromised environment, siphon off sensitive data, deploy ransomware, or sell access to the breached organisation on to ransomware gangs. This could have disastrous consequences for an organisation, from a loss of customer trust to regulatory fines – all impacting the bottom line and a firm’s market share.
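The exact exploit chain is for Microsoft to address, but the underlying CSV-injection risk is well understood: if a field an attacker controls is exported to a spreadsheet unescaped, it can execute as a formula on an administrator’s machine. The sketch below is a minimal, hypothetical illustration in Python – not Vectra’s tooling, and not tied to any Azure service – of how an export routine might neutralise such fields.

```python
import csv
import io

# Leading characters that can make a spreadsheet treat a cell as a formula
FORMULA_PREFIXES = ("=", "+", "-", "@", "\t", "\r")

def neutralise(value: str) -> str:
    """Prefix risky values with a quote so spreadsheets render them as text."""
    if value.startswith(FORMULA_PREFIXES):
        return "'" + value
    return value

def export_log_rows(rows: list[dict]) -> str:
    """Write log records to CSV, neutralising attacker-controlled fields."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=rows[0].keys())
    writer.writeheader()
    for row in rows:
        writer.writerow({k: neutralise(str(v)) for k, v in row.items()})
    return buffer.getvalue()

# Example: a username crafted to execute when an admin opens the export in a spreadsheet
print(export_log_rows([{"user": "=cmd|' /C calc'!A0", "action": "login"}]))
```

Prefixing risky values with a quote character is a commonly recommended mitigation; the right approach for a given product depends on how its exports are consumed downstream.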

Aside from security vulnerabilities, there are other log-related issues that impact cloud visibility, adding to the workload of analysts and putting organisations at further risk of a breach. These include:

Inconsistencies in user IDs and IPs – a lack of consistent data formatting in logs can make it difficult for analysts to establish a clear picture during security events. Even a minor change in how an IP address is written or a username is presented can create a correlation nightmare. This adds to analysts’ workload, as they must invest extra time connecting disparate data points, potentially delaying incident response (see the sketch after this list for one way to canonicalise these fields before correlation).

More frequent communications on outages – while providers like Microsoft are generally good at notifying users of outages, more visibility and control are needed within logs to help analysts track log flow and prevent unauthorised or accidental disabling of logging. Without this, it is extremely difficult to tell whether an outage is stopping cloud logs from coming through or whether logging has been disabled by an insider.

Delays in log event availability – log events are essential for alerting analysts to urgent changes that may be needed to keep the cloud environment secure. But delays in making these events available can put organisations at risk, as threat actors can exploit vulnerabilities and security gaps in 30 minutes or less. Without timely notice, analysts may not have enough time to analyse the situation, and could overlook critical indicators of compromise and leave their organisation exposed to threats.
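The first and last of these issues are ultimately data-handling problems for the SOC. The following is a minimal sketch, assuming JSON-style records with hypothetical field names (user, ip, event_time, ingested_time), of how a team might canonicalise user IDs and IP addresses before correlation and measure how stale each record is on arrival. It is illustrative only and not tied to any particular provider’s log schema.

```python
import ipaddress
from datetime import datetime

# Hypothetical raw events from two log sources that format the same fields differently
events = [
    {"user": "Alice@Corp.example.com", "ip": "2001:0db8:0000:0000:0000:0000:0000:0001",
     "event_time": "2024-05-01T10:00:05+00:00", "ingested_time": "2024-05-01T10:24:41+00:00"},
    {"user": "alice@corp.example.com", "ip": "2001:db8::1",
     "event_time": "2024-05-01T10:02:10+00:00", "ingested_time": "2024-05-01T10:03:02+00:00"},
]

def normalise(event: dict) -> dict:
    """Canonicalise user IDs and IP addresses so events from different sources correlate."""
    return {
        **event,
        "user": event["user"].strip().lower(),
        "ip": str(ipaddress.ip_address(event["ip"])),  # collapses equivalent IPv6 spellings
    }

def log_delay_minutes(event: dict) -> float:
    """How long the record took to become available after the activity occurred."""
    occurred = datetime.fromisoformat(event["event_time"])
    arrived = datetime.fromisoformat(event["ingested_time"])
    return (arrived - occurred).total_seconds() / 60

for raw in events:
    e = normalise(raw)
    delay = log_delay_minutes(e)
    flag = "  <-- exceeds 20-minute freshness threshold" if delay > 20 else ""
    print(f"{e['user']} from {e['ip']} delayed {delay:.1f} min{flag}")

# A similar check on the newest ingested_time per log source helps distinguish
# "no activity" from "log flow has stopped" (an outage, or logging disabled).
```

Both records end up with the same user and IP representation, and the first is flagged for arriving more than 20 minutes after the activity it describes – the kind of lag that gives attackers a head start.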

Cloud providers must act to give defenders the edge

It’s clear that, while the cloud brings a raft of efficiency benefits, there is some work to be done to improve visibility into these new environments. However, the current challenges with cloud logs are not easily fixed. If on-premises logs are proving inadequate, analysts can try switching vendors to improve their accuracy and efficacy. But in the cloud, they can find themselves caught between a log and a hard place. This is because cloud providers like AWS or Azure have total control over which logs are available, and how they are presented. This means it’s up to the cloud providers to improve the quality of cloud logs and bolster security for their customer base.

The good news is there are clear steps providers can take. Firstly, cloud providers must thoroughly document log events and fields, giving customers clear sight of when log operations are added or removed. Ensuring quick delivery of records is equally important for efficient data analysis. By following these practices, cloud service providers can enhance the overall usability and effectiveness of their logs, giving users better insight and troubleshooting capability.

Using AI to nullify the spiral of more

While providers must focus on improving the efficacy of cloud logs, organisations must do what they can to minimise their own cloud risk. Although external factors like the growing attack surface will always be hard to control, organisations can control the impact of the ‘spiral of more’ on their security teams.

This means using AI to improve signal clarity, reducing the burden on analysts when detecting and responding to attacks, whether on-premises or in the cloud. Better signal clarity ensures teams can accurately identify and prioritise real attacks, putting organisations in the strongest position to defend themselves against modern threats. After all, the clearer the threat signal, the more cyber resilient and effective the security operations centre (SOC) becomes – which is critical in an increasingly cloud-based world.

Mark Wojtasiak is vice president of product marketing – research and strategy at Vectra


