What's Driving Stress Levels of Security Operations Teams?

Security Operations Teams Are Overwhelmed by Vulnerabilities and Volume of Threat Alerts, Study Finds

One of the reasons the WannaCrypt ransomware spread so far and so fast is that it leveraged what was, for some Windows users, a 0-day exploit and, for others, an n-day exploit. For users of unsupported Windows versions, it was 0-day -- there had been no patch. But for many users of supported versions of Windows, it was an n-day exploit; that is, the exploit was used during the variable n days between Microsoft issuing a patch and the user implementing it.

N-day exploits are an increasing problem because, if anything, the time between issue and implementation of patches is increasing. 

A new study, prepared for Bay Dynamics by EMA and published today, helps to explain why this is happening. Bay Dynamics, a maker of cyber risk analytics software, completed a $23 million Series B financing round in July 2016.

Four hundred security professionals, ranging from management to operational staff in mid-market, enterprise, and very large enterprise organizations and representing a wide range of industry sectors, were questioned about stress in their daily lives.

What emerged, in a nutshell, is that operations staff are overwhelmed by the sheer volume of vulnerabilities, are falling behind in efforts to remediate them, and tend to under-report the problem to their seniors.

To put this into context, on average, a mid-market firm might have 10 full-time staff servicing ten new vulnerabilities per asset per month across just under 2,000 assets -- almost 20,000 vulnerabilities to service every month. For a very large enterprise, those figures translate to 100 staff servicing more than 1.3 million vulnerabilities every month. Seventy-four percent of security teams admit they are overwhelmed by the volume of maintenance work required.
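The workload arithmetic behind these figures can be sketched in a few lines. This is purely illustrative; the asset counts, vulnerability rates, and staffing levels are the survey's averages, not a model of any real organization.

```python
# Back-of-the-envelope workload math using the survey's average figures.
# All numbers are illustrative, taken from the article's mid-market example.

def monthly_vulnerabilities(assets: int, new_vulns_per_asset: int) -> int:
    """Total new vulnerabilities a team must service each month."""
    return assets * new_vulns_per_asset

def vulns_per_analyst(total_vulns: int, staff: int) -> float:
    """Average monthly load carried by each full-time staff member."""
    return total_vulns / staff

mid_market = monthly_vulnerabilities(assets=2000, new_vulns_per_asset=10)
print(mid_market)                              # 20000 vulnerabilities/month
print(vulns_per_analyst(mid_market, staff=10)) # 2000.0 per analyst
```

Even at an optimistic few minutes per vulnerability, 2,000 items per analyst per month leaves no slack, which is the arithmetic behind the 74 percent who report being overwhelmed.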

Since full and timely patching is an impossibility, security teams are required to prioritize their efforts -- but this is also a problem. Nearly 80% of the respondents admitted that their patching approval process is significantly manual. "This," notes the report, "included emails, spreadsheets, and other electronic documents for tracking and approval. With the volumes of patching that have to be reviewed, these labor intensive manual steps drive high inefficiencies and stress."

To be fair, 'too many vulnerabilities' is not considered to be the primary stress driver for security teams. It ties in second place (at 21%) with stress caused by management, one point behind the primary cause of stress, 'not enough manpower'. The report postulates that security teams "are creating a security facade around their security program maturity. This could be a natural extension of what they are conveying to their upper management."

If this is true, it would go a long way to explain the often-discussed disparity between operations staff and senior management over the maturity of an organization's security posture: senior management invariably claims a more mature posture than that reported by security operations.

The survey also makes clear that the prioritization of vulnerabilities and threats is problematic. Sixty-eight percent of respondents prioritize vulnerabilities based on their severity. This severity is relatively easy to gauge from the vendor's alert and the IT infrastructure. Threats, however, are a little different.

Fifty-eight percent of respondents prioritize vulnerabilities based on the severity of identified threats -- but 52% of threat alerts are improperly prioritized by systems and must be manually reprioritized.

"While severity of alerts should be a key indicator of how both vulnerabilities and threats should be prioritized for action by operations," suggests the report, "it is not the only factor and should not be considered the primary indicator unless the prioritization algorithm has sufficient context within its framework."
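The report's point -- that severity alone is a poor primary indicator without context -- can be illustrated with a minimal scoring sketch. All field names and weights here are hypothetical, invented for illustration; the report does not prescribe an algorithm.

```python
# Hypothetical sketch: ranking alerts by severity alone vs. severity weighted
# by context (asset value, exploit availability). Field names and weights are
# illustrative assumptions, not drawn from the report.
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: int         # 1 (low) .. 10 (critical), e.g. a CVSS-style score
    asset_value: float    # business value of the affected asset, 0..1
    exploit_public: bool  # is a working exploit known in the wild?

def contextual_score(a: Alert) -> float:
    """Severity weighted by business context, per the report's suggestion."""
    weight = 0.5 + a.asset_value
    return a.severity * weight * (1.5 if a.exploit_public else 1.0)

alerts = [
    Alert("dev-box RCE", severity=9, asset_value=0.1, exploit_public=False),
    Alert("payment-server XSS", severity=6, asset_value=0.9, exploit_public=True),
]

# Severity alone ranks the dev-box first (9 vs. 6); adding context
# promotes the lower-severity alert on the high-value, exploited asset.
ranked = sorted(alerts, key=contextual_score, reverse=True)
print([a.name for a in ranked])  # ['payment-server XSS', 'dev-box RCE']
```

The design point is simply that two extra signals invert the ordering a raw severity sort would produce, which is why the report argues severity should not be the primary indicator without such context.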

The problem here is that the majority of current alerting systems, such as SIEMs, do not usually provide sufficient context for automatic priority decision-making. Newer machine-learning anomaly detection systems have the potential, eventually, to provide better and more complete context; but for now, they are known to create a high level of false positives.

The difficulty of automatically and correctly prioritizing vulnerabilities is delaying their resolution. Analysts are spending between 24 and 30 minutes investigating each alert, and are falling behind. Sixty-four percent of alert tickets go unworked each day, and analysts fall continuously further behind in their workload -- explaining why 'dwell time' for breaches is over six months.
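The capacity math behind that growing backlog is straightforward to sketch. The shift length and daily alert volume below are assumptions chosen for illustration; only the minutes-per-alert range comes from the survey.

```python
# Illustrative capacity math: minutes per alert vs. alerts arriving per day.
# The 27-minute figure is the midpoint of the survey's 24-30 minute range;
# the 8-hour shift and 500 alerts/day are assumptions for the sketch.
MINUTES_PER_ALERT = 27
SHIFT_MINUTES = 8 * 60

def alerts_worked_per_analyst() -> int:
    """Alerts one analyst can fully investigate in a shift."""
    return SHIFT_MINUTES // MINUTES_PER_ALERT  # 17 alerts per day

def daily_backlog_growth(incoming: int, analysts: int) -> int:
    """Tickets added to the backlog each day after the team's best effort."""
    worked = analysts * alerts_worked_per_analyst()
    return max(0, incoming - worked)

# A team of 10 facing 500 alerts/day clears only 170 of them and falls
# 330 further behind every day -- roughly the 64% unworked rate reported.
print(daily_backlog_growth(incoming=500, analysts=10))  # 330
```

At that rate the backlog compounds daily, which is consistent with breach dwell times measured in months rather than days.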

There are two possible solutions. The first is more manpower -- but given the scarcity of suitable security analysts, this would be difficult. The second is automation through better security tools.

"To succeed," suggests the report, "tools must be made smarter by providing more useful context around the technical, financial, and behavioral aspects of the incidents. This will reduce the number of false positives and misclassified alerts so that only the real, most critical threats are at the top of the investigation pile." If this can be achieved, "a day in the life of a security pro will become significantly less stressful." And the next WannaCrypt perhaps a little less successful.

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.