Security Experts:

Let's Get Serious About Security Metrics

There are many topics in security that generate quite a bit of discussion when someone brings them up.  Unfortunately, metrics isn’t one of them.  More often than not, bringing up the topic of metrics is a great way to create awkward silence in a room.

So, why is it that metrics is nearly always a conversation stopper?  There are likely many reasons.  I’ve found that people enjoy discussing topics they are passionate about, have experience with, or are knowledgeable about.  As much as it pains me to say this, metrics almost never falls into any of those categories.

Getting serious about metrics is long overdue.  Of course, every organization is different and has its own risks, controls, goals, and priorities against which it would like to measure its performance.  And yes, there are many different techniques one could use to approach the subject of metrics.

Security Metrics

That being said, there are still some foundational principles that have helped me build meaningful and relative metrics throughout the years.  It is my hope that organizations will find these suggestions helpful as they work towards building and improving their own metrics.

It is in this spirit that I offer five helpful tips for building meaningful security metrics:

1. Measure what matters:  It might sound like an obvious piece of advice, but one important rule when building meaningful metrics is to measure only what matters.  To whom must it matter, you ask?  To the stakeholders who will consume your metrics.  In other words, know your audience and know what interests them.  For example, an aggregated list of every alert that fired, with a count of how many times it fired, is likely not particularly interesting to your metrics audience.  Instead, consider correlating specific alerts that have fired to specific risks that your audience is concerned about.  Once you’ve done that, you can measure and show how your security monitoring program has visibility into those risks and has handled them in a timely manner.  This shows your audience that your information security maturity has reduced their exposure and risk.  That will interest them.

2. So what?:  What’s the worst question a security professional can receive after showing a metric?  “So what?”  When the metrics audience asks that question, it means they don’t understand the relevance of what you’re reporting to them.  In other words, that audience has certain objectives that they are looking for the security team to address.  What is being reported to them doesn’t speak to those objectives and doesn’t allow them to assess the security team’s progress against those objectives.  Meaningful metrics come about by anticipating the “So what?” question and developing ways to report on progress against objectives in a way that the metrics audience understands.

3. Less is more:  I’ve long been a believer in the less-is-more philosophy.  Why make something needlessly complicated when it can be solved simply?  The hardest part of building a successful metrics program should be understanding what matters, what its relevance is to the target audience, and how to measure it properly.  Once those fundamental elements are in place, reporting the results should be kept as simple as possible.  There is no need to overcomplicate.  If a metric is good, then reporting it in a straightforward manner, with as few data points as necessary to accurately represent it, is the preferred route.  The metrics audience is usually concerned most with understanding, measuring, and mitigating risk.  If you’ve done your homework, you’ll address their concerns.  Making your reporting needlessly complicated will only muddy the issue.

4. Ready, aim, fire:  We’ve heard this appropriately ordered expression many times, and it sounds quite logical.  Unfortunately, in many metrics programs, the order is sometimes ready, fire, aim, or even just fire.  There is often a tendency to over-report data much too quickly in an attempt to produce metrics of value.  In the absence of a formalized approach, many organizations tend to report as many data points as they can think of.  The noisy stream of data points that results is, of course, nearly never of interest to the metrics audience.  True, there is near constant pressure to show value and progress against goals.  That being said, time needs to be taken to ensure that metrics produced and reported are meaningful and of value.  Otherwise, the data points you report will likely generate a large number of questions from your audience while providing them no answers.

5. It’s all relative:  If you’ve done a good job developing metrics that accurately measure the security program’s progress against its objectives and its efforts to mitigate risk, pat yourself on the back.  There is one very important point to remember, though: metrics need to be relative.  What does that mean?  It means that security organizations don’t exist in a vacuum and don’t grow and mature overnight.  As such, it’s important to show progress by including metrics that are adjusted for the organization they represent.  At times, this takes the form of month-over-month, quarter-over-quarter, or year-over-year data points.  At other times, data may need to be normalized per host, per employee, or per location.  At still other times, data may need to be compared against benchmarks within the industry.  The purpose of all of these relative metrics is to steer the security team away from absolute numbers that give the audience no way to compare and contrast performance: current against past, normalized for the size of the organization, and measured against peer organizations.  Above all, that is what the metrics consumer is after: the best way to measure success.
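To make the "it's all relative" idea concrete, here is a minimal sketch of the two adjustments mentioned above: normalizing a raw count per host and expressing the result as a period-over-period trend.  All figures and names here are hypothetical, invented purely for illustration.

```python
# Hedged sketch of "relative metrics": normalize raw counts per host,
# then report the quarter-over-quarter trend. All data is hypothetical.

def alerts_per_host(alert_count: int, host_count: int) -> float:
    """Normalize a raw alert count by the number of monitored hosts."""
    return alert_count / host_count

def percent_change(current: float, previous: float) -> float:
    """Period-over-period change, expressed as a percentage."""
    return (current - previous) / previous * 100

# Hypothetical quarterly data: (confirmed incidents, monitored hosts)
quarters = {
    "Q1": (120, 4000),
    "Q2": (110, 5000),
}

q1 = alerts_per_host(*quarters["Q1"])  # 0.030 incidents per host
q2 = alerts_per_host(*quarters["Q2"])  # 0.022 incidents per host

print(f"Q1: {q1:.3f} per host, Q2: {q2:.3f} per host")
print(f"QoQ change: {percent_change(q2, q1):.1f}%")
```

Note how the absolute count barely moved (120 down to 110), which an audience could easily read as stagnation, while the normalized quarter-over-quarter view shows roughly a 27% reduction in incidents per host as the environment grew.  That is the kind of comparison a metrics consumer can actually act on.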

Joshua Goldfarb (Twitter: @ananalytical) is an experienced information security leader who works with enterprises to mature and improve their enterprise security programs. Previously, Josh served as VP, CTO - Emerging Technologies at FireEye and as Chief Security Officer for nPulse Technologies until its acquisition by FireEye. Prior to joining nPulse, Josh worked as an independent consultant, applying his analytical methodology to help enterprises build and enhance their network traffic analysis, security operations, and incident response capabilities to improve their information security postures. He has consulted and advised numerous clients in both the public and private sectors at strategic and tactical levels. Earlier in his career, Josh served as the Chief of Analysis for the United States Computer Emergency Readiness Team (US-CERT) where he built from the ground up and subsequently ran the network, endpoint, and malware analysis/forensics capabilities for US-CERT.