Measuring Cybersecurity Training Effectiveness

An old adage says that 50% of advertising is wasted, but you can’t tell which 50%. Some people think the same—or worse—happens when it comes to training programs. It’s hard to know what works and why, but it’s not impossible. And the benefits are important. 

In cybersecurity, threats and the solutions to fight them are constantly evolving, so your security operations center needs staff that's up to date on both. But spending time and budget on training programs can be a hard sell, especially when cybercriminals seem to be striking everywhere these days.

It’s important to show the effectiveness of training initiatives. Managers want to make sure they're getting a return on their investment, especially if they must report up to senior management and justify future training budget requests. They want to be able to measure progress in order to show they’re spending that money wisely and getting something out of it.

Learners want to see that measurement, too, because people want to track their progress, and they want to feel good about what they're doing and how they’re spending their limited time. They want to make sure that the time spent is making a difference in their job and performance. So much of staff training tends to be a tick-the-box exercise that people are forced to go through, with no real measurement of progress. When progress isn't tracked, we tend to forget the material, or simply not take the training as seriously.

How do you measure your training? 

There are many ways to measure the effectiveness of training programs. For example, the Kirkpatrick Evaluation Model is one of the most widely used frameworks available:

● Reaction: At the most basic level, you can measure how learners react to the program through surveys or question-and-answer sessions. You can ask questions such as: Was it relevant? How do you feel about it? What was good and what was bad about it? At this level, measurement is more about understanding how the learner has perceived the training, not whether it achieved the final goal of behavior change. Instead, you’re building buy-in from the learners by gathering feedback and taking their reactions into account early in the process to develop the program. That way, it's not one of those forced training exercises that feel to them like: "Take this because I tell you to take it."

● Progression: Once the program is in full swing, you can begin to evaluate the actual learning, and introduce key performance indicators (KPIs) into the evaluation. Those measures can show how someone has improved: Did they answer a certain number of questions correctly, or have they mastered new skills by the end of the training? You can measure based on certifications earned or LinkedIn badges, or through a quiz or a test at the completion of the actual training. If learners can demonstrate proficiency by obtaining that certification or badge, you can show that learning has taken place.

● Behavior Change: This is the stage when you start to measure the knowledge transfer, that higher level of results when your people begin to perform certain functions the way you want them performed. In security training, you can observe how staffers are responding to threats and performing other tasks. Are they resolving more issues on their own, without having to go to more senior members of the team for advice? If they're training on a particular security solution, are they using this solution the right way? Are they making maximum use of it, and are they using it in a way that helps them gain more insight into what they're supposed to be doing in their jobs?

This stage really gets to that higher level of learning that’s the goal of most security training, so it’s harder to measure. It requires some self-reflection in both group and one-on-one discussions. You can even do self-assessments at this point, asking learners if they are noticing any changes in the way they're working, or in their job performance.

● Results: At this stage you start to see those business results you want to show the C-suite. In a security operations center, you could actually notice some of those metrics that you're measuring against: Is your SOC able to reduce the time to find or identify a threat? Have detection rates improved compared to before the training? Is the training having any impact on reducing dwell time? Are they discovering threats and performing remediation more quickly?

You could even measure for some higher-level impacts, such as morale and staff retention. We always look at hard metrics such as days to spot a threat or days to reduce dwell time, but we tend to forget about the people. Are you providing appropriate training that makes their job easier, makes them a little more productive, less stressed out or underwater? What’s the impact of the training program aside from improving the company’s security posture?
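To make the Results stage concrete, here is a minimal sketch of how pre- and post-training SOC metrics might be compared. All figures and metric names (mean time to detect, dwell time) are hypothetical examples for illustration, not real benchmarks or a prescribed methodology.

```python
# Illustrative sketch: comparing hypothetical SOC detection metrics
# before and after a training program. The numbers are made up.

def percent_improvement(before: float, after: float) -> float:
    """Percentage reduction from a pre-training value to a post-training value."""
    return (before - after) / before * 100

# Hypothetical mean time to detect (hours) and dwell time (days)
mttd_before, mttd_after = 6.0, 4.5      # hours to identify a threat
dwell_before, dwell_after = 24.0, 18.0  # days a threat goes undetected

print(f"MTTD reduced by {percent_improvement(mttd_before, mttd_after):.0f}%")
print(f"Dwell time reduced by {percent_improvement(dwell_before, dwell_after):.0f}%")
```

Tracking even a simple percentage like this over successive training cycles gives managers a trend line they can show senior leadership, rather than a one-off anecdote.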

Once you measure, what then?  

Now that the results have been measured and the data gathered, the question becomes how to apply that knowledge. Experts often warn that training is not a one-and-done exercise, so the best place to leverage that data is in preparing for the next training program, spotting ways to improve and guide people down the right path to more learning. Use the measurements from one program to build some guidance around the next. Maybe your SOC needs more training in one particular area, or maybe the organization needs to refine its goals and measurements: Are our training goals effective? Are we measuring the right things? Those are good questions to ask as the program matures.

As your organization reviews the training program, you could start to identify processes that are broken. As your people learn new things and show growth in some of these areas, you may be outgrowing some of the old processes. Now you can change the way you do things in your organization and take things up to the next level, going deeper in areas where training gaps remain or adjusting processes to leverage the new skills picked up in training.

Many surveys show that hands-on learning on the job is the most effective way to develop and improve skills. In cybersecurity, measurement can identify when learners have acquired the comfort level to mentor or even train others in the skills they've mastered. That social ownership, or the ability to teach their peers, should be a key measurement of success. Career development is also a good measure, because if you're training people effectively, they should be growing in their careers with you, which will likely improve retention rates.

The old way of measuring the effectiveness of your training was cut and dried: take a pre-test and a post-test to show your progress, or rely on a pre-training survey and a post-training survey. But as staffers take training more seriously and expect it as part of their professional development at work, measuring results can’t be that simple.

Jeff Orloff is Vice President of Products and Technical Services at RangeForce, a cybersecurity training company. He has over ten years of experience in cybersecurity, computer and network security and system administration. Prior to RangeForce, he was Director of Product Management and UX at COFENSE, a company specializing in email security, phishing detection and response. He also served as Technology Coordinator for the Palm Beach County Florida School District.