Scouting the Adversary: Network Sensor Placement Considerations

Proper Network Sensor Placement Helps Security Analysts Focus on Events That Matter

Whether you are fighting a real battle or a cyber battle, having line of sight over the battlefield can mean the difference between victory and defeat. Past readers of this column will already know the importance of gaining and maintaining terrain visibility, which is perhaps the single most decisive advantage an organization can hold. Despite that, visibility remains one of the most pressing challenges for organizations today.

So how can security organizations improve their visibility? One of the most impactful changes they could make is to re-evaluate their network sensor placement.

Cyber Key Terrain

In real world battlefield scenarios, the United States Army evaluates terrain based on:

• Observation and Fields of Fire

• Avenues of Approach

• Key and Decisive Terrain

• Obstacles

• Cover and Concealment

These factors (commonly abbreviated as OAKOC) are just as relevant to cyber terrain as to real-world terrain, but the first concept we would like to focus on here is “key terrain.” Key terrain is essentially any terrain that affords a marked advantage to whichever combatant controls it.

Scouting the Battlefield

Network traffic analysis (NTA) sensors are essentially your scouts on the cyber battlefield. Just like scouts, each sensor has its own unique vantage point. While no single scout can see the entire battlefield from their individual vantage point (Observation and Fields of Fire), together they provide reports that give you a complete picture of the terrain you are fighting on. The goal is to position your scouts in a way that maximizes visibility while minimizing any overlap in their lines of sight. This cuts down on redundant reporting while ensuring you retain full awareness of the battlefield.

So, what are NTA sensors? These sensors are the components that monitor your network for activities that may indicate advanced threats, malware, and data theft. Sensors analyze network, cloud, web, and email traffic, and deliver alerts along with session data or logs. They report network alerts and metadata to your on-premises Network Enterprise appliances or to the remote Network Cloud, depending on your environment. In either configuration, sensors are only as useful as their positioning.

Out-Positioning the Adversary

The first problem for many organizations is an awareness problem. Simply put, many organizations are not aware that their sensors are improperly or inefficiently deployed because they do not have a full understanding of how to act on the network traffic being collected.

To gain more immediately actionable reports, organizations will have to carefully consider their sensor placement strategy. In many cases today, overlapping and duplicative sensors have produced a deluge of alerts, many of them redundant or false positives. As a result, analysts are overburdened and fatigued, and organizations with strained resources struggle to respond to incidents decisively. On the other hand, some sensors may be improperly placed, with their line of sight blocked by Obstacles within the network or by adversary Cover and Concealment actions. Either scenario results in blind spots that attackers will leverage to move deeper into the network.

To counteract this, organizations need to revisit their sensor placement. To do so effectively, they should focus on identifying attackers’ Avenues of Approach throughout the network, asking themselves:

• Where do I currently have visibility (i.e., where do I have Observation and Fields of Fire)?

• Where will the adversary maneuver in order to reach target assets (i.e., where are their Avenues of Approach)?

• Where is the network/endpoint segmented off (i.e., where do Obstacles exist)?

• Where do we already have protection from attacks (i.e., where is our existing Cover)?

This can be accomplished by focusing on the assets that need to be protected. Proper sensor placement is ultimately derived from an understanding of where key assets (crown jewels) are located, the paths to and from those assets, ingress and egress points, vulnerable hosts, and the hosts in close proximity to them. The key terrain doctrine provides guidance on how to conduct activities around, protect, and defend your organization’s crown jewels. If visibility around those assets is minimal, it will be difficult to determine how a campaign against them will unfold. By understanding how these assets are positioned, you gain an understanding of attacker objectives, which then illuminates sensor placement.
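One way to reason about Avenues of Approach is to model the network as a graph of segments and permitted traffic flows, then enumerate the paths from each ingress point to a crown jewel asset. The sketch below is a minimal illustration of that idea; the segment names and topology are entirely hypothetical, and a real exercise would be driven by your own network diagrams and firewall rules. Segments that appear on many paths are natural choke points for sensor placement.

```python
from collections import Counter, deque

# Hypothetical network segments; edges represent permitted traffic flows.
network = {
    "internet": ["dmz"],
    "dmz": ["web-tier"],
    "web-tier": ["app-tier"],
    "app-tier": ["db-crown-jewel"],
    "vpn": ["app-tier"],
    "db-crown-jewel": [],
}

def avenues_of_approach(graph, sources, target):
    """Enumerate simple (cycle-free) paths from each ingress point to
    the target asset via breadth-first search. Each path is a candidate
    Avenue of Approach."""
    paths = []
    for src in sources:
        queue = deque([[src]])
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == target:
                paths.append(path)
                continue
            for nxt in graph.get(node, []):
                if nxt not in path:  # keep paths simple
                    queue.append(path + [nxt])
    return paths

paths = avenues_of_approach(network, ["internet", "vpn"], "db-crown-jewel")
# Count how often each intermediate segment appears across all paths;
# the most frequent segments are the strongest sensor candidates.
chokepoints = Counter(seg for p in paths for seg in p[1:-1])
```

In this toy topology the "app-tier" segment sits on every path to the crown jewel, so a sensor there observes all approaches at once.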

Sensor Placement Considerations

The amount of useful information that sensors provide is directly dependent upon the positioning of those sensors within the network. For example, a sensor placed inside a network firewall can see only the traffic allowed past the firewall, or internal traffic that stays entirely within it; it cannot see traffic on the far side of the firewall. To establish holistic NTA and data loss prevention (DLP) capabilities, organizations will have to consider the balance of their sensor placement.

This focus on effective sensor placement is vital to preventing alert overload, as too many sensors will generate redundant alerts that can quickly overwhelm analysts and prove counterproductive to the end goal of speeding up detection and response. This becomes an optimization problem: how can you minimize the number of sensors while maximizing visibility? At the same time, organizations must orient sensors in relation to critical network assets, such as crown jewels, to maximize visibility of the paths that matter most.
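The "fewest sensors, most visibility" trade-off is essentially a set-cover problem, and a greedy heuristic is a common way to approximate it. The sketch below assumes a made-up inventory of candidate tap points and the segments each can observe; it repeatedly picks the location that adds the most new visibility, which naturally avoids overlapping, redundant sensors and surfaces any remaining blind spots.

```python
def place_sensors(coverage, required):
    """Greedy set-cover: choose sensor locations until all required
    segments are observed, always taking the location that adds the
    most segments not yet covered."""
    uncovered = set(required)
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda loc: len(coverage[loc] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            break  # no candidate can see the rest: genuine blind spots
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

# Hypothetical candidate taps and the segments each can observe.
candidate_taps = {
    "core-switch": {"web", "app", "db"},
    "dmz-tap": {"dmz", "web"},
    "db-span": {"db"},
    "vpn-tap": {"vpn"},
}
sensors, blind_spots = place_sensors(
    candidate_taps, {"dmz", "web", "app", "db", "vpn"}
)
```

Here three taps cover all five segments, and the dedicated "db-span" tap is never selected because the core switch already observes the database segment, exactly the kind of redundant sensor this exercise is meant to eliminate.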

Traditionally, organizations have relied on perimeter defenses (including firewalls, intrusion detection systems, intrusion prevention systems, and other network traffic analysis appliances), with sensors placed at the network perimeter to passively or actively monitor traffic at the border. However, as networks have become increasingly distributed and complex, this strategy has become far too porous. Attackers who evade detection at the early stages of the attack kill chain may then move freely and undetected once past the initial defenses. With the flaws of this traditional approach readily apparent, we have seen a shift in sensor placement strategy that seeks to maximize visibility while reducing ambiguity around detecting events along the attack path on or near crown jewel assets.

However, simply evolving past perimeter-oriented defenses is not enough. Sensors are still typically configured to alert on potentially malicious traffic based on rules. While this may sound good in theory, it can quickly become another hindrance if sensors are not able to intelligently detect anomalies in the proximity of vulnerable or targeted assets. This can further contribute to alert overload, leaving analysts to manually sort through hundreds, if not thousands, of potential false positives and uncorrelated alerts.

Therefore, one of the primary benefits of re-evaluating sensor placement is the ability to minimize alerts while simultaneously gaining actionable context from sensor-collected metadata. Proper sensor placement ultimately helps optimize time, allowing analysts to focus on the events that matter, and thereby increasing the chances for teams to better defend their networks. Advantages organizations can gain through simple sensor reevaluation include:

• Reducing the time it takes to detect and resolve incidents: Empower security analysts to move quickly from alert to investigation, receive relevant information, and apply threat intelligence to network data.

• Correlating seemingly unrelated network activity and behavior: By applying automated hunting and security analytics to retrospective metadata gathered by sensors during every network session, analysts can connect events that would otherwise appear unrelated.

• Identifying and stopping advanced targeted attacks as they begin: By quickly identifying malicious behavior in network metadata, including command-and-control activity and lateral movement, analysts can stop data theft before it takes place.
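The correlation idea in the list above can be illustrated with a trivially small sketch: group sensor alerts by a shared metadata field (here, source host) so that individually unremarkable signatures accumulate into one suspicious campaign. The alert records, field names, and signature labels below are invented for illustration; real sensor metadata is far richer.

```python
from collections import defaultdict

# Hypothetical alerts with session metadata captured by sensors.
alerts = [
    {"src": "10.0.0.5", "dst": "10.0.1.9", "sig": "beaconing"},
    {"src": "10.0.0.5", "dst": "10.0.2.4", "sig": "smb-lateral"},
    {"src": "10.0.0.5", "dst": "10.0.1.9", "sig": "dns-tunnel"},
    {"src": "10.0.3.7", "dst": "10.0.1.9", "sig": "port-scan"},
]

def correlate_by_source(alerts):
    """Group alerts by source host so seemingly unrelated signatures
    (e.g., C2 beaconing plus lateral movement) surface as one campaign.
    Only hosts exhibiting multiple distinct behaviors are returned."""
    campaigns = defaultdict(list)
    for alert in alerts:
        campaigns[alert["src"]].append(alert["sig"])
    return {src: sigs for src, sigs in campaigns.items() if len(set(sigs)) > 1}

suspects = correlate_by_source(alerts)
```

A single port scan never rises above the noise, but the host tying beaconing, SMB lateral movement, and DNS tunneling together is exactly the event that matters.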

Craig Harber joined Fidelis Cybersecurity as Chief Technology Officer following a distinguished career at the National Security Agency (NSA) and, most recently, USCYBERCOM, where he held senior technical roles driving major initiatives in cybersecurity and information assurance with far-reaching strategic impact across the Department of Defense (DOD) and Intelligence Community (IC). During his career at the NSA, Harber earned a reputation as a respected authority on technical strategies to fully integrate and synchronize investments in cybersecurity capabilities. He invented the threat-based cybersecurity strategy known as NIPRNet SIPRNet Cyber Security Architecture Review (NSCSAR), which provided DOD policymakers a framework to objectively measure the expected value of cybersecurity investments. He transformed Active Cyber Defense concepts into capability pilots, commercial product improvements, industry standards, and operational solutions. He also directed the Integrated Global Information Grid (GIG) IA Architecture, raising the importance of IA to all warfighting platforms and resulting in a multi-billion-dollar increase in DOD IA investments.