
The Truth About Micro-Segmentation (Part 2)

“Why cover the same ground again?  It goes against my grain to repeat a tale told once, and told so clearly.” ― Homer, The Odyssey

For the past few decades, visibility has been the Odyssey of security professionals. The saying, “You can't protect what you can’t see,” has launched a thousand security startups, most of which foundered fatally on irrelevance or poor execution.

In the data center and cloud security world, the role of visibility has resurfaced with redoubled force. Most data centers are built on the "hard exterior" school of network security: a firewalled perimeter wrapped around a soft, chewy, open interior. With the increasing spread of attacks inside the data center and cloud, whether malware, insider threats, or simply application and communications vulnerabilities exploited by bad actors, there is a growing focus on segmentation as a core data center strategy.

Gartner Distinguished Analyst and VP Greg Young has suggested:

“[Security and risk management leaders] should also consider redesigning their assets and moving different assets into more secure locations, or segmenting to add floodwalls between parts of their organization. Adding these obstacles will make it more challenging for hackers to penetrate an organization.”

However, strong microsegmentation approaches cannot be implemented unless IT Operations and Security have clear visibility into how their applications are communicating, so that they can quickly determine what should be communicating.

This requires going beyond traditional network visibility to understand how application dependencies actually work. The current parlance for this capability is Application Dependency Mapping. It cannot be produced, however, if you only see what happens on the network. The applications and the hosts they sit on must be included in a live, continuous map. You need real-time understanding of both vectors to build a cybersecurity approach that protects your data center.
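To make the idea of a live, continuous map concrete, here is a minimal sketch of folding a stream of observed connections into a dependency map. The workload names, ports, and process labels are hypothetical illustrations, not output from any real product; the point is that the map keys on both the network edge and the process behind it.

```python
from collections import defaultdict

# Hypothetical sketch: each observed flow is (src workload, dst workload,
# dst port, process). The map keys on the edge, and collects the processes
# seen communicating over it -- host and application context together.
def update_map(dep_map, flow):
    src, dst, port, process = flow
    dep_map[(src, dst, port)].add(process)
    return dep_map

dep_map = defaultdict(set)
observed = [
    ("web-1", "app-1", 8080, "java"),
    ("app-1", "db-1", 5432, "postgres"),
    ("web-1", "app-1", 8080, "java"),  # repeat flows collapse into one edge
]
for flow in observed:
    update_map(dep_map, flow)

# Three flows, but only two unique dependency edges.
assert len(dep_map) == 2
assert dep_map[("web-1", "app-1", 8080)] == {"java"}
```

A continuous collector would run `update_map` on every new flow record, so the map stays current rather than being a point-in-time survey.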

The Neat & Clean World and The Real World

Traditionally, application dependency maps are built manually, one network flow and one server at a time. This approach is nearly unworkable in the largest data center and cloud environments.

Marketecture diagrams use symbolic icon views to describe a perfect, 3-tier application (caveat emptor: the graphic below is provided by my employer):

3-tier application

When you get past the marketing side of things, there is a strong movement in the industry to use D3 JavaScript diagrams to create stronger visibility into application and network environments.

Automated data collection and new visualization tools do a better job of creating the map. However, most vendors still offer simple, stylized views of application dependencies, which are not particularly useful at scale. Two newer examples:

D3 “chord diagrams” can show directed relationships among a set of entities.

chord diagram

“Sunburst” diagrams go a step further to show relationships as well as application groups.
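For those curious what feeds such diagrams: D3's chord layout consumes a square matrix in which cell [i][j] counts flows from entity i to entity j. A hedged sketch of building that matrix from observed flows (the tier names are hypothetical):

```python
# Hypothetical sketch: build the square flow matrix a D3 chord layout
# expects, where matrix[i][j] counts flows from tier i to tier j.
tiers = ["web", "app", "db"]
index = {tier: i for i, tier in enumerate(tiers)}

flows = [("web", "app"), ("web", "app"), ("app", "db")]

matrix = [[0] * len(tiers) for _ in tiers]
for src, dst in flows:
    matrix[index[src]][index[dst]] += 1

# Two web->app flows, one app->db flow, nothing else.
assert matrix == [[0, 2, 0], [0, 0, 1], [0, 0, 0]]
```

The visualization layer is just rendering; the hard part, as the rest of this article argues, is collecting and maintaining the underlying flow data at scale.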

Beyond the Neatly Presented View of Your Application Communications 

In reality, most data centers, or even single applications, do not remotely resemble the earlier diagrams. An actual Application Dependency Map looks like this:

Application Dependency Map

How many workloads/servers do you think this ADM involves? (Answer below.)*

To bring visibility and application dependency mapping to microsegmentation, a system must have these five properties:

• Work at scale, up to tens of thousands of workloads and hundreds of thousands of other objects in the map, including laptops

• Be precise enough to support a large whitelist model 

• Adapt to changes in the environment

• Work across all environments and infrastructures as applications change or migrate to the cloud

• Provide the intelligence and automation to eliminate the manual model, which large-scale deployments make nearly impossible.

From a security perspective, understanding application dependencies requires understanding not only the flows and servers but also the ports and underlying processes. Most servers have dozens of open, and hence vulnerable, ports. More significantly, these maps must be living systems, not one-time snapshots; otherwise it is impossible to keep up with the dynamic and distributed nature of today’s cloud and microservices architectures.
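One simple way to picture the difference between a snapshot and a living map: a living map continuously diffs what it sees now against what it saw before. A minimal sketch, with hypothetical workload names:

```python
# Hypothetical sketch: a "living" map surfaces the delta between
# successive snapshots of observed edges, not a one-time picture.
def diff(old_edges, new_edges):
    return {
        "appeared": new_edges - old_edges,     # new, possibly suspicious flows
        "disappeared": old_edges - new_edges,  # stale dependencies to retire
    }

snapshot_t0 = {("web-1", "app-1", 8080), ("app-1", "db-1", 5432)}
snapshot_t1 = {("web-1", "app-1", 8080), ("app-1", "backup-1", 22)}

delta = diff(snapshot_t0, snapshot_t1)
assert delta["appeared"] == {("app-1", "backup-1", 22)}
assert delta["disappeared"] == {("app-1", "db-1", 5432)}
```

An unexpected entry in `appeared`, such as a new SSH connection out of an application tier, is exactly the kind of change a point-in-time map would miss.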

The big change is that rather than the Visio world much of the networking world originated in, we are moving into a metadata world. Powerful and accurate microsegmentation requires the ability to ingest metadata from CMDBs and spreadsheets, or to derive it from real-time observation of traffic flows. Moreover, the system needs to suggest microsegmentation rules based on observed behavior. This means application dependency mapping and segmentation are two sides of a single system, and IT and network teams should be wary of gluing together two separate products.
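The idea of suggesting rules from observed behavior can be sketched in a few lines: enrich each observed flow with metadata (here, a hypothetical CMDB role label per workload) and generalize host-to-host flows into role-to-role whitelist rules. All names and ports below are illustrative assumptions.

```python
# Hypothetical sketch: derive candidate whitelist rules from observed
# flows plus metadata (a CMDB-style role label for each workload).
roles = {"web-1": "web", "web-2": "web", "app-1": "app", "db-1": "db"}

observed_flows = [
    ("web-1", "app-1", 8080),
    ("web-2", "app-1", 8080),  # same role pair as the flow above
    ("app-1", "db-1", 5432),
]

# Generalize host-to-host flows into role-to-role allow rules;
# anything not matching a rule would be denied by default.
rules = {(roles[src], roles[dst], port) for src, dst, port in observed_flows}

assert rules == {("web", "app", 8080), ("app", "db", 5432)}
```

Note how three host-level flows collapse into two role-level rules: metadata is what lets a segmentation policy stay compact and stable while the workloads underneath it churn.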

In part III of this series, I will conclude by reviewing the tradeoffs of various microsegmentation approaches.

*Fewer than 75!

Alan S. Cohen is chief commercial officer and a board member at Illumio. He leads Illumio’s go-to-market strategy and customer engagement life cycle organizations, including marketing, support, talent and IT. He is a 25-year technology veteran known for company building and new-market-creation experience. Alan’s prior two companies, Airespace (acquired by Cisco) and Nicira (acquired by VMware), were the market leaders in centralized WLANs and network virtualization, respectively. He also is an advisor to several security companies, including Netskope and Vera.