Marketing ICS Vulnerabilities and POC Malware – You’re Doing it Wrong

April 30, 2017

There have been two recent cases of industrial control system (ICS) security firms identifying vulnerabilities and also creating proof of concept (POC) malware for those vulnerabilities. This has bothered me and I want to explore the topic in this blog post; I do not pretend there is a clear right or wrong answer here, but I recognize that even by writing the post I am passing judgement on the actions, and I’m OK with that. I don’t agree with the actions, and in the interest of a more public discussion my rationale is below.

Background:

At the beginning of April 2017, CRITIFENCE, an ICS security firm, published an article on Security Affairs titled “ClearEnergy ransomware aim to destroy process automation logics in critical infrastructure, SCADA, and industrial control systems.” There’s a good overview of the story here that details how it ended up being a media stunt to highlight vulnerabilities the company found. The TL;DR version of the story is that the firm wanted to highlight the vulnerabilities they found in some Schneider Electric equipment, which they dubbed ClearEnergy, so they built their own POC malware for them that they also dubbed ClearEnergy. But they published an article on it leaving out the fact that it was POC malware. Or in other words, they led people to believe that this was in-the-wild (real and impacting organizations) malware. I don’t feel there was any malice by the company; as soon as the article was published I reached out to the CTO of CRITIFENCE, and he was very polite and responded that he’d edit the article quickly. I wanted to write a blog calling out the behavior and what I didn’t like about it as a learning moment for everyone, but the CTO was so professional and quick in his response that I decided against it. However, after seeing a second instance of this type of activity I decided a blog post was in order for a larger community discussion.

On April 27th, 2017, SecurityWeek published an article titled “New SCADA Flaws Allow Ransomware, Other Attacks” based on a presentation by ICS security firm Applied Risk at SecurityWeek’s 2017 Singapore ICS Cyber Security Conference. The talk, and the article, highlighted ICS ransomware that the firm dubbed “Scythe” which targets “SCADA devices.” Applied Risk noted that the attack can take advantage of a firmware validation bypass vulnerability and lock out folks’ ability to update to new firmware. The firm did acknowledge, in both their presentation and the article, that this too was POC malware.

 

 

Figure: Image from Applied Risk’s POC Malware

Why None of this is Good (In My Opinion):

In my opinion both of these firms have been irresponsible in a couple of ways.

First, CRITIFENCE obviously messed up by not telling anyone that ClearEnergy was POC malware. In an effort to promote their discovery of vulnerabilities they were quick to write an article and publish it, and that absolutely contributed to hype and fear. Hype around these types of issues ultimately leads to the ICS community not listening to or trusting the security community (honestly, with good reason). However, what CRITIFENCE did do that I liked (besides being responsive, which is a major plus) was work through a vulnerability disclosure process that led to proper discussion by the vendor as well as an advisory through the U.S.’ ICS-CERT. In contrast, Applied Risk did not do that so far as I can tell. I do not know everything Applied Risk is doing about the vulnerabilities, but they said they contacted the vendors, and two of the vendors (according to the SecurityWeek article) acknowledged that the vulnerability is important but difficult to fix. The difference with the Applied Risk vulnerabilities is that the community is left unaware of what devices are impacted, the vendors haven’t been able to address the issues yet, and there are no advisories to the larger community through any appropriate channel. Ultimately, this leaves the ICS community in a very bad spot.

Second, CRITIFENCE and Applied Risk are both making a marketing spectacle out of the vulnerabilities and POC malware. Now, this point is my opinion and not necessarily a larger community best-practice, but I absolutely despise seeing folks name their vulnerabilities or their POC malware. It comes off as a pure marketing stunt. Vulnerabilities in ICS are not uncommon and there’s good research to be done. Sometimes, the things the infosec community sees as vulnerabilities may have been designed that way on purpose to allow things like firmware updates and password resets for operators who needed access to sensitive equipment in time-sensitive scenarios. I’m not saying we can’t do better, but it’s not like the engineering community is stupid (far from it), and highlighting vulnerabilities as marketing stunts can often have unintended consequences, including vendors not wanting to work with researchers or disclose vulnerabilities. There’s no incentive for ICS vendors to work with firms who are going to use issues in the vendors’ products as marketing for the firms’ security products.

Third, vulnerability disclosure can absolutely have the impact of teaching adversaries how to attack devices in ways they did not know previously. I do not advocate for security through obscurity, but there is value in following a strict vulnerability disclosure policy even in normal IT environments, because this has been an issue for decades. In ICS environments, it can take upwards of 2-3 years for folks to be able to get a patch and apply it after a vulnerability comes out. That is not due to ignorance of the issue or lack of concern for the problem but due to operations constraints in various industries. So in essence, the adversaries get informed about how to do something they previously didn’t know about while system owners can’t adequately address the issues. This makes vulnerability disclosure in the ICS community a very sensitive topic to handle with care. Yelling out to the world “this is the vulnerability and oh by the way here’s exactly how you should leverage it and we even created some fake malware to highlight the value to you as an attacker and what you can gain” is just levels of ridiculousness. It’s why you’ll never see my firm Dragos, or myself, or anyone on my team finding and disclosing new vulnerabilities in ICS devices to the public. If we ever find anything it will only be worked through the appropriate channels and quietly distributed to the right people, not through media sites and conference presentations.

I’m not a huge fan of disclosing vulnerabilities at conferences and in media in general, but I do want to acknowledge that it can be done correctly, and I have seen a few firms (Talos, DigitalBond, and IOActive come to mind) do it very well. As an example, Eireann Leverett and Colin Cassidy found vulnerabilities in industrial Ethernet switches and worked closely with the vendors to address them. After working through a very intensive process they wanted to do a series of conference presentations to highlight the issues. They invited me to take part to show what could be done from a defense perspective. So I stayed out of the “here’s the vulnerabilities” portion and instead focused on “these exist, so what can defenders do besides just patch.” I took part in that research because the work was so important and Eireann and Colin were so professional in how they went about it. It was a thrill to use the entire process as a learning opportunity for the larger community. Highlighting vulnerabilities and creating POC malware for something that doesn’t even have an advisory, or where the vendor hasn’t made patches yet, just isn’t appropriate.

Closing Thoughts:

There is a lot of research to be done into ICS and how to address the vulnerabilities these devices have. Vendors must get better at following best practices for developing new equipment, software, and networking protocols. And there are good case studies of what to do and how to carry yourself in the ICS security research community (Adam Crain and Chris Sistrunk’s research into DNP3 and all the things it led to is a fantastic example of doing things correctly to address serious issues). But the focus on turning vulnerabilities into marketing material, discussing vulnerabilities in media and at conferences before vendors have addressed them and before the community can get an advisory through proper channels, and creating/marketing POC malware to draw attention to your vulnerabilities is simply, in my opinion, irresponsible.

Try these practices instead:

  • Disclose vulnerabilities to the impacted vendors and work with them to address them
    • If they decide that they will not address the issue or do not see the problem, talk to impacted asset owners/operators to ensure what you see as a vulnerability is an issue that will introduce risk to the community, and use appropriate channels such as the ICS-CERT to push the vendor or develop defensive/detection signatures and bring it to the community’s attention; sometimes you’re left without a lot of options, but make sure you’ve exhausted the good options first
  • After the advisory has been available (for some amount of time you feel comfortable with), if you or your firm would like to highlight the vulnerabilities at a conference or in the media, that’s your choice
    • I would encourage focusing the discussion on what people can do besides just patch, such as how to detect attacks that might leverage the vulnerabilities
  • Avoid naming your vulnerabilities; there’s already a whole official process (CVE) for cataloging vulnerabilities
  • (In my opinion) do not make POC malware showing adversaries what they can do and why they should do it (the argument “the adversaries already know” is wrong in most cases)
  • If you decide to make POC malware anyway, at least avoid naming and marketing it (it comes off as an extremely dirty marketing approach)
  • Avoid hyping up the impact (talking about power grids coming down and terrorist attacks in the same article is just a ridiculous attempt to elicit fear and attention)

In my experience, ICS vendors are difficult to work with at times because they have other priorities, but they care and want to do the right thing. If you are persistent you can move the community forward. But the vendors of the equipment are not the enemy, and they will absolutely blacklist researchers, firms, and entire groups of folks for doing things that are adverse to their business instead of working with them. Research is important, and if you want to go down the route of researching and disclosing vulnerabilities there’s value there and there are proper ways to do it. If you’re interested in vulnerability disclosure best practices in the larger community, check out Katie Moussouris, who is a leading authority on bug bounty and vulnerability disclosure programs. But please, stop naming your vulnerabilities, building marketing campaigns around them, and creating fake malware because you don’t think you’re getting enough attention already.

Analytical Leaps and Wild Speculation in Recent Reports of Industrial Cyber Attacks

December 31, 2016

“Judgement is what analysts use to fill gaps in their knowledge. It entails going beyond the available information and is the principal means of coping with uncertainty. It always involves an analytical leap, from the known into the uncertain.”

– Chapter 4, Psychology of Intelligence Analysis, Richards J. Heuer.

 

Analytical leaps, as Richards J. Heuer noted in his must-read book Psychology of Intelligence Analysis, are part of the process for analysts. Sometimes, though, these analytical leaps can be dangerous, especially when they are biased, misinformed, presented in a misleading way, or otherwise not made using sound analytical processes. Analytical leaps should be backed by evidence, or at a minimum should include the evidence leading up to the leap. Unfortunately, when multiple analytical leaps are made in series it can lead to entirely wrong conclusions and wild speculation. There have been three interesting stories relating to industrial attacks this December, as we try to close out 2016, that are worth exploring on this topic. It is my hope that looking at these three cases will help everyone be a bit more critical of information before alarmism sets in.

The three cases that will be explored are:

  • IBM Managed Security Services’ claim of “Attacks Targeting Industrial Control Systems (ICS) Up 110%”
  • CyberX’s claim that “New Killdisk Malware Brings Ransomware Into Industrial Domain”
  • The Washington Post’s claim that “Russian Operation Hacked a Vermont Utility, Showing Risk to U.S. Electrical Grid Security, officials say”

 

“Attacks Targeting Industrial Control Systems (ICS) Up 110%”

I’m always skeptical of metrics that come with no immediate quantification. As an example, IBM Managed Security Services posted an article stating that “attacks targeting industrial control systems increased over 110 percent in 2016 over last year’s numbers as of Nov. 30.” But there is no data in the article to quantify what that means. Is a 110% increase an increase from 10 attacks to 21 attacks? Or is it 100 attacks increasing to 210 attacks?

The only way to understand what that percentage means is to leave this report, go download IBM’s report from last year, and read through it (never make your reader jump through extra hoops to get information that is your headline). In their 2015 report IBM states that there were around 1,300 attacks in 2015 (Figure 1). This would mean that in 2016 IBM is reporting they saw around 2,700 ICS attacks.

Figure 1: Figure from IBM’s 2015 Report on ICS Attacks
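To make the ambiguity concrete, here is a trivial back-of-the-envelope calculation (my own illustration, using only the numbers from IBM’s reports) showing why the baseline matters as much as the percentage:

```python
# Rough back-of-the-envelope math using IBM's published figures.
# A "110% increase" means the new count is the old count plus 110% of it.
baseline_2015 = 1300          # approximate attack count from IBM's 2015 report
increase = 1.10               # "over 110 percent" increase reported for 2016

implied_2016 = baseline_2015 * (1 + increase)
print(f"Implied 2016 count: ~{implied_2016:.0f}")   # ~2730, i.e. "around 2,700"

# The same percentage on a tiny baseline tells a very different story:
print(10 * (1 + increase))    # 21.0  -> 10 attacks becoming 21
print(100 * (1 + increase))   # 210.0 -> 100 attacks becoming 210
```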

 

However, there are a few questions that linger. First, this is a considerable jump from what they were tracking previously and from their 2014 metrics. IBM states that the “spike in ICS traffic was related to SCADA brute-force attacks, which use automation to guess default or weak passwords.” This is an analytical leap they make based on what they’ve observed. But it would be nice to know if anything else changed as well. Did they bring up more sensors, add more customers, increase staffing, etc.? The stated reason for the increase would not alone account for it.

Second, how is IBM defining an attack? Attacks in industrial contexts have a very specific meaning; an attempt to brute-force a password simply wouldn’t qualify. They also note that a pentesting tool was released on GitHub in January 2016 that could be used against the ICS protocol Modbus. IBM states that the increase in their metrics was likely related to this tool’s release. That is speculation, though, as they do not give any evidence to support the claim. However, it leads to my next point.

Third, is this customer data or is this honeypot data? If it’s customer data, is it from the ICS or simply the business networks of industrial companies? And if it’s honeypot data it would be good to separate that data out, as honeypot data has often been abused to misreport “SCADA attack” metrics. From the discussion of brute-force logins and a pentesting tool for a serial protocol released on GitHub, my speculation is that this is referring mostly to honeypot data. Honeypots can be useful but must be used in specific ways when discussing industrial environments and should not be lumped into “attack” data from customer networks.

The article also makes another analytical leap when it states “The U.S. was also the largest target of ICS-based attacks in 2016, primarily because, once again, it has a larger ICS presence than any other country at this time.” The leap does not seem informed by anything other than the hypothesis that the US has more ICS. Also, again, there is no quantification. Where is this claim coming from, how much larger is the US ICS presence than other countries’, and is the quantity of attacks proportional to the US ICS footprint when compared to other nations’ quantity of industrial systems? I would again speculate that what they are observing has far more to do with where they are collecting data (how many sensors do they have in the US compared to China, as an example).

In closing out the article IBM cites three “notable recent ICS attacks.” The three case studies chosen were the SFG malware that targeted an energy company, the New York dam, and the Ukrainian power outage. While the Ukrainian power outage is good to highlight (although they don’t actually highlight the ICS portion of the attack), the other two cases are poor choices. The claim that the SFG malware targeted an energy company was already publicly debunked, and that would have been easy to find prior to writing this article. The New York dam was also something that was largely hyped by media and was publicly downplayed as well. More worrisome is that the way IBM framed the New York dam “attack” is incorrect. They state: “attackers compromised the dam’s command and control system in 2013 using a cellular modem.” Except it wasn’t the dam’s command and control system; it was a single read-only human machine interface (HMI) watching the water level of the dam. The dam had a manual control system (i.e., you had to crank it to open it).

Or more simply put: the IBM team is likely doing great work and likely has people who understand ICS…you just wouldn’t get that impression from reading this article. The information is largely inaccurate, there is no quantification to their numbers, and their analytical leaps are unsupported with some obvious lingering questions as to the source of the data.

 

“New Killdisk Malware Brings Ransomware Into Industrial Domain”

CyberX released a blog noting that they have “uncovered new evidence that the KillDisk disk-wiping malware previously used in the cyberattacks against the Ukrainian power grid has now evolved into ransomware.” This is a cool find by the CyberX team, but they don’t release digital hashes or any technical details that could be used to help validate the find. However, the find isn’t actually new (I’m a bit confused as to why CyberX states they uncovered this new evidence when they cite in their blog an ESET article with the same discovery from weeks earlier; I imagine they found an additional strain but they don’t clarify that). ESET had disclosed the new variant of KillDisk being used by a group they are calling the TeleBots gang and noted they found it being used against financial networks in Ukraine. So, where’s the industrial link? Well, there is none.

CyberX’s blog never details how they make the analytical leap from “KillDisk now has ransomware functionality” to “and it’s targeting industrial sites.” Instead, it appears the entire basis for their hypothesis is that Sandworm previously used KillDisk in the Ukraine ICS attack in 2015. While this is true, the Sandworm team has never targeted just one industry. iSight and others have long reported that the Sandworm team has targeted telecoms, financial networks, NATO sites, military personnel, and other non-industrial targets. But it’s also not known for sure that this is still the Sandworm team. The CyberX blog does not state how they are linking Sandworm’s attacks on Ukraine to the TeleBots usage of ransomware. Instead they just cite ESET’s assessment that the teams are linked. But ESET even stated they aren’t sure and that it’s just an assessment based on observed similarities.

Or more simply put: CyberX put out a blog saying they uncovered new evidence that KillDisk had evolved into ransomware, although they cite ESET’s discovery of this evidence from weeks prior and present no other evidence. They then make the claim that the TeleBots gang, the one using the ransomware, evolved from Sandworm, but they offer no evidence and instead again just cite ESET’s assessment. They offer absolutely no evidence that this ransomware KillDisk variant has targeted any industrial sites. The logic seems to be “Sandworm did Ukraine, KillDisk was in Ukraine, Sandworm is the TeleBots gang, TeleBots modified KillDisk to be ransomware, therefore they are going to target industrial sites.” When doing analysis, always be aware of Occam’s razor and do not make too many assumptions to try to force a hypothesis to be true. There could eventually be evidence of ransomware targeting industrial sites; it does make sense that attackers would go there. But no evidence is offered in this article, and both the title and thesis of the blog are completely unfounded as presented.

 

“Russian Operation Hacked a Vermont Utility, Showing Risk to U.S. Electrical Grid Security, officials say”

This story is more interesting than the others, but it is too early to really know much. The only thing known at this point is that the media is already overreacting. The Washington Post put out an article on a Vermont utility getting hacked by a Russian operation, with calls from the Vermont Governor condemning Vladimir Putin for attempting to hack the grid. Eric Geller pointed out that the first headline the Post ran with was “Russian hackers penetrated U.S. electricity grid through utility in Vermont, officials say” but they changed it to “Russian operation hacked a Vermont utility, showing risk to U.S. electrical grid, officials say.” We don’t know exactly why it was changed, but it may have been due to the Post overreacting when they heard the Vermont utility found malware on a laptop and simply assuming it was related to the electric grid. Except, as the Vermont (Burlington) utility pointed out, the laptop was not connected to the organization’s grid systems.

Electric and other industrial facilities have plenty of business and corporate network systems that are often not connected to the ICS network at all. It’s not good for them to get infected, and they aren’t always disconnected, but it’s not worth alarming anyone over without additional evidence.  However, the bigger analytical leap being made is that this is related to Russian operations.

The utility notes that they took the DHS/FBI GRIZZLY STEPPE report indicators and found a piece of malware on the laptop. We do not know yet if this is a false positive, but even if it is not there is no evidence yet to say that this has anything to do with Russia. As I pointed out in a previous blog, the GRIZZLY STEPPE report is riddled with errors and the indicators put out were very non-descriptive data points. The one YARA rule they put out, which the utility may have used, was related to a piece of malware that is publicly downloadable, meaning anyone could use it. Unfortunately, after the story ran with its hyped-up headlines, Senator Patrick Leahy released a statement condemning the “attempt to penetrate the electric grid” as a state-sponsored hack by Russia. As Dmitri Alperovitch, CTO of CrowdStrike, the firm that responded to the Russian hack of the DNC, pointed out: “No one should be making attribution conclusions purely from the indicators in the USCERT report. It was all a jumbled mess.”

Or more simply put: a Vermont utility acted appropriately and ran indicators of compromise from the GRIZZLY STEPPE report as the DHS/FBI instructed the community to do. This led to them finding a match to an indicator on a laptop separated from the grid systems, but it has not yet been confirmed that malware was present. The Vermont Governor Peter Shumlin then publicly chastised Vladimir Putin and Russia for trying to hack the electric grid. U.S. officials then inappropriately gave additional information and commentary to the Washington Post about an ongoing investigation, which led the Post to run with the headline that this was a Russian operation. After all, the indicators supposedly were related to Russia because the DHS and FBI said so, and supposedly that’s good enough. Unfortunately, this also led a U.S. Senator to come out and condemn Russia for state-sponsored hacking of the utility.

Closing Thoughts

There are absolutely threats to industrial environments including ICS/SCADA networks. It does make sense that ICS breaches and attacks would be on the rise, especially as these systems become more interconnected. It also makes perfect sense that ransomware will be used in industrial environments just like any other environment that has computer systems. And yes, the attribution of the DNC compromise to Russia is very solid, based on private sector data with government validation. But to make claims about attacks and attempt to quantify them, you actually have to present real data, where that data came from, and how it was collected. To make claims of new ransomware targeting industrial networks, you have to actually provide evidence, not simply make a series of analytical leaps. And to start making claims of attribution to a state such as Russia just because some poorly constructed indicators alerted on a single laptop is dangerous.

Or more simply put: be careful of analytical leaps, especially when they are made without presenting any of the evidence leading into them. Hypotheses and analysis require evidence; otherwise it is simply speculation. We have enough speculation already in the industrial industry, and more will only lead to increasingly dangerous or embarrassing scenarios, such as a US governor and senator condemning Russia for hacking the electric grid, and scaring the public in the process, when we simply do not have many facts about the situation yet.

Critiques of the DHS/FBI’s GRIZZLY STEPPE Report

December 30, 2016

On December 29th, 2016 the White House released a statement from the President of the United States (POTUS) that formally accused Russia of interfering with the US elections, amongst other activities. This statement laid out the beginning of the US’ response, including sanctions against Russian military and intelligence community members. The purpose of this blog post is to specifically look at the DHS and FBI’s Joint Analysis Report (JAR) on Russian civilian and military Intelligence Services (RIS) titled “GRIZZLY STEPPE – Russian Malicious Cyber Activity”. For those interested in a discussion of the larger purpose of the POTUS statement and surrounding activity, take a look at Thomas Rid’s and Matt Tait’s Twitter feeds for good commentary on the subject.

Background to the Report

For years there has been solid public evidence from private sector intelligence companies such as CrowdStrike, FireEye, and Kaspersky calling attention to Russian-based cyber activity. These groups have been tracked for a considerable amount of time (years) across multiple victim organizations. The latest high-profile case relevant to the White House’s statement was CrowdStrike’s analysis of COZYBEAR and FANCYBEAR breaking into the DNC and leaking emails and information. These groups are also known by FireEye as the APT28 and APT29 campaign groups.

The White House’s response is ultimately a strong and accurate statement. The attribution towards the Russian government was confirmed by the US government using their sources and methods on top of good private sector analysis. I am going to critique aspects of the DHS/FBI report below but I want to make a very clear statement: POTUS’ statement, the multiple government agency response, and the validation of private sector intelligence by the government is wholly a great response. This helps establish a clear norm in the international community although that topic is best reserved for a future discussion.

Expectations of the Report

Most relevant to this blog, the lead-in to the DHS/FBI report was given by the White House in their fact sheet on the Russian cyber activity (Figure 1).

 

Figure 1: White House Fact Sheet in Response to Russian Cyber Activity

The fact sheet lays out very clearly the purpose of the DHS/FBI report. It notes a few key points:

  • The report is intended to help network defenders; it is not the technical evidence of attribution
  • The report contains a combination of private sector data and declassified government data
  • The report will help defenders identify and block Russian malware – this is specifically declassified government data not private sector data
  • The report goes beyond indicators to include new tradecraft and techniques used by the Russian intelligence services

If you are like me, you became very excited when you read the above. This was a clear statement from the White House that they were going to help network defenders, give out a combination of previously classified data as well as validate private sector data, release information about Russian malware that was previously classified, and detail new tactics and techniques used by Russia. Unfortunately, while the intent was laid out clearly by the White House, that intent was not captured in the DHS/FBI report.

Because what I’m going to write below is blunt feedback, I want to note ahead of time that I’m doing this for the benefit of the community as well as the government operators/report writers who read to learn and become better. I understand that it is always hard to publish things from the government. In my time working in the U.S. Intelligence Community on such cases, it was extremely rare that anything was released publicly, and when it was it was almost always disappointing, as the best material and information had been stripped out. For that reason, I want to especially note, and say thank you to, the government operators who did fantastic work and tried their best to push out the best information. For those involved in the sanitization of that information and the report writing – well, read below.

DHS/FBI’s GRIZZLY STEPPE Report – Opportunities for Improvement

Let’s explore each main point that I created from the White House fact sheet to critique the DHS/FBI report and show opportunities for improvement in the future.

The report is intended to help network defenders; it is not the technical evidence of attribution

There is no mention of attribution being the focus in any of the White House’s statements. Across multiple statements from government officials and agencies it is clear that the technical data and attribution will be in a report prepared for Congress and later declassified (likely prepared by the NSA). Yet the GRIZZLY STEPPE report reads like a poorly done vendor intelligence report, stringing together various aspects of attribution without evidence. The beginning of the report (Figure 2) specifically notes that the DHS/FBI has avoided attribution before in their JARs but that, based on their technical indicators, they can confirm the private sector attribution to RIS.

 

Figure 2: Beginning of DHS/FBI GRIZZLY STEPPE JAR

The next section is the DHS/FBI description which is entirely focused on APT28 and APT29’s compromise of “a political party” (the DNC). Here again they confirm attribution (Figure 3).

 

Figure 3: Description Section of DHS/FBI GRIZZLY STEPPE JAR

But why is this so bad? Because it does not follow the intent laid out by the White House and confuses readers into thinking that this report is about attribution rather than its intended purpose of helping network defenders. The public is looking for evidence of the attribution, the White House and the DHS/FBI clearly laid out that this report is meant for network defense, and yet the entire discussion in the document is about how the DHS/FBI confirms that APT28 and APT29 are RIS groups that compromised a political party. The technical indicators they released later in the report (which we will discuss more below) are in no way related to that attribution, though.

Or said more simply: the written portion of the report has little to nothing to do with the intended purpose or the technical data released.

Even worse, page 4 of the document notes other groups identified as RIS (Figure 4). This would be exciting normally. Government validation of private sector intelligence helps raise the confidence level of the public information. Unfortunately, the list in the report detracts from the confidence because of the interweaving of unrelated data.

 

Figure 4: Reported RIS Names from DHS/FBI GRIZZLY STEPPE Report

As an example, the list contains campaign/group names such as APT28, APT29, COZYBEAR, Sandworm, Sofacy, and others. This is exactly what you’d want to see, although the government’s justification for this assessment is completely lacking (for a better exploration of the topic of naming see Sergio Caltagirone’s blog post here). But as the list progresses it becomes worrisome, because it also contains malware names (HAVEX and BlackEnergy v3 as examples), which are different from campaign names. Campaign names describe a collection of intrusions into one or more victims by the same adversary. Those campaigns can utilize various pieces of malware, and sometimes malware is consistent across unrelated campaigns and unrelated actors. It gets worse, though, when the list includes things such as “Powershell Backdoor”. That is not even a malware family at this point but instead a classification of a capability that can be found in various malware families.

Or said more simply: the list of reported RIS names includes relevant and specific names such as campaign names, more general and often unrelated malware family names, and extremely broad and non-descriptive classifications of capabilities. It was a mixing of data types that didn’t meet any objective in the report and only added confusion as to whether the DHS/FBI knows what they are doing or whether they instead just told teams in the government to “contribute anything you have that has been affiliated with Russian activity.”

 

The report contains a combination of private sector data and declassified government data

This is a much shorter critique but still an important one: there is no way to tell what data was private sector data and what was declassified government data. Different data types have different confidence levels. If you observe a piece of malware on your network communicating with adversary command and control (C2) servers, you would feel confident using that information to find other infections in your network. If someone randomly passed you an IP address without context, you might not be sure how best to leverage it, or you might just be generally cautious about doing so, as it could generate alerts of a non-malicious nature and waste your time investigating them. In the same way, it is useful to know what is government data from previously classified sources and what is data from the private sector, and more importantly from whom in the private sector. Organizations will have different trust or confidence levels in the different types of data and where it came from. Unfortunately, this is entirely missing. The report does not source its data at all. It’s a random collection of information and, in that way, is mostly useless.

Or said more simply: always tell people where you got your data; separate it from your own data, which you have a higher confidence level in because you observed it first hand; and if you are using other people’s campaign names, data, analysis, etc., explain why, so that analysts can do something with it instead of treating it as random situational awareness.

 

The report will help defenders identify and block Russian malware – this is specifically declassified government data not private sector data

The lead-in to the report specifically noted that information about the Russian malware was newly declassified and would be given out; this is contrary to other statements that the information was part private sector and part government data. When looking through the technical indicators, though, there is little context to the information released.

In some locations in the CSV the indicators are IP addresses with a request to network administrators to look for them, and in other locations there are IP addresses with just the country they were located in. This information is nearly useless for a few reasons. First, we do not know what data set these indicators belong to (see my previous point: are these IPs for “Sandworm”, “APT28”, “Powershell”, or what?). Second, many (30%+) of these IP addresses are mostly useless, as they are VPS hosts, Tor exit nodes, proxies, and other non-descriptive internet traffic sites (you can use this type of information, but not in the way it is positioned in the report, and not well without additional information such as timestamps). Third, IP addresses as indicators, especially when associated with malware or adversary campaigns, must contain information around timing, i.e., when were these IP addresses associated with the malware or campaign and when were they in active usage? IP addresses and domains are constantly getting shuffled around the Internet and are mostly useful when seen in a snapshot of time.
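To illustrate what “appropriate context” could look like, here is a hypothetical indicator record; the field names and values are my own invention, not the JAR’s format, and the IP is a documentation address rather than a real indicator:

```python
# A hypothetical, illustrative indicator record; the field names and values
# are mine, not the GRIZZLY STEPPE CSV's. The point is the context, not the format.
ip_indicator = {
    "indicator": "203.0.113.42",                   # documentation/example IP, not a real IOC
    "indicator_type": "ipv4",
    "associated_activity": "APT29 phishing C2",    # which campaign/group it relates to
    "first_seen": "2016-04-12T00:00:00Z",          # when it was observed in use
    "last_seen": "2016-06-30T00:00:00Z",           # when it stopped being relevant
    "infrastructure_type": "compromised server",   # vs. Tor exit node, VPS, proxy, etc.
    "source": "declassified government reporting", # vs. private sector, and from whom
    "confidence": "medium",
}
```

A record like that lets a defender scope searches to the right time window and weigh the alert appropriately; a bare IP with a country code does not.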

But let’s focus on the malware specifically which was laid out by the White House fact sheet as newly declassified information. The CSV does contain information for around 30 malicious files (Figure 5). Unfortunately, all but two have the same problems as the IP addresses in that there isn’t appropriate context as to what most of them are related to and when they were leveraged.

 

Figure 5: CSV of Indicators from the GRIZZLY STEPPE Report

What is particularly frustrating is that this might have been some of the best information if done correctly. A quick look in VirusTotal Intelligence reveals that many of these hashes were not previously tracked as associated with any specific adversary campaign (Figure 6). Therefore, if the DHS/FBI were to confirm that these samples of malware were part of RIS operations, it would help defenders and incident responders prioritize and further investigate these samples if they had found them before. As Ben Miller pointed out, this helps encourage folks to do better root cause analysis of seemingly generic malware (Figure 6).

Figure 6: Tweet from Ben Miller on GRIZZLY STEPPE Malware Hashes

So what’s the problem? Aside from the two hashes that are stated to belong to the OnionDuke family, the released hashes do not contain the appropriate context for defenders to leverage them. Without knowing what campaign they were associated with and when, there isn’t appropriate information for defenders to investigate these discoveries on their network. They can block the activity (play the equivalent of whack-a-mole) but not leverage it for real defense without considerable effort. Additionally, the report specifically said this was newly declassified information. However, looking up the samples in VirusTotal Intelligence (Figure 7) reveals that many of them were already known, dating back to April 2016.

 

Figure 7: VirusTotal Intelligence Lookup of One Digital Hash from GRIZZLY STEPPE
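For defenders who want to run the same kind of check themselves, below is a minimal sketch against the public VirusTotal v3 API (my own illustration, not anything from the report or the paid Intelligence interface; the hash is a placeholder and you would need your own API key):

```python
# Minimal sketch: look up a file hash against the public VirusTotal v3 API to see
# whether it is already known and when it was first submitted. Illustrative only;
# the hash below is the EICAR test file MD5, not an indicator from the report.
import datetime
import requests

API_KEY = "YOUR_VT_API_KEY"                      # assumption: you supply your own key
FILE_HASH = "44d88612fea8a8f36de82e1278abb02f"   # placeholder (EICAR test file)

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{FILE_HASH}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
resp.raise_for_status()
attrs = resp.json()["data"]["attributes"]

first_seen = datetime.datetime.utcfromtimestamp(attrs["first_submission_date"])
print("First submitted:", first_seen.isoformat())
print("Detections:", attrs["last_analysis_stats"])
```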

The only thing that would thus be classified about this data (note they said newly declassified, not private sector information) would be the association of this malware with a specific family or campaign instead of leaving it as “generic.” But as noted, that information was left out. It’s also not fair to say it’s all “RIS” given the DHS/FBI’s inappropriate aggregation of campaign, malware, and capability names in their “Reported RIS” list. As an example, they used one name from their “Reported RIS” list (OnionDuke), and some of the other samples might map to other entries on that list, such as “Powershell Backdoor”, which is wholly non-descriptive. Either way we don’t know, because they left that information out. Also, as a general pet peeve, the hashes are sometimes given as MD5, sometimes as SHA1, and sometimes as SHA256. It’s OK to choose whatever standard you want if you’re giving out information, but be consistent in the data format.
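On that format point, a mixed feed forces every consumer to write the same small sorting logic; here is a hypothetical sketch of what that looks like (the hashes are EICAR test-file values, not indicators from the report):

```python
# Illustrative only: hex digest length is enough to tell the three hash types
# apart when a feed mixes them. The sample values are EICAR test-file hashes.
HASH_TYPES = {32: "md5", 40: "sha1", 64: "sha256"}

def classify(hash_value: str) -> str:
    """Return the likely hash type based on hex digest length."""
    return HASH_TYPES.get(len(hash_value.strip().lower()), "unknown")

mixed_feed = [
    "44d88612fea8a8f36de82e1278abb02f",            # 32 hex chars -> md5
    "3395856ce81f2b7382dee72602f798b642f14140",    # 40 hex chars -> sha1
]
for h in mixed_feed:
    print(classify(h), h)
```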

Or more simply stated: the indicators are not very descriptive and will have a high rate of false positives for defenders who use them. A few of the malware samples are interesting and now have context (OnionDuke) to their use, but the majority do not have the required context to make them useful without considerable effort by defenders. Lastly, some of the samples were already known, and the government information does not add any value; if these were previously classified it is a perfect example of over-classification by government bureaucracy.

 

The report goes beyond indicators to include new tradecraft and techniques used by the Russian intelligence services

The report was supposed to detail new tradecraft and techniques used by the RIS and specifically noted that defenders could leverage this to find new tactics and techniques. Except it doesn’t. The report instead gives a high-level overview of how APT28 and APT29 have been reported to operate, which is very generic and similar to many adversary campaigns (Figure 8). The tradecraft and techniques presented as specific to the RIS include things such as “using shortened URLs”, “spear phishing”, “lateral movement”, and “escalating privileges” once in the network. This is basically the same set of tactics used across unrelated campaigns for the last decade or more.

 

Figure 8: APT28 and APT29 Tactics as Described by DHS/FBI GRIZZLY STEPPE Report

This description in the report wouldn’t be a problem for a more generic audience. If this were the DHS/FBI trying to explain to the American public how attacks like this are carried out, it might even be too technical, but it would be OK. The stated purpose, though, was for network defenders to discover new RIS tradecraft. With that purpose, it is not technical or descriptive enough and is simply a rehashing of common network defense knowledge. Moreover, if you read a technical report from FireEye on APT28 or APT29 you would have better context and technical information for defense than if you read the DHS/FBI document.

Closing Thoughts

The White House’s response and combined messaging from the government agencies is well done, and the technical attribution provided by private sector companies has been solid for quite some time. However, the DHS/FBI GRIZZLY STEPPE report does not meet its stated intent of helping network defenders and instead chooses to focus on a confusing assortment of attribution, non-descriptive indicators, and re-hashed tradecraft. Additionally, the bulk of the report (8 of the 13 pages) consists of general high-level recommendations that are not descriptive of the RIS threats mentioned and are not linked to what activity would help with which aspect of the technical data covered. It simply serves as an advertisement for documents and programs the DHS is trying to support. One recommendation, for Whitelisting Applications, might as well read “whitelisting is good mm’kay?” If that recommendation had been overlaid with what it would have stopped in this campaign specifically, and how defenders could then leverage that information going forward, it would at least have been descriptive and useful. Instead it reads like a copy/paste of DHS’ most recent documents; at least in a vendor report you usually only get 1 page of marketing instead of 8.

This ultimately seems like a very rushed report put together by multiple teams working with different data sets and motivations. It is my opinion and speculation that there were some really good government analysts and operators contributing to this data, and then report reviews, leadership approval processes, and sanitization processes stripped out most of the value and left behind a very confusing report trying to cover too much while saying too little.

We must do better as a community. This report is a good example of how a really strong strategic message (POTUS statement) and really good data (government and private sector combination) can be opened to critique due to poor report writing.

 

Update:

The DHS released an updated version which I thought did a great job of analysis; my analysis of it can be found here: https://www.sans.org/webcasts/104402

New Suspected Cyber Attack on Ukraine Power Grid – Advice as Information Emerges

December 19, 2016

Reporting in Ukraine has emerged indicating another suspected cyber attack on the electric grid (the first being the confirmed one in 2015). Initial reporting is often inaccurate or a small view of incidents but it’s worth cautiously watching and seeing what information emerges. Here’s what we know so far:

Reports of Suspected Cyber Attack:
Around noon on December 19th, 2016, reports began to surface related to a possible cyber attack on the Ukraine electric grid. The attack is suspected to have taken place near midnight local Ukraine time on the 17th. The Pivnichna transmission-level substations have been called out as possibly being the site attacked. This is of course concerning for numerous reasons, including the cyber attack on the Ukraine grid in December 2015 as well as ongoing traditional military actions in Ukraine. The reporting is from various Ukrainian sources, including a press release from the impacted company Kyivenergo confirming that there was an unintentional outage and that they took actions to restore operations.

Analysis:
The first 24 and often 48 hours of reporting are notoriously bad for OSINT analysts but should still be utilized. Simply leverage caution and do not present information as fact yet. At this point I would assess with low confidence that a cyber attack has occurred. This is not to say there is doubt around the event, only that there are other theories that carry equal weight until more evidence is available. However, based on the sourcing of the information (internal Ukraine sources) and the Ukrainian grid operators’ experience dealing with a similar situation last year, I have a higher trust level in the sources (thus the low confidence assessment that the attack is real). We will learn more later and it may be revealed that the outage was not related to a cyber attack; however, I am aware of an ongoing investigation by Ukrainian authorities and they are treating a cyber attack as the leading theory for the outage. I will caution again, though, that no one with direct knowledge of the event has confirmed that it is a cyber attack; only that it is the leading theory and that the disconnect was unintentional.

What Should Be Done:
Right now the best action for those not on the ground or working at infrastructure companies is to wait and see if more information is revealed. Journalists should be cautious not to infer or jump to conclusions, and those in the security community should stay tuned for more information. I would recommend journalists contact sources in the area but realize that the information is very preliminary and that those not on the ground in Ukraine will have very little to add to knowledge of the situation.

If you are in the infrastructure (ICS/SCADA) security community, it would be wise to use established channels to send decision makers a situational awareness report on the news; I would note it’s a low confidence assessment currently, due to the lack of first-hand evidence, but that it is a situation worth watching. This should be paired with security staff taking an active defense posture of monitoring the ICS network for abnormal activity. Preliminary information from the investigation underway by the Ukrainian authorities indicates that a remote attack is suspected. I would stay far away from linking this to the Sandworm attack currently (attribution right now is not possible), but I would review the methods by which they achieved the remote attack on Ukraine last year and use that information to hunt for threats. As an example, look in logs for abnormal VPN session lengths, increased frequency of use, and unusual connection request times.
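To make that last point concrete, here is a small, entirely hypothetical sketch of flagging abnormally long VPN sessions; the log records, field names, and threshold are assumptions to be tuned against your own environment, not a reference to any specific product:

```python
# Hypothetical sketch: flag abnormally long VPN sessions from parsed log records.
# The records, field names, and threshold are illustrative assumptions only.
from statistics import mean, stdev

# Imagine these were parsed from your VPN concentrator logs: (user, session_minutes)
sessions = [
    ("operator1", 42), ("operator2", 55), ("engineer1", 38), ("operator1", 47),
    ("vendor_support", 61), ("engineer2", 50), ("operator3", 44),
    ("vendor_support", 610),   # an outlier worth a second look
]

durations = [minutes for _, minutes in sessions]
avg, sd = mean(durations), stdev(durations)
threshold = avg + 2 * sd   # a simplistic cut-off; tune it to your own baseline

for user, minutes in sessions:
    if minutes > threshold:
        print(f"Review: {user} held a VPN session for {minutes} minutes "
              f"(baseline ~{avg:.0f} +/- {sd:.0f})")
```

The same idea applies to connection frequency and connection times of day; the point is to baseline legitimate remote access and investigate what falls outside it.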

If you happen to be a customer of Dragos, Inc. you will have received a notification already with some recommendations for strategic, operational, and tactical level players. Check your portal and be on the lookout for a briefing request coming from us if you would like to attend remotely. For the wider community, ensure that you are wary of phishing attempts taking advantage of this possible attack.

In Closing:
My chief recommendation is for everyone to avoid alarmism and utilize this as an opportunity to review logs and information from the ICS and search for TTPs we’ve seen before, such as remote usage of the ICS through legitimate accounts, VPNs, and remote desktop capabilities. If this attack turns out to be real, it is unlikely to be anything novel that couldn’t have been detected. It’s important to remember that defense is doable – now go do it.

Threats of Cyber Attacks Against Russia: Rationale on Discussing Operations and the Precedent Set

November 6, 2016

Reports that the U.S. government has military hackers ready to carry out attacks on Russian critical infrastructure have elicited a wide range of responses on social media. After I tweeted the NBC article a number of people responded with how stupid the U.S. was for releasing this information, or what poor OPSEC it was to discuss these operations, and even how this constitutes an act of war. I want to use this blog to put forth some thoughts of mine on those specific claims. However, I want to note in advance that this is entirely my opinion. I wouldn’t consider this quality analysis or even insightful commentary, but instead just my thoughts on the matter that I felt compelled to share since I work in critical infrastructure cyber security and was at one point a “military hacker.”

The Claim

The claim stems from an NBC article and notes that a senior U.S. intelligence official shared top-secret documents with NBC News. These top-secret documents apparently indicated that the U.S. has “penetrated Russia’s electric grid, telecommunications networks and the Kremlin’s command systems, making them vulnerable to attack by secret American cyber weapons should the U.S. deem it necessary.” I’m going to make the assumption that this was a controlled leak given the way it was presented. Additionally, I make this assumption because of the senior officials who were interviewed for the wider story, including former NATO commander (ret.) ADM James G. Stavridis and former CYBERCOM Judge Advocate (ret.) COL Gary Brown, who likely would not have touched a true “leak”-driven story without some sort of blessing to do so. I.e., before anyone adds that this is some sort of mistake: this was very likely authorized by the President at the request of senior officials or advisers such as the Director of National Intelligence or the National Security Council. The President is the highest authority for deeming material classified or not, and if he decided to release this information it’s an authorized leak. Going off of this assumption, let’s consider three claims that I’ve seen recently.

The U.S. is Stupid for Releasing This Information

It is very difficult to know the rationale behind actions we observe. This is especially true in cyber intrusions and attacks. If an adversary happens to deny access to a server, did they intend to, or was it accidentally brought down while performing other actions? Did the adversary intend to leave behind references to file paths and identifying information, or was it a mistake? These debates around intent and observations are a challenge that many analysts must carefully overcome. In this case it is no different.

Given the assumption that this is a controlled leak, it was obviously done with the intention of one or more outcomes. In other words, the U.S. government wanted the information out, and their rationale is likely as varied as the members involved. While discussing a “government” it’s important to remember that the decision was ultimately in the hands of individuals, likely two dozen at the most. Their recommendations, biases, views on the world, insight, experience, etc. all contribute to what they expect the output of this leak to manifest as. This makes it even more difficult to assess why a government would do something, since it’s more important to know the key members in the Administration, the military, and the Intelligence Community and their motivations than the historical understanding of government operations and similar decisions. Considering the decision was likely not ‘stupid’ and instead served some intended purpose, let’s explore what two of those purposes might be:

Deterrence

I’m usually not the biggest fan of deterrence in the digital domain, as it has so far not been very effective and the qualities needed for a proper deterrent (a credible threat and an understood threshold) are often lacking. Various governments lament about red lines and actions they might take if those red lines are crossed, but what exactly those red lines are and what the response will be if they are crossed is usually never explored. Here, however, the U.S. government has stated a credible threat: the disruption of critical infrastructure in Russia (the U.S. has shown before that it is capable of doing this). They have combined this with a clear threshold of what they do not want their potential adversary to do: do not disrupt the elections. For these reasons my normal skepticism around deterrence is lessened. However, in my own personal opinion this is potentially a side effect and not the primary purpose, especially given the form of communication that was chosen.

Voter Confidence

Relations between Russia and the U.S. during this election have been tense. Posturing and messaging between the two states has taken a variety of forms, both direct and indirect. This release to NBC, though, is interesting as it would be indirect messaging if positioned toward the Russian government but direct messaging if intended for U.S. voters. My personal opinion (read: speculation) is that it is much more intended for the voters. At one point in the article NBC notes that Administration officials revealed to them that they delivered a “back channel warning to Russia against any attempt to influence next week’s vote”. There’s no reason to reiterate a back channel message in a public article unless the intended audience (in this case the voters) wasn’t aware of the back channel warning. The article reads as an effort by the Administration to tell the voters: “don’t worry and go vote, we’ve warned them that any effort to disrupt the elections will be met with tangible attacks instead of strongly worded letters.”

It’s really interesting that this type of messaging to the American public is needed. Cyber security has never been such a mainstream topic before, especially not during an election. This may seem odd to those in the information security community who live with these discussions on a day-to-day basis anyway. But coverage of cyber security has never before been mainstream media worthy for consistent periods of time. CNN, Fox, MSNBC, and the BBC have all been discussing cyber security throughout the recent election season, ranging from the DNC hacks to Hillary’s emails. That coverage has gotten fairly dark, though, with CNN, NBC, Newsweek, and New York Times articles like this one and prime time segments telling voters that the election could be manipulated by Russian spies.

This CNN piece directly calls out the Kremlin for potentially manipulating the elections in a way that combines it with Trump’s claims that the election is rigged. This is a powerful combination. There is a significant portion of Trump’s supporters who will believe his claim of a rigged election, and in conjunction with the belief that Russia is messing with the election it’s easy to see how a voter could become disillusioned with the election. Neither the Democrats nor the Republicans want fewer voters to turn out, and (almost) all of those on both sides want the peaceful transition of power after the election as has always occurred before. Strong messaging from the Administration and others into mainstream news media is important to restore voters’ confidence both in the election itself and in the manner in which people vote.

Unfortunately, it seems that this desire is being accidentally countered by some in the security community. In very odd timing, Cylance decided to release a press release on vulnerabilities in voting machines on the same day, unbeknownst to them, as the NBC article. The press release stated that the intent of the release was to encourage mitigation of the vulnerabilities, but with 4 days until the election, as of the article’s release, that simply will not be possible. The move is likely very well intended but unlikely to give voters much confidence in the manner in which they vote. I’ll avoid a tangent here, but it’s worth mentioning the impact security companies can have on larger political discussions.

The Leak is Bad OPSEC

I will not spend as much time on this claim as I did the previous one, but it is worth noting the reaction that releasing this type of information is bad operational security. Operational security is often very important to ensure that government operations can be coordinated effectively without the adversary having the advance warning required to defend against the operation. However, in this case the intention of the leak is likely much more about deterrence or voter confidence, and therefore the operation itself is not the point. Keeping the operation secret would not have helped either potential goal. More importantly, compromising information systems is not something that has ever been seen as insurmountably difficult. For the U.S. government to reveal that it has compromised Russian systems does not magically make those systems more secure now. Russian defense personnel do not have anything more to go off of than before in terms of searching for the compromise, they likely already assumed they were compromised, and looking for a threat and cleaning it up across multiple critical infrastructure industries and networks would take more than 4 days even if they had robust technical indicators of compromise and insight (which the leak did not give them). The interesting part of the disclosure is not the OPSEC but the precedent it sets, which I’ll discuss in the next section.

The Compromises are an Act of War

Acts of war are governed under Article 2(4) of the United Nations Charter, which addresses the threat or use of force between states. The unofficial rules regarding war in cyberspace are contained in the Tallinn Manual. In neither of these documents is the positioning of capabilities to do future damage considered an act of war. More importantly, the NBC article notes that the “cyber weapons” have not been deployed yet: “The cyber weapons would only be deployed in the unlikely event the U.S. was attacked in a significant way, officials say.” Therefore, what is being discussed is cyber operations that have gained access to Russian critical infrastructure networks but not positioned “weapons” to do damage yet. Intrusions into networks have never been seen as an act of war by any of the countries involved in such operations. So what’s interesting about this? The claim by officials that the U.S. had compromised Russian critical infrastructure networks including the electric grid years ago.

For years U.S. intelligence officials have maintained that Russian, Chinese, Iranian, and at times North Korean government operators have been “probing” U.S. critical infrastructure such as the power grid. The pre-positioning of malware in the power grid has long been rumored and has been a key concern of senior officials. The acknowledgment in a possibly intentional leak that the U.S. has been doing the same for years now is significant. It should come as no surprise to anyone in the information security community but as messaging from senior officials it does set a precedent internationally (albeit small given that this is a leak and not a direct statement from the government). Now, if capabilities or intrusions were found in the power grid by the U.S. government in a way that was made public the offending countries could claim they were only doing the same as the U.S. government. In my personal experience, there is credibility to claims that other countries have been compromising the power grid for years so I would argue against the “U.S. started it” claim that is sure to follow.  The assumption is that governments try to compromise the power grid ahead of time so that when needed they can damage it for military or political purposes. But the specific compromises that have occurred have not been communicated publicly by senior officials nor have they been done with attribution towards Russia or China. The only time a similar specific case was discussed with attribution was against Iran for compromising a small dam in New York and the action was heavily criticized by officials and met with a Department of Justice indictment.  Senior officials’ acknowledgment of U.S. cyber operations compromising foreign power grids for the purpose of carrying out attacks if needed is unique and a message likely heard loudly even if later denied. It would be difficult to state that the leak will embolden adversaries to do this type of activity if they weren’t already but it does in some ways make the operations more legitimate. Claiming responsibility for such compromises while indicting countries for doing the same definitely makes the U.S. look hypocritical regardless of how it’s rationalized.

Parting Thoughts

My overall thought is that this information was a controlled leak designed to help voters feel more confident, both in going to cast their ballots and in the overall outcome. Some level of deterrence was likely a side effect that the Administration sought. But no, this was not simply a stupid move nor was it bad OPSEC or an act of war. I also doubt it is simply a bluff. However, there is some precedent set and pre-positioning access to critical infrastructures around the world just became a little more legitimate.

One thing that struck me as new in the article though was the claim that the U.S. military used cyber attacks to turn out the lights temporarily in Baghdad during the 2003 Iraq invasion. When considering the officials interviewed for the story and the nature of the (again, possibly) controlled leak that is a new claim from senior government officials. There was an old rumor that Bush had that option on the table when invading Iraq but the rumor was the attack was cancelled for fear of the collateral damage of taking down a power grid. One can never be sure how long “temporary” might be when damaging such infrastructure. The claim in the article that the attack actually went forward would make that the first cyber attack on a power grid that led to outages – not the Ukrainian attack of 2015 (claims of a Brazilian outage years earlier were never proven and seem false from available information). However, the claim is counter to reports at the time that power outages did not occur during the initial hours of the invasion. Power outages were reported in Iraq but after the ending of active combat operations and looters were blamed. If a cyber attack in Iraq ever made sense militarily it would not have made as much sense after the initial invasion.

I’ve emailed the reporter of the story asking what the source of that claim was and I will update the blog if I get an answer. It is possible the officials stated this to the reporters but misspoke. In my time in the government it was not a rare event for senior officials to confuse details of operations or hear myths outside of the workplace and assume them to be true. Hopefully, I can find out more as that is a historically significant claim. Based on what is known currently I am skeptical that outages following the initial Iraq invasion in 2003 were due to a cyber attack.

A Collection of Resources for Getting Started in ICS/SCADA Cybersecurity

August 28, 2016

*Last Updated Jan 2023*

I commonly get asked by folks what approach they should take to get started in industrial control system (ICS) or Operational Technology (OT) cybersecurity. Sometimes these individuals have backgrounds in control systems, sometimes they have backgrounds in security, and sometimes they are completely new to both. I have put this blog together to document my thoughts on some good resources out there to pass along to people who are interested. Do not attempt to do everything at once; it’s a good collection to refer back to in an effort to polish up skills or learn a new industry. There are also many skills that may not immediately be relevant to your job, but I believe these topics all work together (ranging from analysis of threats to understanding the physical process of a gas turbine). Rest assured, no matter how ill-prepared you might feel in getting started, realize that by having the passion to ask the question and start down the path you are already steps ahead of most. We need passionate people in the industry; everything else can be taught.

General Thoughts:

IT and OT/ICS cybersecurity can be very different, though there are definitely transferable skills between both fields. Oftentimes folks look at ICS cybersecurity and think it’s different because there are legacy systems, different network protocols, and purpose-built systems like programmable logic controllers (PLCs). While those are all true, in reality the biggest difference is the mission function of the systems. There are unique purposes of the systems, unique impacts in failure, unique risks, and unique threats – so applying the same cybersecurity practices meant for a different environment, with different impacts, against different risks and threats seems counterintuitive. A broad generalization that can help understand this is that in IT cybersecurity there is a large focus on the system and data. We put a lot of protection and focus on the system (patching, EDR, passwords, application whitelisting, etc.) because if an adversary gets on a system, escalates privileges, etc. it’s a bad day. We also put a lot of focus in IT cybersecurity on data (encryption in transit, encryption at rest, data loss prevention, etc.), especially with the need to protect data, people’s personal information, credit cards and financial transactions, etc. But in ICS cybersecurity it’s more about systems of systems and physics. Sure, we care about some data and some systems. But in reality it’s more about an adversary’s ability to take System 1 and manipulate System 2 to cause a physical manifestation in System 3. As an example, an adversary that knows how to access an Engineering Workstation to reprogram the logic on a PLC to cause an over-pressurization event in a pipeline is going to be very dangerous whether or not they use vulnerabilities, exploits, and malware to do it or just native functionality and expertise. And physics is what we care about for what is technically possible or not possible on those systems in the first place, with a large focus on ensuring the safety and reliability of people, the environment, and the operations.

I would advise any new person starting in the field to spend time really focusing on the “mission” first, i.e., what is it that the plant or site is trying to accomplish? What are they in business or production for? Then apply the cybersecurity that makes sense against the risks that actually impact the mission. Coming at the problem with what “right security” looks like before understanding the business and the mission purpose will lead you astray quickly. But if you understand the point of what the operations folks are trying to accomplish it’ll allow you to be a valuable partner.

Optional Pre-Reqs

It’s always good to pick up a few skills regarding the fundamentals of computers, networks, and systems in general. I would recommend trying to pick up a scripting language as well; even if you don’t find yourself scripting a lot, understanding how scripting works will add a lot of value to your skill set. A short illustrative script follows the list below.

  • Learn Python the Hard Way
    • Learn Python the Hard Way is a great free online resource to teach you, step-by-step, the Python scripting language. There are a lot of different opinions about scripting languages. In truth, most of them have value in different situations so I’ll leave it to you to pick your own language (and I won’t tell you that you’re wrong for not learning Python, even though you are). Another good programming resource is Code Academy.
  • MIT Introduction to Computer Programming
    • MIT’s open courseware is a treasure for the community. It shocks me how many people do not take advantage of free college classes from top universities. This is the Introduction to Computer Science and Programming course. It should be taken at a slow pace but it’ll give you a lot of fundamental skills.
  • MIT Introduction to Electrical Engineering and Computer Science
    • Another MIT open course but this time focused on electrical engineering. This is a skill that will help you understand numerous types of control systems better as well as have a better grasp on how computers work.
  • Microsoft Virtual Academy
    • Microsoft Virtual Academy can be found at various locations on YouTube. I have linked to the first one; I would recommend browsing through the topic list for everything from fundamentals of networking, to fundamentals of computers, to how the Internet works.
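
To make the value of scripting concrete, here is a minimal sketch of the kind of small utility you will quickly be able to write after working through resources like the ones above. It simply counts the IPv4 addresses that appear in a plain-text log file; the file name and the log format are placeholders of my own, so adjust them to whatever data you have on hand.

```python
# Minimal illustrative script: count the IPv4 addresses that show up most often
# in a plain-text log file. The file name and log format are placeholders.
import re
import sys
from collections import Counter

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def top_talkers(path, n=10):
    """Return the n most common IPv4 addresses found in the file at path."""
    counts = Counter()
    with open(path, "r", errors="ignore") as handle:
        for line in handle:
            counts.update(IP_PATTERN.findall(line))
    return counts.most_common(n)

if __name__ == "__main__":
    # Usage: python top_talkers.py firewall.log
    log_file = sys.argv[1] if len(sys.argv) > 1 else "firewall.log"
    for ip, count in top_talkers(log_file):
        print(f"{ip}\t{count}")
```

It is only a handful of lines, but being able to throw together something like this on demand is exactly the skill the courses above build toward.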

Intro to Control Systems

Control systems run the world around us. Escalators, elevators, types of medical equipment, steering in our cars, and building automation systems are types of control systems you interact with daily. Industrial control systems (ICS) are industrial versions of control systems found in locations such as oil drilling, gas pipelines, power grids, water utilities, petrochemical facilities, and more. This section will go over some useful resources and videos to learn more about industrial control systems and ultimately “the mission” of some of the sites. If you know how a wastewater treatment facility process works, as an example, you’re then better able to understand the instrumentation and automation around it and the cybersecurity that would be relevant to that site.

  • The PLC Professor
    • PLC Professor and his website plcprofessor.com contains a lot of great resources for learning what programmable logic controllers (PLCs) and other types of control systems and their logic are and how they work. Some resources are free while others are paid. At some point, getting a physical kit as a trainer to learn on is going to be a requirement.
  • Control System Basics
    • This is a great video explaining control system basics including the type of logic these systems use to sense and create physical changes to take action upon.
  • What is SCADA?
    • You’ve no doubt heard the term SCADA; if you haven’t, you will. It stands for Supervisory Control and Data Acquisition and is a type of ICS. This video is a nice basic approach to explaining SCADA.
  • Department of Energy – Energy 101
    • The Department of Energy has a series of Energy 101 videos to explain basic concepts of different types of energy generation, sources, etc. It’s a fantastic series that should excite you about the field while explaining key terms and concepts.
  • Wastewater Treatment Explanation Video
    • We all need wastewater treatment facilities and learning about them helps you understand how control systems work and just how complex simple tasks in life can be (if we didn’t have control systems). These types of videos are important for you to watch and learn so that you get exposed to different industries. ICS is not really a community, it’s a collection of communities.
  • Waste Water – Flush to Finish
    • Another good wastewater explanation video.
  • Refinery Crude Oil Process
    • This is a video explaining a refinery crude oil process. If these types of videos don’t excite you to some extent you may be in the wrong career field. The world around us is magnificent and learning different industries will start to help you ask the right questions which will lead to your education on the subject.
  • Natural Gas Processing
    • This is an older video (the industry has definitely become more advanced than represented here) but extremely interesting on how natural gas is harvested, processed, and transferred. Think about all the control systems that have to go into this seemingly simple process.
  • How a Compressor Station Works
    • One particularly interesting (and historically difficult to secure) portion of the ICS community is the natural gas pipeline. This video talks about natural gas to some extent but is really focused on compressor stations. Compressor stations as remote sites offer numerous opportunities and challenges to defenders. In short – they’re pretty cool.
  • Chemical Engineering YouTube Channel
    • A great series of videos explaining and showing different components of chemical processing.
  • Steel from Start to Finish
    • This is an example of how steel is made. The video, like the others in this section shows an important process that can help you understand all that goes into control system security. It’s important to know the real world impacts and applications of the processes we are trying to defend to fully understand how important safety and reliability are as the main component of industrial automation.
  • How It’s Made: Uranium Part 1 and Part 2
    • Uranium mining is especially important for the nuclear power industry. There are a lot of misconceptions around uranium and its mining; many aspects of this type of mining are similar to other types of mining but the purification, transportation, manufacturing, and utilization of uranium (highlighted in part 2 of the videos above) are particularly interesting and unique. There’s an amazing amount of industrial control systems involved in these processes.
  • Uranium Mining
    • There are multiple ways to perform uranium mining, here is an alternative way with a video by the Nuclear Energy Institute.
  • Nuclear Reactor Explained
    • This is a simplistic but extremely easy to digest explanation and animation of a nuclear reactor. Nuclear energy has a bad rap due to pop culture but is a highly clean and safe form of energy. It’s really useful to understand this process and how these systems are designed and, ideally, isolated.
  • Nuclear Power Station
    • Building from the last video, here’s another video diving deeper into nuclear power. What you should focus on here is the design and engineering that go into the safety systems. Safety systems can be bypassed, there are no ‘unhackable’ things, but this helps you to understand just how these systems are designed to be safe by default even if not built with security in mind. The Fukushima event can be observed as a worst case and extremely unlikely scenario. Learning from it will be important; here you’ll find a good video on it.
  • Thermal Power Plant
    • There are many ways to generate power; this video explains thermal power and the complexity of the environment.
  • SCADA Utility 101
    • Rusty Williams has just the right type of southern speaking style which makes an audience want to learn more. The guy is awesome, the video explains SCADA from an electric utility perspective, and this is a must watch.
  • Electric Generation and Transmission
    • Didn’t get enough of Rusty? Here’s another video of him explaining the generation and transmission of electricity.
  • Copper Mining
    • There are many differences in mining depending on what you are mining, but many of the fundamentals of exploration, extraction, and processing are similar across numerous industries. This video on copper mining (skip to about 1:30 to get past the specific mine’s financials and marketing) gives a nice quick high-level view of some of the process and equipment you’d find in the mining industry.
  • Gold Mining
    • Whereas the initial mining fundamentals can be the same, as noted there are many differences including how prospecting is done and how you process the extracted minerals. Gold mining has a number of interesting aspects worth learning about.
  • Cyanidation for Extraction Processes (Animated Video and a Real Life Example)
    • Cyanide is mostly known for its form as hydrogen cyanide but in other forms (such as sodium, potassium, or calcium cyanide) it is useful in extracting precious minerals from ore and often used in gold processing. The videos above are quick animated and real life examples of the cyanidation process. The Wikipedia article here is also very useful.
  • Fundamentals of Manufacturing Processes
    • Manufacturing makes the world around us. The manufacturing industry is broad, from auto, to food and beverage, to chemical, to pharmaceutical, and more. This is an MIT course that’s hosted online for free. It’s a 10-week course but it is fantastic, going through a wide variety of types of manufacturing.
  • Chemical Industry Process Equipment
    • This video is unlike the others in that it does not really show the full engineering process. However, the video talks through a wide variety of equipment that you would find in the chemical industry. I find this video useful to learn about a variety of equipment, much of which you could find in numerous industries. I would recommend taking terms you’re unfamiliar with and looking up Wikipedia articles for each after the video.
  • Beverage Manufacturing (Coca-Cola)
    • Here’s a great example of a manufacturing video focused on beverages, in this case Coca-Cola. The food and beverage industry and its manufacturing processes are wonderful forms of batch processing. This video is obviously a bit of a promotion as well but there’s great explanations throughout the video including how to make bottles (800 bottles a minute!), how to make cans, how to clean cans with sulfuric acid, and of course how to fill them with coke (1,700 cans per minute!).
  • Control Lectures
    • This is a fantastic series by Brian Douglas which covers a wide range of lectures on control systems in a very easy to process way.
  • Safety Systems
    • It’s good to get familiar with safety systems as well. Safety systems can either be active or passive. As an oversimplification, think of these as systems that take control of the process when an unsafe event occurs and help to regulate it or shut it down safely. It can also be the product of good engineering instead of a dedicated system. Either way, there is a trend in the community to integrate safety systems into one device, where the control device is also the safety device. This has cost savings but horrendous cyber security consequences and thus horrible safety consequences.
  • Safety Valves
    • Building on your understanding now of safety systems here’s an example of a safety valve in a process and how it can work to keep the operations, and more importantly the people around it, safe.
  • Industrial Disaster Explanation Videos
    • The U.S. Chemical Safety and Hazard Investigation Board has a number of videos explaining industrial disasters. This is an important resource to understand what can go wrong in industrial automation regardless of the cause (these are not cyber related but are important to understand as things that cyber could potentially cause if we are not careful). In IT, if things go wrong people do not generally die – in ICS death, injury, and environmental harm are very real concerns.

Intro to Computer and Network Security

There are a lot of resources in the form of papers below (especially the SANS Reading Room) which are all great. However, you really need to get hands on, so many of the resources are focused on tools and data sets. Try to read up as much as possible and then dive deeply into hands-on learning.

  • The Sliding Scale of Cyber Security
    • I wrote this paper specifically to address the nebulous nature of “cyber security.” When people say they specialize in cyber security, what exactly does that mean? I put forth that there are 5 categories of investment that can be made. The prioritization for the value towards security should be towards the left hand side of the scale. It is ok to invest in multiple categories at once but understand the true return on investment you’re getting versus the cost.
  • VMWare
    • You’ll want to be able to set up virtual machines (VMs) to get hands on with files and various security tools. VMWare is a great choice as is VirtualBox. VMWare has a free version you’ll want to use (Player). Don’t worry about getting Workstation or Player Pro until later when you are more experienced and want to save snapshots (copies of your VM to revert back to). Below you’ll find a sample video on VMs; feel free to Google around for better understanding.
  • Security Onion
    • You’re going to want to get hands on with the files presented in this guide; Security Onion is an amazing collection of free tools to do just that with a focus on network security monitoring and traffic analysis.
  • SANS’ SIFT
    • If you’re super cool you’ll want to get into forensics at some point; the SIFT VM from SANS is a collection of tools you’ll need to get started.
  • REMnux
    • Before you try out reverse engineering malware (REM) you’ll want to have a safe working environment to do so. This is not a beginner topic but at some point you’ll likely want to examine malware, Lenny’s REMnux VM is the safe place to do that.
  • Malware Traffic Analysis
    • Brad’s blog on malware traffic analysis is one of the best resources in the community. It combines sample files with his walkthroughs of what they are and how to deal with them. You can learn a lot this way very quickly.
  • Open Security Training
    • This website is dedicated to open (free) security training. There are a number of qualified professionals who have dedicated time to teach things from the basics of security to advanced reverse engineering concepts. You could spend quite some time on this website’s courses and all of them would make you more capable in this field. There are often full virtual machines (VMs), slides, and videos for the courses.
  • Sample PCAPs from NETRESEC
    • These packet capture samples are invaluable to learning how our systems interact on the network. Take a tool like Wireshark and analyze these files to get familiar with them and the practice (Wireshark will continually be your friend in any field you specialize in). A short scripting sketch for summarizing a capture follows this list.
  • DEFCON Capture the Flag Files
    • DEFCON has made available their files (and often times walkthroughs) for their capture the flag contests. These range from beginner to advanced concepts in offensive security practices such as red teaming. Learning how to break into systems and how they fail is great for defense. It’s not required but it can be helpful.
  • Iron Geek
    • This is an invaluable collection of videos from conferences around the community. If you’re looking for a specific topic it’s a good idea to search these conference videos. Felt like you missed out on the last decade of security? Don’t worry, most of it is captured here.
  • SANS Reading Room
    • The SANS Institute is the largest and most trusted source of cyber security training. Their Reading Room is a free collection of papers written by students and instructors covering almost every topic in security.
  • Honeynet Project
    • Consider this a capstone exercise. Read up on honeypots and learn to deploy a honeypot such as Conpot. The idea is that to run a honeypot correctly you’ll have to learn about safeguarding your own infrastructure, setting up proxies and secure tunnels, managing cloud based infrastructure such as an EC2 server, performing traffic analysis on activity in the honeypot, malware analysis on discovered capabilities, and eventually incident response and digital forensics off of the data provided to explore the impact to the system. Working up to this point and then running a successful honeypot for any decent length of time really helps develop and test out a wide range of skills in the Architecture, Passive Defense, Active Defense, and (potentially in the form of Threat Intel) Intelligence categories of the Sliding Scale of Cyber Security.
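
As a concrete starting point for the NETRESEC packet captures mentioned above (and eventually the traffic your honeypot generates), here is a minimal sketch using the Scapy library to summarize who is talking to whom in a capture. The capture file name is a placeholder of my own; Wireshark will show you far more detail, but a few lines of scripting like this help you get comfortable handling the data yourself.

```python
# Minimal sketch using Scapy (pip install scapy) to summarize a packet capture.
# "sample.pcap" is a placeholder; point it at one of the NETRESEC sample files.
from collections import Counter

from scapy.all import IP, rdpcap

def conversation_counts(pcap_path):
    """Count packets per (source IP, destination IP) pair in a capture."""
    pairs = Counter()
    for pkt in rdpcap(pcap_path):
        if IP in pkt:
            pairs[(pkt[IP].src, pkt[IP].dst)] += 1
    return pairs

if __name__ == "__main__":
    for (src, dst), count in conversation_counts("sample.pcap").most_common(10):
        print(f"{src} -> {dst}: {count} packets")
```

Once the top talkers surprise you, open the same capture in Wireshark and dig into why – that back-and-forth between scripting and tooling is where the learning happens.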

Intro to Control System Cyber Security

Cybersecurity is not a new topic but in ICS it is mostly unexplored. The hardest part for most folks is learning who to listen to and what resources to read. There are a lot of “experts” out there who will quickly lead you astray; look at people’s resumes to see if they had the opportunity to do what they are speaking to you about. Just because they don’t have experience doesn’t mean they are necessarily wrong, but it’s an easy check. As an example, if someone calls themselves a “SCADA Security Guru” or something like a “thought leader” but they’ve only ever been a Chief Marketing Officer of an IT company, that should be a red flag. It is important to be very critical of information in this space but continually push forward to try to make the community better. Below are some trusted resources to help you on your journey.

  • S4’s ICS Onramp Series
    • Fantastic collection of quick-hit videos from some of the best known folks in the industry walking through the things you need to know as an “onramp” experience for new folks to our community.
  • An Abbreviated History of Automation and ICS Cybersecurity
    • This is a great SANS paper looking at the background on ICS cybersecurity. Well worth the read to make sure you understand many of the events that have occurred over the past twenty years and how they’ve inspired security in ICS today.
  • SANS ICS Library
    • This is the SANS ICS library which contains a number of posters and papers to get you started. Reference the blog as well for good explorations of topics. I write the Defense Use Case series as well which explores real and hyped up ICS attacks and lessons learned from them.
  • SCADAHacker Library
    • Joel has a fantastic collection of papers on ICS security, standards, protocols, systems, etc. Lots of valuable content in this collection.
  • The ICS Cyber Kill Chain
    • The attacks we are concerned most with on ICS take a different approach than traditional IT. This is a paper I wrote with Michael Assante exploring this and detailing the steps an adversary needs to take to accomplish their goals.
  • The Five ICS Cybersecurity Controls
    • There are a lot of cybersecurity controls that can be applied and many good standards and frameworks; however, it can be overwhelming and unrealistic to try to do everything. Tim Conway and I wrote this paper after analyzing all the known ICS cyber threat groups and attacks with a focus on what would be the best strategy and approach for the “basics” of what every organization can do to be efficient and well prepared.
  • Analyzing Stuxnet (Windows Portion)
    • This is Bruce Dang’s talk at the 27th CCC in Germany on his exploration of analyzing Stuxnet. He was at Microsoft and was one of the first researchers to analyze it. This is a good understanding of the Windows portion of analysis. I show this video even though it’s a bit more advanced to highlight that there is often an IT side and an operational technology (OT) side of analysis.
  • Analyzing Stuxnet (ICS Portion)
    • Ralph Langner was responsible for deep diving into the ICS payload portion of Stuxnet. This talk gives a good understanding of the OT side of the analysis.
  • To Kill a Centrifuge – Stuxnet Analysis
    • This is Ralph Langner’s excellent paper exploring the technical details of the Stuxnet malware and most importantly the ICS-specific payload and impact. It is a good idea to read through the paper and Google the terms in the paper you do not understand.
  • SANS ICS Defense Use Case #5 – Ukraine Power Grid Attack
    • This is a paper I wrote with Michael Assante and Tim Conway, released through the E-ISAC, on our analysis of the Ukraine power grid attack in 2015. There are also recommendations for defense at each level of the ICS kill chain (applying 1 control is never enough to stop attacks).
  • CRASHOVERRIDE – Analysis of the Threat to Electric Grid Operations
    • Following the attack on Ukraine’s grid in 2015, there was an effort by the adversary to make their efforts more scalable with the added automation of malicious software. The malware leveraged in the Ukraine 2016 cyber attack (second ever cyber attack to cause loss of load in an electric system) was called CRASHOVERRIDE.
  • Anatomy of an Attack: Detecting and Defeating CRASHOVERRIDE
    • Much of the information around CRASHOVERRIDE wasn’t made immediately available due to the sensitivity of what happened and the desire to not have the tradecraft proliferate. Once more information was being made known though, Joe Slowik, an intelligence analyst at Dragos, published the findings behind the adversary ELECTRUM which was responsible for CRASHOVERRIDE.
  • TRISIS Malware: Analysis of Safety System Targeted Malware
    • In 2017 there was an attack on a Saudi Arabian petrochemical company. Dragos and FireEye completed analyses of the malware (FireEye called it TRITON and Dragos called it TRISIS; they did not coordinate with each other and did not know each firm was working on the malware analysis until a week or so before publication). Despite initial reporting in media by parties not involved in the analysis, Saudi Aramco was not the victim of the attack. Saudi Aramco was actually the incident response team that went and helped out the facility.
  • PIPEDREAM – The Most Flexible & Capable ICS Malware To Date
    • In 2022 PIPEDREAM was discovered as the first cross-industry scalable and repeatable ICS malware. In this talk I go through the capability and what we are allowed to say about it while focusing people on defensive strategies against it. PIPEDREAM was truly a game changer for the industry.
  • CHERNOVITE’s PIPEDREAM
    • This is the whitepaper by the Dragos team on their analysis of PIPEDREAM.
  • The Industrial Cyber Threat Landscape
    • This is the testimony I gave in 2018 to the Committee on Energy and Natural Resources of the United States Senate. It contains recommendations for the community and a discussion of the cyber threat landscape.
  • ICS Threat Intelligence: Moving from the Unknowns to a Defended Landscape
    • This is a talk I did at the SANS ICS Summit that gets into why our threat landscape is largely unknown, what we can do about it, and how we can really move the community forward by incorporating intelligence instead of theoretical best practices.
  • Perfect ICS Storm
    • Glenn wrote a great paper looking at the interconnectivity of ICS and the networks around them with considerations on how it impacts monitoring and viewing the control systems.
  • Network Security Monitoring in ICS 101
    • Here is a great intro talk on network security monitoring in an ICS by Chris Sistrunk at DEFCON 23. Network security monitoring is exceptionally useful in ICS because it can be done with minimal data sets and passively which works inside the confines of the safety and reliability requirements of an ICS network.
  • Dragos Webinars and Blogs
    • The Dragos webinars and blogs are highly informative on performing threat analysis, defense, and response as it pertains to ICS cyber threats. They are very rarely marketing or promotional and far more content-driven.
  • S4 Videos
    • The S4 conference run by Dale Peterson is a great community resource. He has posted a number of the conference presentations which will give you a great look at the ICS security community especially from the researcher perspective.
  • Defense Will Win
    • Dale Peterson’s excellent S4 talk that has an upbeat attitude of “defense will win.” This is something I completely agree with and for a few years now I have been championing the phrase “Defense is Doable” to help folks not get down when it comes to ICS cyber security. It may seem like the hardest challenge out there but it’s worthwhile and these are the most defensible environments on the planet; maybe not the most defended – but we will get there.
  • Dragos Year in Review 2017 and the following years
    • Each year Dragos puts out a year in review that covers threats, vulnerabilities, and lessons learned across incident response and assessments. They were made to provide ground-truth metrics and stats to the community about what is going on around the community. They are light on marketing language and focused on sharing insights useful to the community.
  • The Industrial Cyberthreat Landscape
    • My keynote at RSA detailing to a broad audience what is unique about OT/ICS and the threats we face with the latest insights from the frontlines.

Recommended ICS Cybersecurity Books

  • Rise of the Machines: A Cybernetic History
    • It seems a bit odd to put a non-technical book as my first recommendation but I assure you it is with reason. Dr. Thomas Rid wrote this book to attempt to fully understand the history, implications, and usages of the word “cyber”. Delightfully, control systems have a major role throughout the book. It was control systems that got us started with “cybernetics” which is eventually where we would have the “cyber” word that fills our daily lives.
  • Handbook of SCADA/Control Systems Security
    • Robert (Bob) Radvanovsky and Jacob (Jake) Brodsky put together this wonderful collection of articles from people throughout the community. It covers a wide variety of topics from a wide variety of personalities and professionals.
  • Protecting Industrial Control Systems from Electronic Threats
    • Joe Weiss is a polarizing individual in the community but only because of how passionately he cares about the industry and how long he’s been in the community. Many of us here today in the community owe much to Joe and this book offered an early look at control system cybersecurity.
  • Industrial Network Security
    • Eric Knapp and Joel Langill wrote this book looking specifically at the network security side of ICS. It’s a fantastic resource exploring different technologies and protocols by two professionals I’m glad to call peers and friends.
  • Hacking Exposed: Industrial Control Systems
    • This book takes a penetration testing focus on ICS and talks about how to test and assess these systems from the cybersecurity angle while doing it safely and within bounds of acceptable use inside of an ICS. It’s written by Clint Bodungen, Bryan Singer, Aaron Shbeeb, Kyle Wilhoit, and Stephen Hilt who all are trusted professionals in the industry.
  • Santa and Me: The SCADA Before Christmas
    • The third book I’ve written which is a lighthearted twist on the classic “twas the night before Christmas” poem. You’re not going to learn a ton but for those of you with children it’s a great way to expose them to our industrial world.
  • Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon
    • Kim Zetter wrote this masterpiece on Stuxnet and much of the geopolitical and historical context around it as well as the investigation into it.
  • Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin’s Most Dangerous Hackers
    • Andy Greenberg wrote a fantastic book looking at the Sandworm threat group which was responsible for the 2015 cyber attack on Ukraine’s electric system; this was the first time ever a cyber attack caused electric outages.

Recommended Professional Training

You in no way need certifications or professional training to become great in this field. However, sometimes both can help either for job opportunities, getting a raise, or polishing up some skills you’ve developed. I highly encourage you to learn as much as you can before getting into a professional class (the more you know going in the more you’ll take away) and I encourage you to try to find an employer to pay your way (they aren’t cheap). If your employer doesn’t have a training policy it’s a good time to try and find a new employer. Here are some professional classes I like for ICS cyber security training (I’m biased because I teach at SANS but I teach there because I believe in what they provide).

  • Department of Homeland Security and Department of Energy Training
    • The ICS-CERT and Idaho National Labs provide a variety of online and in person training. One of the most well known is the ICS 301 class which is a 5-day introduction to ICS hosted in Idaho Falls, Idaho. It is a free course and highly recommended.
  • SANS ICS 410 – ICS/SCADA Essentials
    • This class is designed to be a bridge course; if you are an ICS person who wants to learn security, or a security person who wants to learn ICS, this course offers the bridge between those two career fields and offers you an introduction into ICS cyber security. Over the years this course has become a staple for people entering our community.
  • SANS ICS 515 – ICS/SCADA Active Defense and Incident Response
    • This is the class I authored at SANS teaching folks about targeted threats (such as state adversaries or well funded crime groups) that impact ICS and how to hunt them in your environment and respond to incidents. More than just focusing on the threats though this class helps you understand the risks our community faces and how to develop strategies against them with hands on practitioner focused labs and training.
  • SANS ICS 612 – ICS Cybersecurity In Depth
    • An absolute gem of a class that teaches a ton across foundational ICS security, architecture, passive defense, etc. topics. I say foundational not because it’s entry level but because it should be required for anyone joining the field. It’s a hands on class with a full control system setup and is most students’ best opportunity to get hands on with real industrial equipment and processes.
  • Assessing and Exploiting Control Systems
    • Justin Searle is the author of SANS ICS410 and he also made Assessing and Exploiting Control Systems. This course is an introduction to vulnerability and penetration testing of these systems with a focus on everything from PLCs to RF. A lot of the focus tends to be on smart grid and electric but there are elements for everyone. The same class is also hosted at SANS from time to time, but it is significantly cheaper to find it at BlackHat if you can grab a spot. The class moves around so the link above is for an old class but Google the name and where it’s being hosted to find it.
  • Dragos 5 Day Training
    • Dragos hosts a five-day training that covers an introduction to ICS, assessing ICS, threat hunting, and security monitoring. Uniquely, it provides access to industrial ranges and is hosted in Houston, Texas; Hanover, Maryland; Dubai, UAE; and Melbourne, Australia. The industrial ranges and physical equipment make for an exciting educational experience. However, the class is only open to those in the asset owner and operator community (e.g. working at an energy, manufacturing, auto, etc. company) such as Dragos customers and partners. Most of the other training in the market tries to avoid vendor tools and practices to be vendor-neutral. I love this and engage this way even in my own SANS class. However, the reality is in your day-to-day work you’re going to be working with vendor tools and want to learn from their best practices too. This is a unique opportunity to train with those operating on the front lines of the community and understand their specific approaches.

Recommended Conferences

No matter how much time you spend reading or practicing eventually you need to become part of the community. Contributions in the form of research, writing, and tools are always appreciated. Contributions in the form of conference presentations are especially helpful as they introduce you to other interested folks. The ICS cybersecurity community is an important one on many levels. It’s one of the best communities out there with hard working and passionate people who care about making the world a safer place. Below are what I consider the big 5. These conferences are the ones that are general ICS cyber security (not a specific industry such as API for oil and gas or GridSecCon for electric sector) although those are valuable as well.

  • SANS ICS Security Summit
    • For over fifteen years the SANS ICS Security Summit has been a leading conference bringing together researchers, industry professionals, and government audiences. The page above links to the various SANS ICS events but look for the one that says “ICS Security Summit” each year. It is usually held at Disney World in Orlando, Florida. Its strong suit is the educational and training aspects not only because of the classes but also because of the strong industry focus.
  • DigitalBond’s S4
    • The S4 conference is a powerhouse of leading ICS security research. Dale puts on a fantastic conference every year (now with a European and Japanese venue as well each year) that brings together some of the most cutting edge research and ideas. S4 in the US is often held in January in Florida.
  • The ICS Cyber Security Conference (WeissCon)
    • Affectionately known as WeissCon after its founder Joe Weiss, the conference is now owned and operated by SecurityWeek and usually runs in October at different locations each year in the US (Georgia is usually a central location for the conference though). The conference brings together a portion of the community not often found at the other locations and has a strong buy-in from the government community as well as the vendor community.
  • The ICS Joint Working Group (ICSJWG)
    • The ICSJWG is a free conference held twice a year by the Department of Homeland Security. I often encourage people to go to the ICSJWG conference first as a type of intro into the community, to then go to the SANS ICS Security Summit for more view into the asset owner community and to get training, then go to S4 for the latest research, to go to WeissCon to see some of the portions of the community and vendor audience not represented elsewhere, and finally to CS3Sthlm to get an international view. It is perfectly ok to go to all five of the big conferences a year (I do) but if you need a general path that is the one I would follow initially.
  • CS3Sthlm
    • CS3Sthlm used to be known as 4SICS and is held every year in Stockholm, Sweden. It is one of the leading ICS security conferences in the world (I consider it one of the “big five”) and it is in my opinion the best ICS security conference in Europe. The founders Erik and Robert are some of the friendliest people in the ICS community and have a wealth of experience to share with folks from decades defending infrastructure.
  • Dragos Industrial Security Conference (DISC)
    • DISC is the Dragos annual conference however it is unique in that it is entirely dedicated to research and insights into the ICS cyber threats and responding to them. The conference is 100% free and open to those in the industrial asset owner and operator community. It happens every year on November 5th in Maryland, USA.

This is just a small collection of a lot of the fantastic resources out there. Always fight to be part of the community and interact – that is where the real value in learning is. Never wait to have someone show you though; even the “experts” are usually only expert in a few things. It is up to you to teach yourself and involve yourself. We as a community are waiting with open arms.

 

Intelligence Defined and its Impact on Cyber Threat Intelligence

August 25, 2016

Michael Cloppert wrote a great piece to argue for a new definition of cyber threat intelligence. The blog is extremely well written (I personally love the academic style and citations) and puts forth a good discussion on operations. Sergio Caltagirone published a rebuttal equally valuable where he agreed with Mike that there is accuracy missing from current cyber threat intelligence definitions but noted that Mike focused too much on operations. The purpose of this blog is not to rebut their findings but to add to the conversation. In many aspects I agree with both Mike and Sergio; I would highlight that the forms of intelligence discussed though are very policy focused (sometimes even military focused) and influence how we define cyber threat intelligence. I do not envision that between these three blogs we’ve settled a long standing debate on intelligence but the intent is to add to the discussion and encourage thoughts by others.

In Mike’s piece the definition he presented for the field of cyber threat intelligence is the “union of cyber threat intelligence operations and analysis” each of which he previously defined. Sergio responded by stating “Intelligence doesn’t serve operations, intelligence serves decision-making which in turn drives operations to achieve policy outcomes.” I agree with this understanding of intelligence to meet policy needs and while Sergio intentionally does not intend to cover all aspects of intelligence outside of policy I believe it is important to consider. Mike teased out at one point that “…’intelligence’ more broadly is a bias toward a particular type of intelligence, and they continue to overwhelmingly focus on geopolitical outcomes.” He gives an example of business intelligence as another form of intelligence and accepts that the basis of intelligence is interpreted information with an assessment to advance an interest. This is where he stops though in an effort to stay focused on defining cyber threat intelligence. This is where I would like to begin.

Dr. Michael S. Goodman, a professor of intelligence studies at King’s College London, wrote a piece for the CIA’s Center for the Study of Intelligence where he discussed the challenges and benefits in studying and teaching intelligence. He specifically noted that “The academic study of intelligence is a new phenomenon” although the field of intelligence itself is very old. More relevantly to this blog post he wrote that “Producing an exact definition of intelligence is a much-debated topic.” In a piece focused on intelligence outside of the government context, the University of Oregon has a page dedicated to the theories and definitions of intelligence. There, they cite psychologists and educators Howard Gardner, David Perkins, and Robert Sternberg to assign attributes to intelligence and state that it is a combination of the ability to:

  • Learn
  • Pose Problems
  • Solve Problems

These three attributes are core to any definition of intelligence whether it’s business intelligence, emotional intelligence, or military intelligence. Additionally, the distinctly human component of this process, for those of you considering artificial intelligence as you read this, is harder to capture but likely exists in the ability to pose and solve problems. Machines can pose and solve problems to an extent but how they do that sets them apart from humans. More to the point, how each of us poses and solves problems is influenced at some level by bias. That bias is often an influence analysts seek to minimize so that it does not color how we analyze problems and the answers we derive. However, that bias in how we pose and solve problems is likely the only distinctly human component of intelligence. That is a discussion for a longer future piece though.

Further in the University of Oregon piece, different types of intelligences are listed from Gardner, Perkins, and Sternberg. A few are listed below:

  • Linguistic
  • Intrapersonal
  • Spatial
  • Practical
  • Experiential
  • Neural
  • Reflective

These different types of intelligence are not all-encompassing and focus on the psychological more than classic government intelligence. However, they offer a more robust view into what it means to be able to process and analyze information, which is in and of itself core to cyber threat intelligence. I gravitate more towards Robert Sternberg’s understanding of intelligence and specifically his view of experiential and componential intelligence. According to his 1988 and 1997 writings on intelligence, experiential intelligence is “the ability to deal with novel situations; the ability to effectively automate ways of dealing with novel situations so they are easily handled in the future; the ability to think in novel ways.” His understanding of componential intelligence is “the ability to process information effectively. This includes metacognitive, executive, performance, and knowledge-acquisition components that help to steer cognitive processes.”

I enjoy these two the most because they seem to map the closest to the idea of intelligence generation and intelligence consumption. In the field of cyber threat intelligence we often hear vendors, security researchers, and companies talk about “threat intel” and standing up teams to do intel-y things but without specific guidance. There is a stark difference in generating intelligence and in consuming it. Most companies are looking for threat intelligence consumption teams (those that can map their organization’s requirements and search for what is available to help drive defense) not threat intelligence generation teams (those individuals who analyze adversary information to extract knowledge which may or may not be immediately useful). A good team is usually the mix of both but with a clear understanding of which one is the priority and which effort is the goal at any given time. Sternberg’s experiential intelligence speaks more to threat intelligence generation whereas his componential intelligence addresses the ability to process, or consume, intelligence. The definitions are not as simple as this but it is thought provoking.

In reviewing Mike and Sergio’s excellent blog posts with the addition of a wider view on intelligence from classical, psychological, and philosophical perspectives, there are attributes that emerge. These attributes mean that intelligence:

  • Must be analyzed information
    • To perform analysis is a distinctly human trait likely due to our influence of bias and our efforts to minimize it (i.e. no $Vendor your tool does not create intelligence) meaning that it is always up to our interpretation and others may have other valuable and even competing interpretations
  • Must meet a requirement
    • Requirements can be wide ranging such as policy, military operations, geo-political, business, friendly forces movements and tactics, or self-awareness; the lack of a requirement would result in intelligence not being useful and by that extension be an inhibitor to intelligence (i.e. overloading analysts with indicators of compromise is not intelligence)
  • Must respect various forms
    • There is no one definition of intelligence but each definition must allow for different ways of interpreting, processing, and using the intelligence

To further qualify to be threat intelligence the presented intelligence must be about threats; threats are not only geo-political in nature but also may encompass insiders. However, I disagree with the notion that there is an unwitting insider threat because the definition of threat I subscribe to must have the following three attributes:

  • Opportunity
    • There must be the ability to do harm. In many organizations this means knowing your systems, people, vulnerabilities, etc.
  • Intent
    • There must be an intention to do harm; if it is unintentional the harm is still as impactful but it cannot be properly classified as a threat. Understanding adversary intention is difficult but this is where analysis of the threat landscape comes in.
  • Capability
    • The adversary must have some capability to do you harm. This may be malware, it may be PowerShell left running in your environment, and it could be non-technical such as the means to influence public perception through leaked documents

Therefore, I use the following definition, heavily inspired by classic definitions, for intelligence: “The process and product resulting from the interpretation of raw data into information that meets a requirement.” The product may be knowledge, it may be a report, it could be tradecraft of an adversary, etc. Further, I use the following definition for cyber threat intelligence “The process and product resulting from the interpretation of raw data into information that meets a requirement as it relates to the adversaries that have the intent, opportunity and capability to do harm.” (Note that in this definition of cyber threat intelligence the adversary is distinctly human. Malware isn’t the threat; the human or organization of humans intending you harm is the threat.) Each definition is concise but open-ended enough to serve multiple purposes beyond military intelligence.
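
To make the distinction concrete, here is a tiny illustrative sketch (my own shorthand, not a formal model from any standard) of how the three attributes combine under this definition; an actor missing any one of them, such as the unwitting insider, does not qualify as a threat.

```python
# Illustrative sketch only: the three attributes a threat must have under the
# definition above. The class and field names are my own shorthand.
from dataclasses import dataclass

@dataclass
class ActorAssessment:
    name: str
    opportunity: bool  # the ability to reach systems or people where harm is possible
    intent: bool       # an intention to do harm, not an accident
    capability: bool   # malware, native tooling, leaked documents, etc.

    def is_threat(self) -> bool:
        """All three attributes must be present for the actor to be a threat."""
        return self.opportunity and self.intent and self.capability

# An unwitting insider may have opportunity and capability but lacks intent,
# so under this definition they are a risk to manage rather than a threat.
print(ActorAssessment("unwitting insider", True, False, True).is_threat())  # False
```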

I in no way think that this solves any aspect of this debate. And I do not feel that my definitions actually conflict with what Mike and Sergio have put forward; they are instead meant simply as an extension of the topic. Mike and Sergio are both extremely competent individuals that I am privileged to call my friends, peers, and on numerous occasions mentors. However, their blogs inspired me to explore the topic for myself and this blog was simply my way to share my thoughts on my findings. I hope it has been useful in some manner to your own exploration.

Common Analyst Mistakes and Claims of Energy Company Targeting Malware

July 13, 2016

A new blog post by SentinelOne made an interesting claim recently regarding a “sophisticated malware campaign specifically targeting at least one European energy company.”  More extraordinary though was the claim by the company that this find might indicate something much more serious: “which could either work to extract data or insert the malware to potentially shut down an energy grid.” While that is a major analytical leap, we’ll come back to this, the next thing to occur was fairly predictable – media firms spinning up about a potential nation-state cyber attack on power grids.

I have often critiqued news organizations in their coverage of ICS/SCADA security when there was a lack of understanding of the infrastructure and its threats, but this sample of hype originated from SentinelOne’s bold claims and not the media organizations (although I would have liked to see the journalists validate their stories more). News headlines ranged from “Researchers Found a Hacking Tool that Targets Energy Grids on the Dark Web” to EWeek’s “Furtim’s Parent, Stuxnet-like Malware, Aimed at Energy Firms.” It’s always interesting to see how long it takes for an organization to compare malware to Stuxnet. This one seems to have won the race in terms of “time-to-Stuxnet”, but the worst headline was probably The Register’s with “SCADA malware caught infecting European energy company: Nation-state fingered”. No, this is not SCADA malware and no nation-states have been fingered (phrasing?).

The malware is actually not new though and had been detected before the company’s blog post. The specific sample SentinelOne linked to, which they claim to have found, was first submitted to VirusTotal by an organization in Canada on April 21st, 2016. Later, a similar sample was identified and posted on the forum KernelMode.info on April 25th, 2016 (credit to John Franolich for bringing it to my attention). On May 23rd, 2016 a KernelMode forum user posted on their blog some great analysis of the malware. The KernelMode users and blogger identified that one of the malware author’s command and control servers was misconfigured and revealed a distinct naming convention in the directories that very clearly seemed to correlate to infected targets. In total there were over 15,000 infected hosts around the world that had communicated to this command and control server. This puts a completely different perspective on the malware that SentinelOne claimed was specifically targeting an energy company; it is most certainly not ICS/SCADA- or energy-company-specific. It’s possible energy companies are a target, but so far there’s no proof of that provided.

I do not have access to SentinelOne’s dataset, so I cannot and will not critique all of their claims. However, I do find a lot of the details they have presented odd, and I do not understand their claim that they “validated this malware campaign against SentinelOne [their product] and confirmed the steps outlined below [the malware analysis they showed in their blog] were detected by our Dynamic Behavior Tracking (DBT) engine.” I’m all for vendors showcasing where their products add value, but I’m not sure how their product fits into something that was submitted to VirusTotal and a user forum months before their blog post. Either way, let’s focus on the learning opportunities here to help educate folks on potential mistakes to avoid.

Common Analyst Mistake: Malware Uniqueness

A common analyst mistake is to look at a dataset and believe that malware which is unique in that dataset is actually unique. In this scenario, it is entirely possible that, with no ill intention whatsoever, SentinelOne identified a sample of the malware independently of the VirusTotal and forum submissions. Looking at the sample and not having seen it before, the analysts at the company may have assumed the malware was unique and that this assumption warranted the statement that the campaign was specifically targeting an energy company. The problem is that as analysts we always work off of incomplete datasets. All intelligence analysis operates from the assumption that there is some data missing or some unknowns that may change a hypothesis later on. This is one reason you will often find intelligence professionals give assessments (usually high, medium, or low confidence) rather than making definitive statements. It is important to recognize the limits of our datasets and information by looking to open source datasets (such as searching on Google, which would have surfaced the earlier KernelMode forum post in this scenario) or by establishing trust relationships with peers and organizations to share threat information. In this scenario the malware was not unique, and the discovery of at least 15,000 victims in the campaign casts doubt on the claim that a specific energy company was the target. Simply put, more data and information were needed.
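
To make the point concrete, a quick check against an open dataset can tell an analyst whether a “new” sample has actually been seen before. Below is a minimal, hypothetical sketch (not anything SentinelOne did or described) that looks up a file hash in VirusTotal and prints when it was first submitted; it assumes the VirusTotal v3 file lookup endpoint, and the API key and hash values are placeholders.

```python
# Minimal sketch: sanity-check whether a sample is actually "new" before
# assuming uniqueness. Assumes the VirusTotal v3 file lookup endpoint;
# the API key and hash below are placeholders, not real values.
import datetime
import requests

VT_API_KEY = "YOUR_API_KEY"            # placeholder
SAMPLE_SHA256 = "<sha256-of-sample>"   # placeholder

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{SAMPLE_SHA256}",
    headers={"x-apikey": VT_API_KEY},
    timeout=30,
)

if resp.status_code == 404:
    # Absence from one public dataset is not proof of uniqueness,
    # only that this particular open source has not seen the sample.
    print("Sample not found in VirusTotal.")
else:
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    first_seen = datetime.datetime.fromtimestamp(
        attrs["first_submission_date"], tz=datetime.timezone.utc
    )
    print(f"First submitted: {first_seen:%Y-%m-%d}")
    print(f"Times submitted: {attrs.get('times_submitted', 'unknown')}")
```

A hit with an earlier first-submission date does not end the analysis, but it should immediately temper any claim that a sample is unique to one victim or one campaign.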

Common Analyst Mistake: Assuming Adversary Intent

As analysts we often get to know adversary campaigns and capabilities at an almost intimate level, learning details ranging from behavioral TTPs to the way adversaries run their operations. But one thing we must be careful of is assuming an adversary’s intent. Code, indicators, TTPs, capabilities, etc. can reveal a lot. They can reveal what an adversary may be capable of doing, and they should reveal the potential impact to a targeted organization. It is far more difficult, though, to determine what an adversary wishes to do. If an adversary crashes a server, an analyst may believe the actor wanted to deny service to it when in fact the actor simply messed up. In this scenario the SentinelOne post stopped short of claiming to know what the actors were trying to do (I’ll get to the power grid claims in a following section), but the claim that the adversary specifically targeted the European energy company is not supported anywhere in their analysis. They do a great job of presenting malware analysis but offer no details about the target or how the malware was delivered. Sometimes malware infects networks that are not even the adversary’s target. Assuming an adversary intends to be inside specific networks or to take specific actions is a risky move, and it is even riskier with little to no evidence.

Common Analyst Mistake: Assuming “Advanced” Means “Nation-State”

It is natural to look at something we have not seen before in terms of tradecraft and tools and assume it is “advanced.” It’s a perspective issue based on what the analyst has seen before, and it can lead analysts to assume that something particularly cool must be so advanced that it’s a nation-state espionage operation. In this scenario, the SentinelOne blog authors make that claim. Confusingly, though, they do not seem to have found the malware on the energy company’s network they referenced; instead, they claimed to have found it on the “dark web.” This means there would not have been accompanying incident response or security operations data to support a full understanding of this intrusion against the target, if we assume the company was a target at all. There are non-nation-states that run operations against organizations; HackingTeam was a perfect example of a hackers-for-hire organization that ran very well-funded operations. SentinelOne presents some interesting data, and along with other datasets this could reveal a larger campaign or even potentially a nation-state operation, but nothing presented so far supports that conclusion. A single intrusion does not make a campaign, and espionage-type activity with “advanced” capabilities does not guarantee the actors work for a nation-state.

Common Analyst Mistake: Extending Expertise

When analysts become the experts on their team in a given area, it is common for folks to look to them as experts in a number of other areas as well. As analysts it’s useful not only to continually develop our professional skills but also to challenge ourselves to learn the limits of our expertise. This can be very difficult when others look to us for advice on any given subject, but being the smartest person in the room on a subject does not mean we are experts on it or even have a clue what we’re talking about. In this scenario, I have no doubt that the SentinelOne blog authors are very qualified in malware analysis. I do, however, seriously question whether they have any experience at all with industrial and energy networks. The claim that the malware could be used to “shut down an energy grid” shows a complete lack of understanding of energy infrastructure as well as a major analytical leap based on a very limited dataset, and that is quite frankly inexcusable. I do not mean to be harsh, but this is hype at its finest. At the end of their blog the authors note that anyone in the energy sector who would like to learn more can contact them directly. If anyone decides to take them up on the offer, please do not assume they have expertise in that area, be critical in your questions, and realize that their blog post reads like a marketing pitch.

Closing Thoughts

My goal in this blog post was not to critique SentinelOne’s analysis too harshly, although to be honest I am a bit stunned by the opening statement regarding energy grids. Instead, it was to take the opportunity to identify some common analyst mistakes that we all can make. It is always useful to find reports like these and, without malice, tear apart the analysis presented to identify knowledge gaps, assumptions, biases, and analyst mistakes. Going through this process can help make you a better analyst. In fairness, the only reason I know a lot about common analyst mistakes is that I’ve made plenty of rookie mistakes at one point or another in my career. We all do. The trick is usually to try not to make a public spectacle out of it.