Analytical Leaps and Wild Speculation in Recent Reports of Industrial Cyber Attacks

December 31, 2016

“Judgement is what analysts use to fill gaps in their knowledge. It entails going beyond the available information and is the principal means of coping with uncertainty. It always involves an analytical leap, from the known into the uncertain.”

– Chapter 4, Psychology of Intelligence Analysis, Richards J. Heuer.


Analytical leaps, as Richards J. Heuer noted in his must-read book Psychology of Intelligence Analysis, are part of the process for analysts. Sometimes, though, these analytical leaps can be dangerous, especially when they are biased, misinformed, presented in a misleading way, or otherwise not made using sound analytical processes. Analytical leaps should be backed by evidence, or at a minimum should include the evidence leading up to the leap. Unfortunately, when multiple analytical leaps are made in series they can lead to entirely wrong conclusions and wild speculation. Three interesting stories relating to industrial attacks surfaced this December, as we try to close out 2016, that are worth exploring on this topic. It is my hope that looking at these three cases will help everyone be a bit more critical of information before alarmism sets in.

The three cases that will be explored are:

  • IBM Managed Services’ claim of “Attacks Targeting Industrial Control Systems (ICS) Up 110%”
  • CyberX’s claim that “New Killdisk Malware Brings Ransomware Into Industrial Domain”
  • The Washington Post’s claim that “Russian Operation Hacked a Vermont Utility, Showing Risk to U.S. Electrical Grid Security, officials say”


“Attacks Targeting Industrial Control Systems (ICS) Up 110%”

I’m always skeptical of metrics that come with no immediately present quantification. As an example, IBM Managed Security Services posted an article stating that “attacks targeting industrial control systems increased over 110 percent in 2016 over last year’s numbers as of Nov. 30.” But there is no data in the article to quantify what that means. Is a 110% increase a jump from 10 attacks to 21 attacks? Or from 100 attacks to 210 attacks?
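The ambiguity is easy to demonstrate with a quick sketch. The baselines below are purely illustrative, not IBM's actual figures:

```python
def percent_increase(old: float, new: float) -> float:
    """Return the percentage increase from old to new."""
    return (new - old) / old * 100

# The same "110% increase" headline is consistent with wildly different scales:
assert round(percent_increase(10, 21)) == 110      # 10 attacks -> 21 attacks
assert round(percent_increase(1300, 2730)) == 110  # 1,300 attacks -> 2,730 attacks
```

Without the baseline, the percentage alone tells a reader almost nothing about the actual threat level.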

The only way to understand what that percentage means is to leave this report and go download the IBM report from last year and read through it (never make your reader jump through extra hoops to get information that is your headline). In their 2015 report IBM states that there were around 1,300 attacks in 2015 (Figure 1). This would mean that in 2016 IBM is reporting they saw around 2,700 ICS attacks.


Figure 1: Figure from IBM’s 2015 Report on ICS Attacks


However, there are a few questions that linger. First, this is a considerable jump from what they were tracking previously and from their 2014 metrics. IBM states that the “spike in ICS traffic was related to SCADA brute-force attacks, which use automation to guess default or weak passwords.” This is an analytical leap that they make based on what they’ve observed. But it would be nice to know if anything else has changed as well. Did they bring up more sensors, add more customers, increase staffing, etc.? The stated reason alone would not account for the increase.

Second, how is IBM defining an attack? Attacks in industrial contexts have a very specific meaning – an attempt to brute-force a password simply wouldn’t qualify. They also note that a pentesting tool that could be used against the ICS protocol Modbus was released on GitHub in Jan 2016. IBM states that the increase in metrics was likely related to this tool’s release. This is speculation though, as they do not give any evidence to support the claim. However, it leads to my next point.

Third, is this customer data or is this honeypot data? If it’s customer data is it from the ICS or simply the business networks of industrial companies? And if it’s honeypot data it would be good to separate that data out as it’s often been abused to misreport “SCADA attack” metrics. From looking at the discussion of brute-force logins and a pentesting tool for a serial protocol released on GitHub, my speculation is that this is referring mostly to honeypot data. Honeypots can be useful but must be used in specific ways when discussing industrial environments and should not be lumped into “attack” data from customer networks.
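The point about separating data sources can be shown with a toy tally. The event records and source labels here are entirely hypothetical, just to illustrate why lumping honeypot and customer data together misleads:

```python
from collections import defaultdict

# Hypothetical event records, each tagged with its collection source.
events = [
    {"source": "honeypot", "type": "brute-force login"},
    {"source": "honeypot", "type": "Modbus scan"},
    {"source": "customer_ics", "type": "unauthorized write"},
    {"source": "customer_business", "type": "phishing"},
]

counts = defaultdict(int)
for e in events:
    counts[e["source"]] += 1

total = sum(counts.values())
# A single aggregate ("4 ICS attacks") hides that most of the activity
# may never have touched a real customer's ICS network.
assert total == 4
assert counts["honeypot"] == 2
assert counts["customer_ics"] == 1
```

Reporting only the total obscures exactly the distinction that matters to defenders.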

The article also makes another analytical leap when it states “The U.S. was also the largest target of ICS-based attacks in 2016, primarily because, once again, it has a larger ICS presence than any other country at this time.” The leap does not seem informed by anything other than the hypothesis that the US has more ICS. Also, again there is no quantification. As an example, where is this claim coming from, how much larger is the ICS presence than other countries, and are the quantity of attacks proportional to the US ICS footprint when compared to other nations’ quantity of industrial systems? I would again speculate that what they are observing has far more to do with where they are collecting data (how many sensors do they have in the US compared to China as an example).

In closing out the article IBM cites three “notable recent ICS attacks.” The three case studies chosen were the SFG malware that targeted an energy company, the New York dam, and the Ukrainian power outage. While the Ukrainian power outage is good to highlight (although they don’t actually highlight the ICS portion of the attack), the other two cases are poor choices. The claim of SFG malware targeting an energy company was already debunked publicly and would have been easy to find prior to creating this article. The New York dam story was also largely hyped by media and was publicly downplayed as well. More worrisome is that the way IBM framed the New York dam “attack” is incorrect. They state: “attackers compromised the dam’s command and control system in 2013 using a cellular modem.” Except it wasn’t the dam’s command and control system; it was a single read-only human machine interface (HMI) watching the water level of the dam. The dam had a manual control system (i.e., you had to crank it to open it).

Or more simply put: the IBM team is likely doing great work and likely has people who understand ICS…you just wouldn’t get that impression from reading this article. The information is largely inaccurate, there is no quantification to their numbers, and their analytical leaps are unsupported with some obvious lingering questions as to the source of the data.


“New Killdisk Malware Brings Ransomware Into Industrial Domain”

CyberX released a blog noting that they have “uncovered new evidence that the KillDisk disk-wiping malware previously used in the cyberattacks against the Ukrainian power grid has now evolved into ransomware.” This is a cool find by the CyberX team, but they don’t release digital hashes or any technical details that could be used to help validate the find. However, the find isn’t actually new. (I’m a bit confused as to why CyberX states they uncovered this new evidence when they cite in their blog an ESET article with the same discovery from weeks earlier; I imagine they found an additional strain, but they don’t clarify that.) ESET had disclosed the new variant of KillDisk being used by a group they are calling the TeleBots gang and noted they found it being used against financial networks in Ukraine. So, where’s the industrial link? Well, there is none.

CyberX’s blog never details how they are making the analytical leap from “KillDisk now has a ransomware functionality” to “and it’s targeting industrial sites.” Instead, it appears the entire basis for their hypothesis is that Sandworm previously used KillDisk in the Ukraine ICS attack in 2015. While this is true, the Sandworm team has never just targeted one industry. iSight and others have long reported that the Sandworm team has targeted telecoms, financial networks, NATO sites, military personnel, and other non-industrial related targets. But it’s also not known for sure that this is still the Sandworm team. The CyberX blog does not state how they are linking Sandworm’s attacks on Ukraine to the TeleBots usage of ransomware. Instead they just cite ESET’s assessment that the teams are linked. But ESET even stated they aren’t sure and it’s just an assessment based off of observed similarities.

Or more simply put: CyberX put out a blog saying they uncovered new evidence that KillDisk had evolved into ransomware, although they cite ESET’s discovery of this evidence from weeks prior with no other evidence presented. They then make the claim that the TeleBots gang, the one using the ransomware, evolved from Sandworm, but they offer no evidence and instead again just cite ESET’s assessment. They offer absolutely no evidence that this ransomware KillDisk variant has targeted any industrial sites. The logic seems to be “Sandworm did Ukraine, KillDisk was in Ukraine, Sandworm is the TeleBots gang, TeleBots modified KillDisk to be ransomware, therefore they are going to target industrial sites.” When doing analysis, always be aware of Occam’s razor and do not make too many assumptions to force a hypothesis to be true. There may someday be evidence of ransomware targeting industrial sites; it does make sense that it would eventually happen. But no evidence is offered in this article, and both the title and thesis of the blog are completely unfounded as presented.


“Russian Operation Hacked a Vermont Utility, Showing Risk to U.S. Electrical Grid Security, officials say”

This story is more interesting than the others, but it is too early to really know much. The only thing known at this point is that the media is already overreacting. The Washington Post put out an article on a Vermont utility getting hacked by a Russian operation, with calls from the Vermont Governor condemning Vladimir Putin for attempting to hack the grid. Eric Geller pointed out that the first headline the Post ran with was “Russian hackers penetrated U.S. electricity grid through utility in Vermont, officials say” but they changed it to “Russian operation hacked a Vermont utility, showing risk to U.S. electrical grid, officials say.” We don’t know exactly why it was changed, but it may have been because the Post overreacted when they heard the Vermont utility found malware on a laptop and simply assumed it was related to the electric grid. Except, as the Vermont (Burlington) utility pointed out, the laptop was not connected to the organization’s grid systems.

Electric and other industrial facilities have plenty of business and corporate network systems that are often not connected to the ICS network at all. It’s not good for them to get infected, and they aren’t always disconnected, but it’s not worth alarming anyone over without additional evidence.  However, the bigger analytical leap being made is that this is related to Russian operations.

The utility notes that they took the DHS/FBI GRIZZLY STEPPE report indicators and found a piece of malware on the laptop. We do not know yet if this is a false positive, but even if it is not, there is no evidence yet to say that this has anything to do with Russia. As I pointed out in a previous blog, the GRIZZLY STEPPE report is riddled with errors and the indicators put out were very non-descriptive data points. The one YARA rule they put out, which the utility may have used, was related to a piece of malware that is publicly downloadable, meaning anyone could use it. Unfortunately, after the story ran with its hyped-up headlines, Senator Patrick Leahy released a statement condemning the “attempt to penetrate the electric grid” as a state-sponsored hack by Russia. As Dmitri Alperovitch, CTO of CrowdStrike, the firm that responded to the Russian hack of the DNC, pointed out: “No one should be making attribution conclusions purely from the indicators in the USCERT report. It was all a jumbled mess.”

Or more simply put: a Vermont utility acted appropriately and ran indicators of compromise from the GRIZZLY STEPPE report as the DHS/FBI instructed the community to do. This led to them finding a match to an indicator on a laptop separated from the grid systems, though it has not yet been confirmed that malware was present. The Vermont Governor Peter Shumlin then publicly chastised Vladimir Putin and Russia for trying to hack the electric grid. U.S. officials then inappropriately gave additional information and commentary to the Washington Post about an ongoing investigation, which led the paper to run with the headline that this was a Russian operation. After all, the indicators supposedly were related to Russia because the DHS and FBI said so – and supposedly that’s good enough. Unfortunately, this also led a U.S. Senator to come out and condemn Russia for state-sponsored hacking of the utility.

Closing Thoughts

There are absolutely threats to industrial environments including ICS/SCADA networks. It does make sense that ICS breaches and attacks would be on the rise, especially as these systems become more interconnected. It also makes perfect sense that ransomware will be used in industrial environments just like any other environment that has computer systems. And yes, the attribution of the DNC compromise to Russia is very solid, based on private sector data with government validation. But to make claims about attacks and attempt to quantify them, you have to present real data, state where that data is coming from, and explain how it was collected. To make claims of new ransomware targeting industrial networks, you have to provide evidence, not simply make a series of analytical leaps. And to start making claims of attribution to a state such as Russia just because some poorly constructed indicators alerted on a single laptop is dangerous.

Or more simply put: be careful of analytical leaps, especially when they are made without presenting any evidence leading into them. Hypotheses and analysis require evidence; otherwise it is simply speculation. We have enough speculation already in the industrial industry, and more will only lead to increasingly dangerous or embarrassing scenarios, such as a US governor and senator condemning Russia for hacking the electric grid and scaring the public in the process, when we simply do not have many facts about the situation yet.

Critiques of the DHS/FBI’s GRIZZLY STEPPE Report

December 30, 2016

On December 29th, 2016 the White House released a statement from the President of the United States (POTUS) that formally accused Russia of interfering with the US elections, amongst other activities. This statement laid out the beginning of the US’ response including sanctions against Russian military and intelligence community members.  The purpose of this blog post is to specifically look at the DHS and FBI’s Joint Analysis Report (JAR) on Russian civilian and military Intelligence Services (RIS) titled “GRIZZLY STEPPE – Russian Malicious Cyber Activity”. For those interested in a discussion on the larger purpose of the POTUS statement and surrounding activity take a look at Thomas Rid’s and Matt Tait’s Twitter feeds for good commentary on the subject.

Background to the Report

For years there has been solid public evidence by private sector intelligence companies such as CrowdStrike, FireEye, and Kaspersky that has called attention to Russian-based cyber activity. These groups have been tracked for a considerable amount of time (years) across multiple victim organizations. The latest high profile case relevant to the White House’s statement was CrowdStrike’s analysis of COZYBEAR and FANCYBEAR breaking into the DNC and leaking emails and information. These groups are also known by FireEye as the APT28 and APT29 campaign groups.

The White House’s response is ultimately a strong and accurate statement. The attribution towards the Russian government was confirmed by the US government using their sources and methods on top of good private sector analysis. I am going to critique aspects of the DHS/FBI report below but I want to make a very clear statement: POTUS’ statement, the multiple government agency response, and the validation of private sector intelligence by the government is wholly a great response. This helps establish a clear norm in the international community although that topic is best reserved for a future discussion.

Expectations of the Report

Most relevant to this blog, the lead in to the DHS/FBI report was given by the White House in their fact sheet on the Russian cyber activity (Figure 1).




Figure 1: White House Fact Sheet in Response to Russian Cyber Activity

The fact sheet lays out very clearly the purpose of the DHS/FBI report. It notes a few key points:

  • The report is intended to help network defenders; it is not the technical evidence of attribution
  • The report contains a combination of private sector data and declassified government data
  • The report will help defenders identify and block Russian malware – this is specifically declassified government data not private sector data
  • The report goes beyond indicators to include new tradecraft and techniques used by the Russian intelligence services

If you are anything like me, you became very excited when you read the above. This was a clear statement from the White House that they were going to help network defenders, give out a combination of previously classified data as well as validate private sector data, release information about Russian malware that was previously classified, and detail new tactics and techniques used by Russia. Unfortunately, while the intent was laid out clearly by the White House, that intent was not captured in the DHS/FBI report.

Because what I’m going to write below is blunt feedback, I want to note ahead of time that I’m doing this for the benefit of the community as well as the government operators and report writers who read to learn and become better. I understand that it is always hard to publish things from the government. In my time working in the U.S. Intelligence Community on such cases it was extremely rare that anything was released publicly, and when it was, it was almost always disappointing, as the best material and information had been stripped out. For that reason, I want to especially note, and say thank you to, the government operators who did fantastic work and tried their best to push out the best information. For those involved in the sanitization of that information and the report writing – well, read below.

DHS/FBI’s GRIZZLY STEPPE Report – Opportunities for Improvement

Let’s explore each main point that I created from the White House fact sheet to critique the DHS/FBI report and show opportunities for improvement in the future.

 The report is intended to help network defenders; it is not the technical evidence of attribution

There is no mention of a focus on attribution in any of the White House’s statements. Across multiple statements from government officials and agencies it is clear that the technical data and attribution will be a report prepared for Congress and later declassified (likely prepared by the NSA). Yet the GRIZZLY STEPPE report reads like a poorly done vendor intelligence report, stringing together various aspects of attribution without evidence. The beginning of the report (Figure 2) specifically notes that the DHS/FBI has avoided attribution before in their JARs but that, based on their technical indicators, they can confirm the private sector attribution to RIS.



Figure 2: Beginning of DHS/FBI GRIZZLY STEPPE JAR

The next section is the DHS/FBI description which is entirely focused on APT28 and APT29’s compromise of “a political party” (the DNC). Here again they confirm attribution (Figure 3).



Figure 3: Description Section of DHS/FBI GRIZZLY STEPPE JAR

But why is this so bad? Because it does not follow the intent laid out by the White House and confuses readers into thinking that this report is about attribution rather than its intended purpose of helping network defenders. The public is looking for evidence of the attribution, the White House and the DHS/FBI clearly laid out that this report is meant for network defense, and yet the entire discussion in the document is on how the DHS/FBI confirms that APT28 and APT29 are RIS groups that compromised a political party. The technical indicators they released later in the report (which we will discuss more below) are in no way related to that attribution though.

Or said more simply: the written portion of the report has little to nothing to do with the intended purpose or the technical data released.

Even worse, page 4 of the document notes other groups identified as RIS (Figure 4). This would be exciting normally. Government validation of private sector intelligence helps raise the confidence level of the public information. Unfortunately, the list in the report detracts from the confidence because of the interweaving of unrelated data.



Figure 4: Reported RIS Names from DHS/FBI GRIZZLY STEPPE Report

As an example, the list contains campaign/group names such as APT28, APT29, COZYBEAR, Sandworm, Sofacy, and others. This is exactly what you’d want to see, although the government’s justification for this assessment is completely lacking (for a better exploration of the topic of naming see Sergio Caltagirone’s blog post here). But as the list progresses it becomes worrisome, as it also contains malware names (HAVEX and BlackEnergy v3 as examples), which are different from campaign names. Campaign names describe a collection of intrusions into one or more victims by the same adversary. Those campaigns can utilize various pieces of malware, and sometimes malware is consistent across unrelated campaigns and unrelated actors. It gets worse, though, when the list includes things such as “Powershell Backdoor.” This is not even a malware family but a classification of a capability that can be found in various malware families.

Or said more simply: the list of reported RIS names includes relevant and specific names such as campaign names, more general and often unrelated malware family names, and extremely broad and non-descriptive classifications of capabilities. It is a mixing of data types that didn’t meet any objective in the report and only added confusion as to whether the DHS/FBI knows what they are doing, or whether they instead just told teams in the government to “contribute anything you have that has been affiliated with Russian activity.”


The report contains a combination of private sector data and declassified government data

This is a much shorter critique but still an important one: there is no way to tell what data was private sector data and what was declassified government data. Different data types have different confidence levels. If you observe a piece of malware on your network communicating with adversary command and control (C2) servers, you would feel confident using that information to find other infections in your network. If someone randomly passed you an IP address without context, you might not be sure how best to leverage it, or be generally cautious about doing so, as it might generate alerts of a non-malicious nature and waste your time investigating it. In the same way, it is useful to know what is government data from previously classified sources and what is data from the private sector, and more importantly who in the private sector. Organizations will have different trust or confidence levels in the different types of data and where it came from. Unfortunately, this is entirely missing. The report does not source its data at all. It’s a random collection of information and, in that way, is mostly useless.
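One minimal way to carry sourcing and confidence alongside each indicator can be sketched as a simple record format. This is a hypothetical structure of my own, not anything from the JAR; the field names and the sample values are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    value: str       # the observable, e.g. an IP address or file hash
    kind: str        # "ip", "md5", "domain", ...
    source: str      # who observed it: "USG-declassified", "vendor-X", ...
    confidence: str  # "high", "medium", "low"
    context: str     # campaign or malware family it was seen with

ioc = Indicator(
    value="198.51.100.7",  # RFC 5737 documentation address, not a real IOC
    kind="ip",
    source="vendor-X",     # hypothetical source label
    confidence="low",
    context="unknown",
)

# A defender can now weight an alert by its source and confidence
# instead of treating every entry in a flat CSV as equally actionable.
assert ioc.confidence == "low"
```

Even this small amount of structure would let readers triage the data instead of guessing at its provenance.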

Or said more simply: always tell people where you got your data, separate it from your own data which you have a higher confidence level in having observed first hand, and if you are using other people’s campaign names, data, analysis, etc. explain why so that analysts can do something with it instead of treating it as random situational awareness.


The report will help defenders identify and block Russian malware – this is specifically declassified government data not private sector data

The lead in to the report specifically noted that information about the Russian malware was newly declassified and would be given out; this is contrary to other statements that the information was part private sector and part government data. When looking through the technical indicators, though, there is little context to the information released.

In some locations in the CSV the indicators are IP addresses with a request to network administrators to look for them, and in other locations there are IP addresses with just the country in which they were located. This information is nearly useless for a few reasons. First, we do not know what data set these indicators belong to (see my previous point: are these IPs for “Sandworm,” “APT28,” “Powershell Backdoor,” or what?). Second, many (30%+) of these IP addresses are mostly useless, as they are VPS hosts, Tor exit nodes, proxies, and other non-descriptive internet traffic sites (you can use this type of information, but not in the way positioned in the report, and not well without additional information such as timestamps). Third, IP addresses as indicators, especially when associated with malware or adversary campaigns, must contain information around timing. That is, when were these IP addresses associated with the malware or campaign, and when were they in active use? IP addresses and domains are constantly getting shuffled around the Internet and are mostly useful when seen in a snapshot of time.
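A rough sketch of the triage a defender is forced into when handed bare IP indicators follows. The sample IPs (RFC 5737 documentation ranges), the Tor exit list, and the first-seen dates are all made up for illustration; they are not from the GRIZZLY STEPPE CSV:

```python
from datetime import date

# Bare indicators as delivered: no campaign, no timestamps.
raw_iocs = ["203.0.113.10", "198.51.100.25", "192.0.2.99"]

# Hypothetical enrichment the defender must supply themselves.
known_tor_exits = {"203.0.113.10"}                 # stand-in for a real Tor exit-node list
first_seen = {"198.51.100.25": date(2016, 4, 1)}   # when the IP was tied to activity

actionable = []
for ip in raw_iocs:
    if ip in known_tor_exits:
        continue  # shared infrastructure: high false-positive risk
    if ip not in first_seen:
        continue  # no timing context: can't scope a search window
    actionable.append(ip)

# Most of the list drops out once minimal context is required.
assert actionable == ["198.51.100.25"]
```

The burden of reconstructing that context is exactly what a well-sourced report should have removed.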

But let’s focus on the malware specifically which was laid out by the White House fact sheet as newly declassified information. The CSV does contain information for around 30 malicious files (Figure 5). Unfortunately, all but two have the same problems as the IP addresses in that there isn’t appropriate context as to what most of them are related to and when they were leveraged.



Figure 5: CSV of Indicators from the GRIZZLY STEPPE Report

What is particularly frustrating is that this might have been some of the best information if done correctly. A quick look in VirusTotal Intelligence reveals that many of these hashes were not being tracked previously as associated to any specific adversary campaign (Figure 6). Therefore, if the DHS/FBI was to confirm that these samples of malware were part of RIS operations it would help defenders and incident responders prioritize and further investigate these samples if they had found them before. As Ben Miller pointed out, this helps encourage folks to do better root cause analysis of seemingly generic malware (Figure 6).


Figure 6: Tweet from Ben Miller on GRIZZLY STEPPE Malware Hashes

So what’s the problem? Apart from the two hashes stated to belong to the OnionDuke family, the released hashes do not contain the appropriate context for defenders to leverage them. Without knowing what campaign they were associated with, and when, there is not appropriate information for defenders to investigate these discoveries on their networks. They can block the activity (playing the equivalent of whack-a-mole) but cannot leverage it for real defense without considerable effort. Additionally, the report specifically said this was newly declassified information. However, looking up the samples in VirusTotal Intelligence (Figure 7) reveals that many of them were already known, dating back to April 2016.



Figure 7: VirusTotal Intelligence Lookup of One Digital Hash from GRIZZLY STEPPE

The only thing that would thus be classified about this data (note they said newly declassified, not private sector information) would be the association of this malware with a specific family or campaign instead of leaving it as “generic.” But as noted, that information was left out. It’s also not fair to say it’s all “RIS” given the DHS/FBI’s inappropriate aggregation of campaign, malware, and capability names in their “Reported RIS” list. As an example, they used one name from their “Reported RIS” list (OnionDuke), and some of the other samples might map to other list entries as well, such as “Powershell Backdoor,” which is wholly non-descriptive. Either way, we don’t know, because they left that information out. Also, as a general pet peeve: the hashes are sometimes given as MD5, sometimes as SHA1, and sometimes as SHA256. It’s fine to choose whatever standard you want if you’re giving out information, but be consistent in the data format.
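Mixed hash formats are at least easy to detect mechanically. A small sketch that classifies a hex digest by its standard length (these digest sizes are general cryptographic facts, not anything specific to the report):

```python
import re

def hash_type(h: str) -> str:
    """Classify a hex digest by length: 32 hex chars = MD5,
    40 = SHA-1, 64 = SHA-256."""
    if not re.fullmatch(r"[0-9a-fA-F]+", h):
        return "invalid"
    return {32: "md5", 40: "sha1", 64: "sha256"}.get(len(h), "unknown")

assert hash_type("d" * 32) == "md5"
assert hash_type("a" * 40) == "sha1"
assert hash_type("0" * 64) == "sha256"
assert hash_type("not-a-hash") == "invalid"
```

A publisher could run exactly this kind of check before release and either normalize to one digest type or label each entry's format explicitly.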

Or more simply stated: the indicators are not very descriptive and will have a high rate of false positives for defenders that use them. A few of the malware samples are interesting and now have context (OnionDuke) to their use but the majority do not have the required context to make them useful without considerable effort by defenders. Lastly, some of the samples were already known and the government information does not add any value – if these were previously classified it is a perfect example of over classification by government bureaucracy.


The report goes beyond indicators to include new tradecraft and techniques used by the Russian intelligence services

The report was to detail new tradecraft and techniques used by the RIS and specifically noted that defenders could leverage this to find new tactics and techniques. Except – it doesn’t. The report instead gives a high-level overview of how APT28 and APT29 have been reported to operate which is very generic and similar to many adversary campaigns (Figure 8). The tradecraft and techniques presented specific to the RIS include things such as “using shortened URLs”, “spear phishing”, “lateral movement”, and “escalating privileges” once in the network. This is basically the same set of tactics used across unrelated campaigns for the last decade or more.



Figure 8: APT28 and APT29 Tactics as Described by DHS/FBI GRIZZLY STEPPE Report

This description in the report wouldn’t be a problem for a more generic audience. If this was the DHS/FBI trying to explain to the American public how attacks like this were carried out, it might even be too technical, but it would be okay. The stated purpose, though, was for network defenders to discover new RIS tradecraft. For that purpose, it is not technical or descriptive enough and is simply a rehashing of common network defense knowledge. Moreover, if you read a technical report from FireEye on APT28 or APT29 you would have better context and technical information for defense than if you read the DHS/FBI document.

Closing Thoughts

The White House’s response and combined messaging from the government agencies is well done, and the technical attribution provided by private sector companies has been solid for quite some time. However, the DHS/FBI GRIZZLY STEPPE report does not meet its stated intent of helping network defenders and instead chooses to focus on a confusing assortment of attribution, non-descriptive indicators, and re-hashed tradecraft. Additionally, the bulk of the report (8 of the 13 pages) consists of general high-level recommendations not descriptive of the RIS threats mentioned, with no linking of what activity would help with which aspect of the technical data covered. It simply serves as an advertisement for documents and programs the DHS is trying to support. One recommendation for Whitelisting Applications might as well read “whitelisting is good mm’kay?” If that recommendation had been overlaid with what it would have stopped in this campaign specifically, and how defenders could then leverage that information going forward, it would at least have been descriptive and useful. Instead it reads like a copy/paste of DHS’ most recent documents – at least in a vendor report you usually only get 1 page of marketing instead of 8.

This ultimately seems like a very rushed report put together by multiple teams working different data sets and motivations. It is my opinion and speculation that there were some really good government analysts and operators contributing to this data, and then report reviews, leadership approval processes, and sanitization processes stripped out most of the value and left behind a very confusing report trying to cover too much while saying too little.

We must do better as a community. This report is a good example of how a really strong strategic message (POTUS statement) and really good data (government and private sector combination) can be opened to critique due to poor report writing.

New Suspected Cyber Attack on Ukraine Power Grid – Advice as Information Emerges

December 19, 2016

Reporting in Ukraine has emerged indicating another suspected cyber attack on the electric grid (the first being the confirmed one in 2015). Initial reporting is often inaccurate or offers only a small view of incidents, but it’s worth watching cautiously to see what information emerges. Here’s what we know so far:

Reports of Suspected Cyber Attack:
Around noon of December 19th, 2016 reports began to surface related to a possible cyber attack on the Ukraine electric grid. The attack is suspected to have taken place near midnight local Ukraine time on the 17th. The Pivnichna transmission-level substations have been called out as possibly being the site attacked.  This is of course concerning for numerous reasons including the cyber attack on the Ukraine grid in December 2015 as well as traditional ongoing military actions in Ukraine. The reporting is from various Ukrainian sources including a press release from the impacted company Kyivenergo confirming that there was an unintentional outage and that they took actions to restore operations.

The first 24 and often 48 hours of reporting are notoriously bad for OSINT analysts, but the reporting should still be utilized. Simply exercise caution and do not present information as fact yet. At this point I would assess with low confidence that a cyber attack has occurred. This is not to say there is doubt around the event, only that there are other theories that carry equal weight until more evidence is available. However, based on the sourcing of the information (internal Ukrainian sources) and the Ukrainian grid operators’ experience dealing with a similar situation last year, I have a higher trust level in the sources (thus the low confidence assessment that the attack is real). We will learn more later, and it may be revealed that the outage was not related to a cyber attack; however, I am aware of an ongoing investigation by Ukrainian authorities, and they are treating a cyber attack as the leading theory for the outage. I will caution again, though, that no one with direct knowledge of the event has confirmed that it was a cyber attack; only that it is the leading theory and that the disconnect was unintentional.

What Should Be Done:
Right now the best action for those not on the ground or working at infrastructure companies is to wait and see if more information is revealed. Journalists should be careful not to jump to conclusions, and those in the security community should stay tuned for more information. I would recommend journalists contact sources in the area but realize that the information is very preliminary; those not on the ground in Ukraine will have very little to add to knowledge of the situation.

If you are in the infrastructure (ICS/SCADA) security community it would be wise to use established channels to send decision makers a situational awareness report on the news; I would note that it is currently a low confidence assessment due to a lack of first-hand evidence, but that it is a situation worth watching. This should be paired with security staff taking an active defense posture: monitoring the ICS network and looking for abnormal activity. Preliminary information from the investigation underway by the Ukrainian authorities indicates that a remote attack is suspected. I would stay far away from linking this to the Sandworm attack currently (attribution right now is not possible), but I would review the methods by which they achieved the remote attack on Ukraine last year and use that information to hunt for threats. As an example, look in logs for abnormal VPN session lengths, increased frequency of use, and unusual connection request times.
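
To make that hunting advice concrete, here is a minimal Python sketch that flags long sessions, off-hours connections, and high daily session counts in a VPN log. The CSV column names (`user`, `start`, `end`) and the thresholds are assumptions for illustration only; adapt the parsing to whatever export your own VPN concentrator produces.

```python
# Sketch: flag anomalous VPN sessions from a CSV export.
# Column names and thresholds are hypothetical -- tune to your environment.
import csv
import io
from collections import defaultdict
from datetime import datetime

def flag_anomalous_sessions(csv_text, max_hours=12, max_daily_sessions=5):
    """Return (user, reason) pairs for sessions worth a closer look."""
    findings = []
    daily_counts = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        start = datetime.fromisoformat(row["start"])
        end = datetime.fromisoformat(row["end"])
        hours = (end - start).total_seconds() / 3600
        if hours > max_hours:
            findings.append((row["user"], f"session length {hours:.1f}h"))
        if start.hour < 5:  # connections between midnight and 05:00 stand out
            findings.append((row["user"], f"off-hours connect at {start.time()}"))
        daily_counts[(row["user"], start.date())] += 1
    for (user, day), n in daily_counts.items():
        if n > max_daily_sessions:
            findings.append((user, f"{n} sessions on {day}"))
    return findings

log = """user,start,end
jsmith,2016-12-17T02:10:00,2016-12-17T02:40:00
jsmith,2016-12-17T09:00:00,2016-12-18T03:00:00
"""
for user, reason in flag_anomalous_sessions(log):
    print(user, reason)
```

None of these findings prove compromise on their own; the point is to surface legitimate-looking remote access that deserves a human analyst’s attention.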

If you happen to be a customer of Dragos, Inc. you will have received a notification already with some recommendations for strategic, operational, and tactical level players. Check your portal and be on the lookout for a briefing request coming from us if you would like to attend remotely. For the wider community, be wary of phishing attempts taking advantage of this possible attack.

In Closing:
My chief recommendation is for everyone to avoid alarmism and use this as an opportunity to review logs and information from the ICS, searching for TTPs we’ve seen before such as remote usage of the ICS through legitimate accounts, VPNs, and remote desktop capabilities. If this attack turns out to be real, it is unlikely to involve anything so novel that it couldn’t have been detected. It’s important to remember that defense is doable – now go do it.

Threats of Cyber Attacks Against Russia: Rationale on Discussing Operations and the Precedent Set

November 6, 2016

Reports that the U.S. government has military hackers ready to carry out attacks on Russian critical infrastructure have elicited a wide range of responses on social media. After I tweeted the NBC article, a number of people responded with how stupid the U.S. was for releasing this information, what poor OPSEC it was to discuss these operations, and even how this constitutes an act of war. I want to use this blog to put forth some thoughts of mine on those specific claims. However, I want to note in advance that this is entirely my opinion. I wouldn’t consider this quality analysis or even insightful commentary, but instead just my thoughts on the matter that I felt compelled to share since I work in critical infrastructure cyber security and was at one point a “military hacker.”

The Claim

The claim stems from an NBC article which notes that a senior U.S. intelligence official shared top-secret documents with NBC News. These top-secret documents apparently indicated that the U.S. has “penetrated Russia’s electric grid, telecommunications networks and the Kremlin’s command systems, making them vulnerable to attack by secret American cyber weapons should the U.S. deem it necessary.” I’m going to make the assumption that this was a controlled leak, given the way it was presented. Additionally, I make this assumption because of the senior officials interviewed for the wider story, including former NATO commander ADM (ret.) James G. Stavridis and former CYBERCOM Judge Advocate COL (ret.) Gary Brown, who likely would not have touched a true “leak”-driven story without some sort of blessing to do so. That is, before anyone suggests this was some sort of mistake: it was very likely authorized by the President at the request of senior officials or advisers such as the Director of National Intelligence or the National Security Council. The President is the highest authority for deeming material classified or not, and if he decided to release this information it’s an authorized leak. Going off of this assumption, let’s consider three claims that I’ve seen recently.

The U.S. is Stupid for Releasing This Information

It is very difficult to know the rationale behind actions we observe. This is especially true in cyber intrusions and attacks. If an adversary happens to deny access to a server, did they intend to, or was it accidentally brought down while performing other actions? Did the adversary intend to leave behind references to file paths and identifying information, or was it a mistake? These debates around intent and observations are a challenge many analysts must carefully overcome. This case is no different.

Given the assumption that this is a controlled leak, it was obviously done with the intention of one or more outcomes. In other words, the U.S. government wanted the information out, and its rationale is likely as varied as the members involved. While discussing a “government” it’s important to remember that the decision was ultimately in the hands of individuals, likely two dozen at the most. Their recommendations, biases, views on the world, insight, experience, etc. all contribute to what they expect the output of this leak to be. This makes it even more difficult to assess why a government would do something, since it’s more important to know the key members in the Administration, the military, and the Intelligence Community and their motivations than to know the historical pattern of government operations and similar decisions. Considering the decision was likely not ‘stupid’ and was more likely made for some intended purpose, let’s explore what two of those purposes might be:

Deterrence

I’m usually not the biggest fan of deterrence in the digital domain, as it has thus far not been very effective and the qualities needed for a proper deterrent (a credible threat and an understood threshold) are often lacking. Various governments lament red lines and actions they might take if those red lines are crossed, but what exactly those red lines are, and what the response will be if they are crossed, is usually never explored. Here, however, the U.S. government has stated a credible threat: the disruption of critical infrastructure in Russia (the U.S. has shown before that it is capable of doing this). It has combined this with a clear threshold of what it does not want its potential adversary to do: do not disrupt the elections. For these reasons my normal skepticism around deterrence is lessened. However, in my own personal opinion this is likely a side effect and not the primary purpose, especially given the form of communication that was chosen.

Voter Confidence

Relations between Russia and the U.S. this election have been tense. Posturing and messaging between the two states has taken a variety of forms, both direct and indirect. This release to NBC, though, is interesting, as it would be indirect messaging if positioned toward the Russian government but direct messaging if intended for U.S. voters. My personal opinion (read: speculation) is that it is much more intended for the voters. At one point in the article NBC notes that Administration officials revealed to them that they delivered a “back channel warning to Russia against any attempt to influence next week’s vote”. There’s no reason to reiterate a back channel message in a public article unless the intended audience (in this case the voters) wasn’t aware of the back channel warning. The article reads as an effort by the Administration to tell the voters: “don’t worry and go vote, we’ve warned them that any effort to disrupt the elections will be met with tangible attacks instead of strongly worded letters.”

It’s really interesting that this type of messaging to the American public is needed. Cyber security has never been such a mainstream topic before, especially not during an election. This may seem odd to those in the information security community who live with these discussions on a day-to-day basis anyway, but coverage of cyber security has never before been mainstream-media worthy for consistent periods of time. CNN, Fox, MSNBC, and the BBC have all been discussing cyber security throughout the recent election season, ranging from the DNC hacks to Hillary’s emails. That coverage has gotten fairly dark though, with CNN, NBC, Newsweek, and New York Times articles like this one and prime-time segments telling voters that the election could be manipulated by Russian spies.

This CNN piece directly calls out the Kremlin for potentially manipulating the elections in a way that combines it with Trump’s claims that the election is rigged. This is a powerful combination. There is a significant portion of Trump’s supporters who will believe his claim of a rigged election, and in conjunction with the belief that Russia is messing with the election it’s easy to see how a voter could become disillusioned. Neither the Democrats nor the Republicans want fewer voters to turn out, and (almost) all on both sides want the peaceful transition of power after the election, as has always occurred before. Strong messaging from the Administration and others in mainstream news media is important to restore voters’ confidence both in the election itself and in the manner in which people vote.

Unfortunately, it seems that this desire is being accidentally countered by some in the security community. In very odd timing, Cylance decided to issue a press release on vulnerabilities in voting machines on the same day, unbeknownst to them, as the NBC article. The press release stated that its intent was to encourage mitigation of the vulnerabilities, but with 4 days until the election as of its release, that simply will not be possible. The move is likely very well intended but unlikely to give voters much confidence in the manner in which they vote. I’ll avoid a tangent here, but it’s worth mentioning the impact security companies can have on larger political discussions.

The Leak is Bad OPSEC

I will not spend as much time on this claim as I did the previous one, but it is worth noting the reaction that releasing this type of information is bad operational security. Operational security is often very important to ensure that government operations can be coordinated effectively without the adversary having the advance warning required to defend against the operation. However, in this case the intention of the leak is likely much more about deterrence or voter confidence, and therefore the operation itself is not the point. Keeping the operation secret would not have helped either potential goal. More importantly, compromising information systems is not something that has ever been seen as insurmountably difficult. For the U.S. government to reveal that it has compromised Russian systems does not magically make those systems more secure now. Russian defense personnel do not have anything more to go on than before in terms of searching for the compromise, they likely already assumed they were compromised, and looking for a threat and cleaning it up across multiple critical infrastructure industries and networks would take more than 4 days even if they had robust technical indicators of compromise and insight (which the leak did not give them). The interesting part of the disclosure is not the OPSEC but the precedent it sets, which I’ll discuss in the next section.

The Compromises are an Act of War

Acts of war are governed by Article 2(4) of the United Nations Charter, which addresses armed conflict. The unofficial rules regarding war in cyberspace are contained in the Tallinn Manual. In neither of these documents is the positioning of capabilities to do future damage considered an act of war. More importantly, the NBC article notes that the “cyber weapons” have not been deployed yet: “The cyber weapons would only be deployed in the unlikely event the U.S. was attacked in a significant way, officials say.” Therefore, what is being discussed is cyber operations that have gained access to Russian critical infrastructure networks but have not yet positioned “weapons” to do damage. Intrusions into networks have never been seen as an act of war by any of the countries involved in such operations. So what’s interesting about this? The claim by officials that the U.S. had compromised Russian critical infrastructure networks, including the electric grid, years ago.

For years U.S. intelligence officials have maintained that Russian, Chinese, Iranian, and at times North Korean government operators have been “probing” U.S. critical infrastructure such as the power grid. The pre-positioning of malware in the power grid has long been rumored and has been a key concern of senior officials. The acknowledgment, in a possibly intended leak, that the U.S. has been doing the same for years is significant. It should come as no surprise to anyone in the information security community, but as messaging from senior officials it does set a precedent internationally (albeit a small one given that this is a leak and not a direct statement from the government). Now, if capabilities or intrusions were found in the U.S. power grid and made public, the offending countries could claim they were only doing the same as the U.S. government. In my personal experience, there is credibility to claims that other countries have been compromising the power grid for years, so I would argue against the “U.S. started it” claim that is sure to follow. The assumption is that governments try to compromise the power grid ahead of time so that when needed they can damage it for military or political purposes. But the specific compromises that have occurred have not been communicated publicly by senior officials, nor have they been attributed to Russia or China. The only time a similar specific case was discussed with attribution was against Iran for compromising a small dam in New York, and that action was heavily criticized by officials and met with a Department of Justice indictment. Senior officials’ acknowledgment of U.S. cyber operations compromising foreign power grids for the purpose of carrying out attacks if needed is unique, and a message likely heard loudly even if later denied.
It would be difficult to argue that the leak will embolden adversaries to do this type of activity if they weren’t doing it already, but it does in some ways make the operations more legitimate. Claiming responsibility for such compromises while indicting other countries for doing the same definitely makes the U.S. look hypocritical regardless of how it’s rationalized.

Parting Thoughts

My overall thought is that this information was a controlled leak designed to help voters feel more confident in terms of both going to cast their ballots and in the overall outcome. Some level of deterrence was likely a side effect that the Administration sought. But no, this was not simply a stupid move nor was it bad OPSEC or an act of war. I also doubt it is simply a bluff. However, there is some precedent set and pre-positioning access to critical infrastructures around the world just became a little more legitimate.

One thing that struck me as new in the article, though, was the claim that the U.S. military used cyber attacks to turn out the lights temporarily in Baghdad during the 2003 Iraq invasion. Considering the officials interviewed for the story and the nature of the (again, possibly) controlled leak, that is a new claim from senior government officials. There was an old rumor that Bush had that option on the table when invading Iraq, but that the attack was cancelled for fear of the collateral damage of taking down a power grid. One can never be sure how long “temporary” might be when damaging such infrastructure. The claim in the article that the attack actually went forward would make it the first cyber attack on a power grid that led to outages – not the Ukrainian attack of 2015 (claims of a Brazilian outage years earlier were never proven and seem false from available information). However, the claim runs counter to reports at the time that power outages did not occur during the initial hours of the invasion. Power outages were reported in Iraq, but after the end of active combat operations, and looters were blamed. If a cyber attack in Iraq ever made sense militarily, it would not have made as much sense after the initial invasion.

I’ve emailed the reporter of the story asking what the source of that claim was and I will update the blog if I get an answer. It is possible the officials stated this to the reporters but misspoke. In my time in the government it was not a rare event for senior officials to confuse details of operations or hear myths outside of the workplace and assume them to be true. Hopefully, I can find out more as that is a historically significant claim. Based on what is known currently I am skeptical that outages following the initial Iraq invasion in 2003 were due to a cyber attack.

A Collection of Resources for Getting Started in ICS/SCADA Cybersecurity

August 28, 2016

I commonly get asked by folks what approach they should take to get started in industrial control system (ICS) cybersecurity. Sometimes these individuals have backgrounds in control systems, sometimes they have backgrounds in security, and sometimes they are completely new to both. I have made this blog for the purpose of documenting my thoughts on some good resources to pass off to people interested; I will add to it over time as I find other resources I like. Do not attempt to do everything at once; it’s a collection to refer back to in an effort to polish up skills or learn a new industry. Rest assured, no matter how ill-prepared you might feel in getting started, realize that by having the passion to ask the question and start down the path you are already steps ahead of most. We need passionate people in the industry; everything else can be taught.

Optional Pre-Reqs

It’s always good to pick up a few skills regarding the fundamentals of computers, networks, and systems in general. I would recommend trying to pick up a scripting language as well; even if you don’t find yourself scripting a lot understanding how scripting works will add a lot of value to your skill set.

  • Learn Python the Hard Way
    • Learn Python the Hard Way is a great free online resource that teaches you, step by step, the Python scripting language. There are a lot of different opinions about different scripting languages. In truth, most of them have value in different situations, so I’ll leave it to you to pick your own language (and I won’t tell you that you’re wrong for not learning Python, even though you are). Another good programming resource is Code Academy.
  • MIT Introduction to Computer Programming
    • MIT’s open courseware is a treasure for the community. It shocks me how many people do not take advantage of free college classes from top universities. This is the Introduction to Computer Science and Programming course. It should be taken at a slow pace but it’ll give you a lot of fundamental skills.
  • MIT Introduction to Electrical Engineering and Computer Science
    • Another MIT open course but this time focused on electrical engineering. This is a skill that will help you understand numerous types of control systems better as well as have a better grasp on how computers work.
  • Microsoft Virtual Academy
    • Microsoft Virtual Academy can be found at various locations on YouTube. I have linked to the first one; I would recommend browsing through the topic list for everything from fundamentals of networking, to fundamentals of computers, to how the Internet works.
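
As a taste of why scripting pays off, the short sketch below counts failed logins per source address from a handful of auth-log style lines; a few lines of Python can turn a tedious eyeball-the-logs task into a one-second triage step. The log lines and their layout are fabricated for illustration; real syslog formats vary.

```python
# Count failed-login attempts per source IP from auth-log style lines.
# These log lines are fabricated examples, not from a real system.
from collections import Counter

lines = [
    "Dec 19 00:01:02 host sshd: Failed password for root from 203.0.113.7",
    "Dec 19 00:01:05 host sshd: Failed password for admin from 203.0.113.7",
    "Dec 19 00:02:11 host sshd: Accepted password for ops from 198.51.100.2",
    "Dec 19 00:03:44 host sshd: Failed password for root from 198.51.100.9",
]

failures = Counter(
    line.rsplit(" ", 1)[-1]  # last token in these sample lines is the source IP
    for line in lines
    if "Failed password" in line
)

for ip, count in failures.most_common():
    print(ip, count)
```

Once you can do this on four lines, you can do it on four million; that is the value of picking up a scripting language early.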

Intro to Control Systems

Control systems run the world around us. Escalators, elevators, types of medical equipment, steering in our cars, and building automation systems are types of control systems you interact with daily. Industrial control systems (ICS) are industrial versions of control systems found in locations such as oil drilling, gas pipelines, power grids, water utilities, petrochemical facilities, and more. This section will go over some useful resources and videos to learn more about industrial control systems.

  • The PLC Professor
    • PLC Professor and his website contain a lot of great resources for learning what programmable logic controllers (PLCs) and other types of control systems are and how their logic works. Some resources are free while others are paid. At some point, getting a physical trainer kit to learn on is going to be a requirement.
  • Control System Basics
    • This is a great video explaining control system basics, including the type of logic these systems use to sense physical conditions and act upon them.
  • What is SCADA?
    • You’ve no doubt heard the term SCADA, if you haven’t you will. It stands for Supervisory Control and Data Acquisition and is a type of ICS. This video is a nice basic approach to explaining SCADA.
  • Department of Energy – Energy 101
    • The Department of Energy has a series of Energy 101 videos to explain basic concepts of different types of energy generation, sources, etc. It’s a fantastic series that should excite you about the field while explaining key terms and concepts.
  • Wastewater Treatment Explanation Video
    • We all need wastewater treatment facilities and learning about them helps you understand how control systems work and just how complex simple tasks in life can be (if we didn’t have control systems). These types of videos are important for you to watch and learn so that you get exposed to different industries. ICS is not really a community, it’s a collection of communities.
  • Waste Water – Flush to Finish
    • Another good wastewater explanation video.
  • Refinery Crude Oil Process
    • This is a video explaining a refinery crude oil process. If these types of videos don’t excite you to some extent you may be in the wrong career field. The world around us is magnificent and learning different industries will start to help you ask the right questions which will lead to your education on the subject.
  • Natural Gas Processing
    • This is an older video (the industry has definitely become more advanced than represented here) but extremely interesting on how natural gas is harvested, processed, and transferred. Think about all the control systems that have to go into this seemingly simple process.
  • How a Compressor Station Works
    • One particularly interesting (and historically difficult to secure) portion of the ICS community is the natural gas pipeline. This video talks about natural gas to some extent but is really focused on compressor stations. Compressor stations, as remote sites, offer numerous opportunities and challenges to defenders. In short – they’re pretty cool.
  • Chemical Engineering YouTube Channel
    • A great series of videos explaining and showing different components of chemical processing.
  • Steel from Start to Finish
    • This is an example of how steel is made. The video, like the others in this section, shows an important process that can help you understand all that goes into control system security. It’s important to know the real-world impacts and applications of the processes we are trying to defend to fully understand how important safety and reliability are as the main components of industrial automation.
  • Nuclear Reactor Explained
    • This is a simplistic but extremely easy to digest explanation and animation of a nuclear reactor. Nuclear energy has a bad rap due to pop culture but is a highly clean and safe form of energy. It’s really useful to understand this process and how these systems are designed and, ideally, isolated.
  • Nuclear Power Station
    • Building from the last video, here’s another video diving deeper into nuclear power. What you should focus on here is the design and engineering that go into the safety systems. Safety systems can be bypassed, and there are no ‘unhackable’ things, but this helps you understand just how these systems are designed to be safe by default even if not built with security in mind. The Fukushima event can be observed as a worst-case and extremely unlikely scenario. Learning from it will be important; here you’ll find a good video on it.
  • Thermal Power Plant
    • There are many ways to generate power; this video explains thermal power and the complexity of the environment.
  • SCADA Utility 101
    • Rusty Williams has just the right type of southern speaking style that makes an audience want to learn more. The guy is awesome, the video explains SCADA from an electric utility perspective, and this is a must watch.
  • Electric Generation and Transmission
    • Didn’t get enough of Rusty? Here’s another video of him explaining the generation and transmission of electricity.
  • Control Lectures
    • This is a fantastic series by Brian Douglas which covers a wide range of lectures on control systems in a very easy to process way.
  • Safety Systems
    • It’s good to get familiar with safety systems as well. Safety systems can be either active or passive. As an oversimplification, think of these as systems that take control when an unsafe event occurs and help regulate the process or shut it down safely. Safety can also be the product of good engineering instead of a dedicated system. Either way, there is a trend in the community toward integrating safety systems into one device, where the control device is also the safety device. This has cost savings but horrendous cyber security consequences, and thus horrible safety consequences.
  • Safety Valves
    • Building on your understanding now of safety systems here’s an example of a safety valve in a process and how it can work to keep the operations, and more importantly the people around it, safe.
  • Industrial Disaster Explanation Videos
    • The U.S. Chemical Safety and Hazard Investigation Board has a number of videos explaining industrial disasters. This is an important resource to understand what can go wrong in industrial automation regardless of the cause (these are not cyber related but are important to understand as things that cyber could potentially cause if we are not careful). In IT, if things go wrong people do not generally die – in ICS, death, injury, and environmental harm are very real concerns.
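
To tie the control logic and safety system ideas above together, here is a toy Python sketch of a single PLC-style logic scan: read an input, evaluate logic, command an output, with a high-pressure interlock and hysteresis. The process, setpoints, and values are invented for illustration and are not modeled on any real system.

```python
# Toy PLC-style logic scan: read input, run logic, write output.
# The process model and setpoints are invented for illustration only.

def scan(pressure_psi, pump_running, high_trip=120.0, restart_below=100.0):
    """One logic scan: bang-bang pump control with a safety interlock.

    Returns the commanded pump state. The interlock trips the pump at
    high_trip and latches it off until pressure falls below restart_below;
    the hysteresis band prevents rapid cycling right at the trip point.
    """
    if pressure_psi >= high_trip:
        return False   # safety interlock: trip the pump
    if not pump_running and pressure_psi >= restart_below:
        return False   # latched off until pressure recovers
    return True        # normal operation: keep pumping

# Simulate a slow pressure rise and the interlock's response over five scans.
pump = True
history = []
for pressure in (90, 110, 125, 118, 95):
    pump = scan(pressure, pump)
    history.append((pressure, pump))
print(history)
```

Real PLCs run scans like this continuously, thousands of times per minute; the point of the sketch is simply that the safety behavior lives in logic, which is exactly why integrating control and safety into one device has the security consequences described above.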

Intro to Computer and Network Security

There are a lot of resources in the form of papers below (especially the SANS Reading Room), which are all great. However, you really need to get hands-on, so many of the resources are focused on tools and data sets. Try to read up as much as possible and then dive deeply into hands-on learning.

  • The Sliding Scale of Cyber Security
    • I wrote this paper specifically to address the nebulous nature of “cyber security.” When people say they specialize in cyber security, what exactly does that mean? I put forth that there are 5 categories of investment that can be made. The prioritization for the value towards security should be towards the left hand side of the scale. It is ok to invest in multiple categories at once but understand the true return on investment you’re getting versus the cost.
  • VMWare
    • You’ll want to be able to set up virtual machines (VMs) to get hands-on with files and various security tools. VMWare is a great choice, as is VirtualBox. VMWare has a free version you’ll want to use (Player). Don’t worry about getting Workstation or Player Pro until later, when you are more experienced and want to save snapshots (copies of your VM to revert back to). Below you’ll find a sample video on VMs; feel free to Google around for better understanding.
  • Security Onion
    • You’re going to want to get hands on with the files presented in this guide; Security Onion is an amazing collection of free tools to do just that with a focus on network security monitoring and traffic analysis.
    • If you’re super cool you’ll want to get into forensics at some point; the SIFT VM from SANS is a collection of tools you’ll need to get started.
  • REMnux
    • Before you try out reverse engineering malware (REM) you’ll want to have a safe working environment to do so. This is not a beginner topic but at some point you’ll likely want to examine malware, Lenny’s REMnux VM is the safe place to do that.
  • Malware Traffic Analysis
    • Brad’s blog on malware traffic analysis is one of the best resources in the community. It combines sample files with his walk throughs of what they are and how to deal with them. You can learn a lot this way very quickly.
  • Open Security Training
    • This website is dedicated to open (free) security training. There are a number of qualified professionals who have dedicated time to teach things from the basics of security to advanced reverse engineering concepts. You could spend quite some time on this website’s courses and all of them would make you more capable in this field. There are often full virtual machines (VMs), slides, and videos for the courses.
  • Sample PCAPs from NETRESEC
    • These packet capture samples are invaluable to learning how our systems interact on the network. Take a tool like Wireshark and analyze these files to get familiar with them and the practice (Wireshark will continually be your friend in any field you specialize in).
  • DEFCON Capture the Flag Files
    • DEFCON has made available the files (and oftentimes walkthroughs) for their capture the flag contests. These range from beginner to advanced concepts in offensive security practices such as red teaming. Learning how to break into systems and how they fail is great for defense. It's not required but it can be helpful.
  • Iron Geek
    • This is an invaluable collection of videos from conferences around the community. If you’re looking for a specific topic it’s a good idea to search these conference videos. Felt like you missed out on the last decade of security? Don’t worry most of it’s captured here.
  • SANS Reading Room
    • The SANS Institute is the largest and most trusted source of cyber security training. Their Reading Room is a free collection of papers written by students and instructors covering almost every topic in security.
  • Krebs on Security
    • Krebs puts together a great blog doing quality investigative research on breaches, incidents, and cyber security topics that are newsworthy. While doing your self-education keep an eye out for breaking and exciting stories.
  • Honeynet Project
    • Consider this a capstone exercise. Read up on honeypots and learn to deploy a honeypot such as Conpot. The idea is that to run a honeypot correctly you’ll have to learn about safeguarding your own infrastructure, setting up proxies and secure tunnels, managing cloud based infrastructure such as an EC2 server, performing traffic analysis on activity in the honeypot, malware analysis on discovered capabilities, and eventually incident response and digital forensics off of the data provided to explore the impact to the system. Working up to this point and then running a successful honeypot for any decent length of time really helps develop and test out a wide range of skills in the Architecture, Passive Defense, Active Defense, and (potentially in the form of Threat Intel) Intelligence categories of the Sliding Scale of Cyber Security.
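The traffic-capture piece of that capstone boils down to recording who touches your listener and what they send. Here is a minimal sketch in Python of the core accept-and-log loop a honeypot is built around; it is purely illustrative — real honeypots such as Conpot emulate full ICS protocols, and none of the names below come from Conpot's actual API:

```python
# Minimal accept-and-log loop, the skeleton under any honeypot.
# Hypothetical sketch: binds a local TCP port, records each peer
# and the first chunk of data it sends, then shuts down.
import socket
import threading

def run_listener(host="127.0.0.1", port=0, max_conns=1):
    """Start a background listener; return (bound_port, events, thread)."""
    events = []  # one dict per connection: who connected, what they sent
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))  # port=0 lets the OS pick a free port
    srv.listen()
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            with conn:
                data = conn.recv(1024)
                events.append({"peer": addr[0], "data": data})
        srv.close()

    thread = threading.Thread(target=serve, daemon=True)
    thread.start()
    return bound_port, events, thread
```

Everything else the capstone lists — protocol emulation, secure tunnels, cloud infrastructure, and analysis of what the captured events reveal — layers on top of a loop like this one.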

Intro to Control System Cyber Security

Cybersecurity is not a new topic, but in ICS it is mostly unexplored. The hardest part for most folks is learning who to listen to and what resources to read. There are a lot of "experts" out there who will quickly lead you astray; look at people's resumes to see whether they have had the opportunity to do what they are speaking to you about. A lack of experience doesn't necessarily mean they are wrong, but it's an easy check. As an example, if someone calls themselves a "SCADA Security Guru" or a "thought leader" but they've only ever been a Chief Marketing Officer of an IT company, that should be a red flag. It is important to be very critical of information in this space while continually pushing forward to make the community better. Below are some trusted resources to help you on your journey.

  • An Abbreviated History of Automation and ICS Cybersecurity
    • This is a great SANS paper looking at the background on ICS cybersecurity. Well worth the read to make sure you understand many of the events that have occurred over the past twenty years and how they’ve inspired security in ICS today.
  • SANS ICS Library
    • This is the SANS ICS library which contains a number of posters and papers to get you started. Reference the blog as well for good explorations of topics. I write the Defense Use Case series as well which explores real and hyped up ICS attacks and lessons learned from them.
  • SCADAHacker Library
    • Joel has a fantastic collection of papers on ICS security, standards, protocols, systems, etc. Lots of valuable content in this collection.
  • The ICS Cyber Kill Chain
    • The attacks we are concerned most with on ICS take a different approach than traditional IT. This is a paper I wrote with Michael Assante exploring this and detailing the steps an adversary needs to take to accomplish their goals.
  • Analyzing Stuxnet (Windows Portion)
    • This is Bruce Dang’s talk at the 27th CCC in Germany on his exploration of analyzing Stuxnet. He was at Microsoft and was one of the first researchers to analyze it. This is a good understanding of the Windows portion of analysis. I show this video even though it’s a bit more advanced to highlight that there are often an IT and (operations technology) OT side of analysis.
  • Analyzing Stuxnet (ICS Portion)
    • Ralph Langner was responsible for deep diving into Stuxnet's ICS payload portion. This talk gives a good understanding of the OT side of the analysis.
  • To Kill a Centrifuge – Stuxnet Analysis
    • This is Ralph Langer’s excellent paper exploring the technical details on the Stuxnet malware and most importantly the ICS specific payload and impact. It is a good idea to read through the paper and Google the terms in the paper you do not understand.
  • SANS ICS Defense Use Case #5 – Ukraine Power Grid Attack
    • This is a paper I wrote with Michael Assante and Tim Conway, released through the E-ISAC, on our analysis of the Ukraine power grid attack in 2015. There are also recommendations for defense at each level of the ICS kill chain (applying one control is never enough to stop attacks).
  • Perfect ICS Storm
    • Glenn wrote a great paper looking at the interconnectivity of ICS and the networks around them with considerations on how it impacts monitoring and viewing the control systems.
  • Network Security Monitoring in ICS 101
    • Here is a great intro talk on network security monitoring in an ICS by Chris Sistrunk at DEFCON 23. Network security monitoring is exceptionally useful in ICS because it can be done with minimal data sets and passively which works inside the confines of the safety and reliability requirements of an ICS network.
  • Achieving Network Security Monitoring Visibility with Flow Data
    • A SANS webcast with myself and Chris Sanders exploring ICS network security monitoring and showing off his tool FlowBAT.
  • S4 Videos
    • The S4 conference run by Dale Peterson is a great community resource. He has posted a number of the conference presentations which will give you a great look at the ICS security community especially from the researcher perspective.
  • Defense Will Win
    • Dale Peterson’s excellent S4 talk that has an upbeat attitude of “defense will win.” This is something I completely agree with and for a few years now I have been championing the phrase “Defense is Doable” to help folks not get down when it comes to ICS cyber security. It may seem like the hardest challenge out there but it’s worthwhile and these are the most defensible environments on the planet; maybe not the most defended – but we will get there.  
  • The ICS Cyber Security Challenge
    • This is an annual challenge I put on sponsored by SANS which gives you access to questions and data sets for helping you progress your ICS cyber security skill set.

Recommended ICS Cybersecurity Books

  • Rise of the Machines: A Cybernetic History
    • It seems a bit odd to put a non-technical book as my first recommendation, but I assure you it is with reason. Dr. Thomas Rid wrote this book to attempt to fully understand the history, implications, and usages of the word "cyber". Delightfully, control systems have a major role throughout the book. It was control systems that got us started with "cybernetics," which is eventually where we got the "cyber" that fills our daily lives.
  • Handbook of SCADA/Control Systems Security
    • Robert (Bob) Radvanovsky and Jacob (Jake) Brodsky put together this wonderful collection of articles from people throughout the community. It covers a wide variety of topics from a wide variety of personalities and professionals.
  • Protecting Industrial Control Systems from Electronic Threats
    • Joe Weiss is a polarizing individual in the community but only because of how passionately he cares about the industry and how long he’s been in the community. Many of us here today in the community owe much to Joe. The scars he carries are from forging a path that has made ICS security much more mainstream.
  • Industrial Network Security
    • Eric Knapp and Joel Langill wrote this book looking specifically at the network security side of ICS. It’s a fantastic resource exploring different technologies and protocols by two professionals I’m glad to call peers and friends.
  • Hacking Exposed: Industrial Control Systems
    • This book takes a penetration testing focus on ICS and talks about how to test and assess these systems from the cybersecurity angle while doing it safely and within bounds of acceptable use inside of an ICS. It’s written by Clint Bodungen, Bryan Singer, Aaron Shbeeb, Kyle Wilhoit, and Stephen Hilt who all are trusted professionals in the industry.

Recommended Professional Training

You in no way need certifications or professional training to become great in this field. However, sometimes both can help, whether for job opportunities, getting a raise, or polishing up skills you've developed. I highly encourage you to learn as much as you can before getting into a professional class (the more you know going in, the more you'll take away), and I encourage you to try to find an employer to pay your way (they aren't cheap). If your employer doesn't have a training policy, it's a good time to try to find a new employer. Here are a few professional classes I like for ICS cyber security training (I'm biased because I teach at SANS, but I teach there because I believe in what they provide).

  • SANS ICS 410 – ICS/SCADA Essentials
    • This class is designed to be a bridge course; if you are an ICS person who wants to learn security, or a security person who wants to learn ICS, this course offers the bridge between those two career fields and offers you an introduction into ICS cyber security.
  • SANS ICS 515 – ICS/SCADA Active Defense and Incident Response
    • This is the class I authored at SANS teaching folks about targeted threats (such as nation-state adversaries or well funded crime groups) that impact ICS and how to hunt them in your environment and respond to incidents.
  • CYBATI
    • Matt Luallen runs the CYBATI class. It's a hands-on class that's been tried and tested and is popular around the community. He sometimes teaches it at SANS events and also at other events. Matt was one of the first people I met in the ICS security community and has been like a brother to me over the years; he's a fantastic resource for the community and, more importantly, just a really good person. Learning from him (and getting to use his CYBATIworks kit, a really useful training kit for sale) is something everyone should get to do at some point in their career.

Recommended Conferences

No matter how much time you spend reading or practicing, eventually you need to become part of the community. Contributions in the form of research, writing, and tools are always appreciated. Contributions in the form of conference presentations are especially helpful as they introduce you to other interested folks. The ICS cybersecurity community is an important one on many levels. It's one of the best communities out there, with hard working and passionate people who care about making the world a safer place. Below are what I consider the big 5. These are the conferences that cover ICS cyber security generally (rather than a specific industry, such as API for oil and gas or GridSecCon for the electric sector), although those are valuable as well.

  • SANS ICS Security Summit
    • For over a decade the SANS ICS Security Summit has been a leading conference bringing together researchers, industry professionals, and government audiences. The page above links to the various SANS ICS events, but look for the one that says "ICS Security Summit" each year. It is usually held in March at Disney World in Orlando, Florida. Its strong suit is the educational and training aspect, not only because of the classes but also because of the strong industry focus.
  • DigitalBond’s S4
    • The S4 conference is a powerhouse of leading ICS security research. Dale puts on a fantastic conference every year (now with a European and Japanese venue as well each year) that brings together some of the most cutting edge research and ideas. S4 in the US is often held in January in Florida.
  • The ICS Cyber Security Conference (WeissCon)
    • Affectionately known as WeissCon after its founder Joe Weiss, the conference is now owned and operated by SecurityWeek and usually runs in October at different US locations each year (Georgia is usually a central location for the conference though). The conference brings together a portion of the community not often found at the other events and has strong buy-in from the government community as well as the vendor community.
  • The ICS Joint Working Group (ICSJWG)
    • The ICSJWG is a free conference held twice a year by the Department of Homeland Security. I often encourage people to go to ICSJWG first as a type of intro into the community; then to the SANS ICS Security Summit for more of a view into the asset owner community and to get training; then to S4 for the latest research; then to WeissCon to see portions of the community and the vendor audience not represented elsewhere; and finally to 4SICS for an international view. It is perfectly ok to go to all five of the big conferences a year (I do), but if you need a general path, that is the one I would follow initially.
  • 4SICS
    • 4SICS is held every year in Stockholm, Sweden, usually in October, and is a fantastic gathering of ICS professionals from around Europe. The conference usually attracts the same type of research and big-name audience that you would find at S4, but with deep roots in Europe as represented by its founders, Erik and Robert. They are two of the friendliest people in the ICS community and have a wealth of experience from decades of defending infrastructure. Stockholm is cold in the winter but the people and their optimism will keep you warm.

This is just a small collection of the fantastic resources out there. I will continually try to update it as especially good materials become available. Always fight to be part of the community and interact – that is where the real value in learning is. Never wait for someone to show you, though; even the "experts" are usually only expert in a few things. It is up to you to teach yourself and involve yourself. We as a community are waiting with open arms.

Intelligence Defined and its Impact on Cyber Threat Intelligence

August 25, 2016

Michael Cloppert wrote a great piece arguing for a new definition of cyber threat intelligence. The blog is extremely well written (I personally love the academic style and citations) and puts forth a good discussion on operations. Sergio Caltagirone published an equally valuable rebuttal in which he agreed with Mike that there is accuracy missing from current cyber threat intelligence definitions but noted that Mike focused too much on operations. The purpose of this blog is not to rebut their findings but to add to the conversation. In many aspects I agree with both Mike and Sergio; I would highlight, though, that the forms of intelligence discussed are very policy focused (sometimes even military focused) and influence how we define cyber threat intelligence. I do not imagine that between these three blogs we've settled a long-standing debate on intelligence, but the intent is to add to the discussion and encourage thoughts by others.

In Mike’s piece the definition he presented for the field of cyber threat intelligence is the “union of cyber threat intelligence operations and analysis” each of which he previously defined. Sergio responded by stating “Intelligence doesn’t serve operations, intelligence serves decision-making which in turn drives operations to achieve policy outcomes.” I agree with this understanding of intelligence to meet policy needs and while Sergio intentionally does not intend to cover all aspects of intelligence outside of policy I believe it is important to consider. Mike teased out at one point that “…’intelligence’ more broadly is a bias toward a particular type of intelligence, and they continue to overwhelmingly focus on geopolitical outcomes.” He gives an example of business intelligence as another form of intelligence and accepts that the basis of intelligence is interpreted information with an assessment to advance an interest. This is where he stops though in an effort to stay focused on defining cyber threat intelligence. This is where I would like to begin.

Dr. Michael S. Goodman, a professor of intelligence studies at King's College London, wrote a piece for the CIA's Center for the Study of Intelligence where he discussed the challenges and benefits of studying and teaching intelligence. He specifically noted that "The academic study of intelligence is a new phenomenon" although the field of intelligence itself is very old. More relevant to this blog post, he wrote that "Producing an exact definition of intelligence is a much-debated topic." Outside the government intelligence focus, the University of Oregon has a page dedicated to the theories and definitions of intelligence. There, they cite psychologists and educators Howard Gardner, David Perkins, and Robert Sternberg to assign attributes to intelligence and state that it is a combination of the ability to:

  • Learn
  • Pose Problems
  • Solve Problems

These three attributes are core to any definition of intelligence, whether it's business intelligence, emotional intelligence, or military intelligence. Additionally, the distinctly human component of this process, for those of you considering artificial intelligence as you read this, is harder to capture but likely exists in the ability to pose and solve problems. Machines can pose and solve problems to an extent, but how they do so sets them apart from humans. More to the point, how each of us poses and solves problems is influenced at some level by bias. That bias is an influence analysts often seek to minimize so that it does not color how we analyze problems and the answers we derive. However, that bias in how we pose and solve problems is likely the only distinctly human component of intelligence. That is a discussion for a longer future piece though.

Further in the University of Oregon piece, different types of intelligence are listed from Gardner, Perkins, and Sternberg. A few are listed below:

  • Linguistic
  • Intrapersonal
  • Spatial
  • Practical
  • Experiential
  • Neural
  • Reflective

These different types of intelligence are not all encompassing and focus on the psychological more than classic government intelligence. However, they offer a more robust view into what it means to be able to process and analyze information, which is in and of itself core to cyber threat intelligence. I gravitate more towards Robert Sternberg's understanding of intelligence and specifically his view of experiential and componential intelligence. According to his 1988 and 1997 writings on intelligence, experiential intelligence is "the ability to deal with novel situations; the ability to effectively automate ways of dealing with novel situations so they are easily handled in the future; the ability to think in novel ways." His understanding of componential intelligence is "the ability to process information effectively. This includes metacognitive, executive, performance, and knowledge-acquisition components that help to steer cognitive processes."

I enjoy these two the most because they seem to map the closest to the idea of intelligence generation and intelligence consumption. In the field of cyber threat intelligence we often hear vendors, security researchers, and companies talk about “threat intel” and standing up teams to do intel-y things but without specific guidance. There is a stark difference in generating intelligence and in consuming it. Most companies are looking for threat intelligence consumption teams (those that can map their organization’s requirements and search for what is available to help drive defense) not threat intelligence generation teams (those individuals who analyze adversary information to extract knowledge which may or may not be immediately useful). A good team is usually the mix of both but with a clear understanding of which one is the priority and which effort is the goal at any given time. Sternberg’s experiential intelligence speaks more to threat intelligence generation whereas his componential intelligence addresses the ability to process, or consume, intelligence. The definitions are not as simple as this but it is thought provoking.

In reviewing Mike and Sergio's excellent blog posts, with the addition of a wider view on intelligence from classical, psychological, and philosophical aspects, certain attributes emerge. These attributes mean that intelligence:

  • Must be analyzed information
    • Performing analysis is a distinctly human trait, likely due to the influence of bias and our efforts to minimize it (i.e., no, $Vendor, your tool does not create intelligence), meaning that intelligence is always up to our interpretation and others may have valuable and even competing interpretations
  • Must meet a requirement
    • Requirements can be wide ranging such as policy, military operations, geo-political, business, friendly forces movements and tactics, or self-awareness; the lack of a requirement would result in intelligence not being useful and by that extension be an inhibitor to intelligence (i.e. overloading analysts with indicators of compromise is not intelligence)
  • Must respect various forms
    • There is no one definition of intelligence but each definition must allow for different ways of interpreting, processing, and using the intelligence

To further qualify to be threat intelligence the presented intelligence must be about threats; threats are not only geo-political in nature but also may encompass insiders. However, I disagree with the notion that there is an unwitting insider threat because the definition of threat I subscribe to must have the following three attributes:

  • Opportunity
    • There must be the ability to do harm. In many organizations this means knowing your systems, people, vulnerabilities, etc.
  • Intent
    • There must be an intention to do harm; unintentional harm can be just as impactful, but it cannot be properly classified as a threat. Understanding adversary intention is difficult, but this is where analysis of the threat landscape comes in
  • Capability
    • The adversary must have some capability to do you harm. This may be malware, it may be PowerShell left running in your environment, or it could be something non-technical such as the means to influence public perception through leaked documents
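These three attributes are conjunctive: remove any one and, under this definition, you no longer have a threat. A toy sketch in Python (the class and field names are mine, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class Actor:
    opportunity: bool  # can reach and affect your systems
    intent: bool       # intends to do harm
    capability: bool   # has the means (malware, access, influence)

def is_threat(actor: Actor) -> bool:
    # All three attributes must be present; an unwitting insider
    # (no intent) therefore does not qualify as a threat.
    return actor.opportunity and actor.intent and actor.capability
```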

Therefore, I use the following definition, heavily inspired by classic definitions, for intelligence: "The process and product resulting from the interpretation of raw data into information that meets a requirement." The product may be knowledge, it may be a report, it could be tradecraft of an adversary, etc. Further, I use the following definition for cyber threat intelligence: "The process and product resulting from the interpretation of raw data into information that meets a requirement as it relates to the adversaries that have the intent, opportunity, and capability to do harm." (Note that in this definition of cyber threat intelligence the adversary is distinctly human. Malware isn't the threat; the human or organization of humans intending you harm is the threat.) Each definition is concise but open-ended enough to serve multiple purposes beyond military intelligence.

I in no way think that this solves any aspect of this debate. And I do not feel that my definitions actually conflict with what Mike and Sergio have put forward; they are instead meant simply as an extension of the topic. Mike and Sergio are both extremely competent individuals that I am privileged to call my friends, peers, and, on numerous occasions, mentors. Their blogs inspired me to explore the topic for myself, and this blog was simply my way of opining on my findings. I hope it has been useful in some manner to your own exploration.

Common Analyst Mistakes and Claims of Energy Company Targeting Malware

July 13, 2016

A new blog post by SentinelOne made an interesting claim recently regarding a "sophisticated malware campaign specifically targeting at least one European energy company." More extraordinary, though, was the company's claim that this find might indicate something much more serious: malware "which could either work to extract data or insert the malware to potentially shut down an energy grid." While that is a major analytical leap (we'll come back to this), the next thing to occur was fairly predictable: media firms spinning up about a potential nation-state cyber attack on power grids.

I have often critiqued news organizations for their coverage of ICS/SCADA security when there was a lack of understanding of the infrastructure and its threats, but this sample of hype originated from SentinelOne's bold claims and not the media organizations (although I would have liked to see the journalists validate their stories more). News headlines ranged from "Researchers Found a Hacking Tool that Targets Energy Grids on the Dark Web" to eWeek's "Furtim's Parent, Stuxnet-like Malware, Aimed at Energy Firms." It's always interesting to see how long it takes for an organization to compare malware to Stuxnet; this one seems to have won the race in terms of "time-to-Stuxnet." The worst headline, though, was probably The Register's: "SCADA malware caught infecting European energy company: Nation-state fingered". No, this is not SCADA malware, and no nation-states have been fingered (phrasing?).

The malware is actually not new and had been detected before the company's blog post. The specific sample SentinelOne linked to, and claimed to have found, was first submitted to VirusTotal by an organization in Canada on April 21st, 2016. A similar sample was later identified and posted to the KernelMode forum on April 25th, 2016 (credit to John Franolich for bringing it to my attention). On May 23rd, 2016 a KernelMode forum user posted some great analysis of the malware on their blog. The KernelMode users and blogger identified that one of the malware author's command and control servers was misconfigured, revealing a distinct naming convention in the directories that very clearly seemed to correlate to infected targets. In total, over 15,000 infected hosts around the world had communicated with this command and control server. This puts the malware SentinelOne claimed was specifically targeting an energy company in a completely different light; it is most certainly not ICS/SCADA or energy-company specific. It's possible energy companies are a target, but so far no proof of that has been provided.

I do not have access to the dataset that SentinelOne has so I cannot and will not critique them on all of their claims. However, I do find a lot of the details they have presented odd and I also do not understand their claims that they “validated this malware campaign against SentinelOne [their product] and confirmed the steps outlined below [the malware analysis they showed in their blog] were detected by our Dynamic Behavior Tracking (DBT) engine.” I’m all for vendors showcasing where their products add value but I’m not sure how their product fits into something that was submitted to VirusTotal and a user forum months before their blog post. Either way, let’s focus on the learning opportunities here to help educate folks on potential mistakes to avoid.

Common Analyst Mistake: Malware Uniqueness

A common analyst mistake is to look at a dataset and believe that malware that is unique in their dataset is actually unique. In this scenario, it is entirely possible that with no ill-intention whatsoever SentinelOne identified a sample of the malware independent from the VirusTotal and user forum submission. Looking at this sample and not having seen it before the analysts at the company may have made the assumption that the malware was unique and thus warranted their statement that this campaign was specifically targeting an energy company. The problem is, as analysts we always work off of incomplete datasets. All intelligence analysis operates from the assumption that there is some data missing or some unknowns that may change a hypothesis later on. This is one reason you will often find intelligence professionals give assessments (high, medium, or low confidence assessments usually) rather than making definitive statements. It is important to try to realize the limits of our datasets and information by looking to open source datasets (such as searching on Google to find the previous KernelMode forum post in this scenario) or establishing trust relationships with peers and organizations to share threat information. In this scenario the malware was not unique and determining that there were at least 15,000 victims in this campaign would add doubt that a specific energy company was the target of the campaign. Simply put, more data and information was needed.
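That habit of hedged assessments can be made concrete. A hypothetical sketch — the thresholds below are invented for illustration and do not come from any formal analytic standard:

```python
def confidence_label(corroborating_sources: int, known_gaps: int) -> str:
    """Map the state of the evidence to the hedged language analysts use.

    Thresholds here are illustrative only; real tradecraft weighs source
    reliability and analytic assumptions, not just counts.
    """
    if corroborating_sources >= 3 and known_gaps == 0:
        return "high confidence"
    if corroborating_sources >= 2:
        return "moderate confidence"
    return "low confidence"
```

The point is not the arithmetic but the output: an assessment labeled with a confidence level, never a definitive statement made from an incomplete dataset.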

Common Analyst Mistake: Assuming Adversary Intent

As analysts we often get familiar with adversary campaigns and capabilities at an almost intimate level, knowing details ranging from behavioral TTPs to the way that adversaries run their operations. But one thing we as analysts must be careful of is assuming an adversary's intent. Code, indicators, TTPs, capabilities, etc. can reveal a lot. They can reveal what an adversary may be capable of doing and they should reveal the potential impact to a targeted organization. It is far more difficult, though, to determine what an adversary wishes to do. If an adversary crashes a server, an analyst may believe the malicious actor wanted to deny service to it whereas the actor just messed up. In this scenario the SentinelOne post stopped short of claiming to know what the actors were trying to do (I'll get to the power grid claims in a following section), but the claim that the adversary specifically targeted the European energy company is not supported anywhere in their analysis. They do a great job of showing malware analysis but do not offer any details around the target nor how the malware was delivered. Sometimes malware infects networks that are not even the adversary's target. Assuming the adversary intends to be inside specific networks or to take specific actions is a risky move; doing so with little to no evidence is even worse.

Common Analyst Mistake: Assuming “Advanced” Means “Nation-State”

It is natural to look at something we have not seen before in terms of tradecraft and tools and assume it is "advanced." It's a perspective issue based on what the analyst has seen before. It can lead to analysts assuming that something particularly cool must be so advanced that it's a nation-state espionage operation. In this scenario, the SentinelOne blog authors make that claim. Confusingly, though, they do not seem to have even found the malware on the energy company's network they referenced. Instead, the SentinelOne blog authors claimed to have found the malware on the "dark web". This means that there would not have been accompanying incident response data or security operations data to support a full understanding of this intrusion against the target, if we assume the company was a target. There are non-nation-states that run operations against organizations. HackingTeam was a perfect example of a hackers-for-hire organization that ran very well-funded operations. SentinelOne presents some interesting data, and along with other data sets this could reveal a larger campaign or even potentially a nation-state operation – but nothing presented so far supports that conclusion. A single intrusion does not make a campaign, and espionage-type activity with "advanced" capabilities does not guarantee the actors work for a nation-state.

Common Analyst Mistake: Extending Expertise

When analysts become the expert on their team in a given area, it is common for colleagues to look to them as experts in a number of other areas as well. As analysts we should not only continually develop our professional skills but also challenge ourselves to learn the limits of our expertise. That can be very difficult when others look to us for advice on any given subject, but being the smartest person in the room on a topic does not make us an expert on it, or mean we even have a clue what we're talking about. In this scenario, I have no doubt that the SentinelOne blog authors are very qualified in malware analysis. I do, however, severely question whether they have any experience at all with industrial and energy networks. The claim that the malware could be used to "shut down an energy grid" shows a complete lack of understanding of energy infrastructure as well as a major analytical leap from a very limited data set that is, quite frankly, inexcusable. I do not mean to be harsh, but this is hype at its finest. At the end of their blog the authors note that anyone in the energy sector who would like to learn more can contact them directly. If anyone decides to take them up on the offer, please do not assume any expertise in that area, be critical in your questions, and realize that the blog post reads like a marketing pitch.

Closing Thoughts

My goal in this blog post was not to critique SentinelOne's analysis too much, although to be honest I am a bit stunned by the opening statement regarding energy grids. Instead, it was to take the opportunity to identify some common mistakes that we all can make as analysts. It is always useful to take reports like these and, without malice, tear apart the analysis to find the knowledge gaps, assumptions, biases, and analyst mistakes. Going through this process can help make you a better analyst. In fairness, the only reason I know a lot about common analyst mistakes is because I've made a lot of rookie mistakes at one point or another in my career. We all do. The trick is usually to try not to make a public spectacle out of it.

IRONGATE Malware – Lessons Learned for ICS/SCADA Defenders

June 2, 2016

I first posted this blog on the SANS ICS blog here.


FireEye uncovered a new piece of ICS malware and released details on it today, and their approach, both to the public and in pre-briefing the media, has been outstanding. The malware is not in the wild and is not a threat to the industry, but it offers lessons learned, and I believe the FireEye/Mandiant team's handling of it deserves a good nod. This blog post will explain the background context to the malware, the details of the malware with my thoughts, and why it is important.

On April 25th I noticed that the S4 European agenda listed a presentation by Rob Caldwell of the Mandiant ICS team on new ICS malware, and I posted a blog here with some thoughts on what this meant for the community. Personally, I'm excited about DigitalBond's inaugural ICS conference in Europe (the long-running S4 in the US and Japan has always been a valued contribution to the larger community). I will offer one small critique, though, that is not meant to be negative in tone. The DigitalBond website lists the malware as being in the "wild"; the ICS malware that Mandiant released details about this morning, titled IRONGATE, is not in the wild, in my opinion. "In the wild" does not have a strict definition, so I don't think DigitalBond did anything wrong here; they were being extremely good stewards of the community by bringing this information forward. But when I consider malware "in the wild" I look for something actively infecting organizations. I was fortunate enough to exchange messages with Dale Peterson, who runs DigitalBond, and his understanding of the importance of the malware, while avoiding hype, matched my own. Shortly stated, he is spot on with his approach, and I'm looking forward to the presentation on the malware at his conference. It is important to keep in mind, though, that this malware is more of a proof-of-concept or security research project, but still an important one. I will cover the details in the next section.

After posting my blog I was able to get some insight into the malware from the Mandiant team under an embargo. I normally wouldn't talk openly about "behind the scenes" discussions, but I want to call attention to this for a very heartfelt kudos to the Mandiant ICS team. I'm not a journalist and could in no way help the FireEye/Mandiant public relations (PR) effort regarding the malware they were releasing. Like it or not, this kind of PR has value to companies, so capitalizing on it is a big motivation for any company. Yet they wanted to run their analysis by me for outside critique. This is a lesson many companies would benefit from: no matter how expert your staff is, it is always valuable to get an outside opinion to ensure you are defeating biases and groupthink. Luckily for both the Mandiant team and myself, their assessment matched my own thoughts. They called attention to why the malware was important but were careful to note that it is not technically advanced, it is not an active threat, and there should be caution against overhyping the malware itself.

Details on IRONGATE
IRONGATE is the name of a family of malware that the FireEye Labs Advanced Reverse Engineering (FLARE) team identified in late 2015. The malware targets a simulated Siemens control system environment and replicates the type of man-in-the-middle attack seen in Stuxnet: it records normal traffic to a PLC and plays it back to an HMI, masking whatever the process is actually doing. It has a number of decent capabilities, such as anti-virtual-machine checks and swapping a legitimate Dynamic Link Library (DLL) for a malicious one. In many ways it looks like a cool project, although not a technically advanced one. The most important aspect of the malware itself is that it shows the authors' interest in, and understanding of, impacting ICS. The full report by FireEye gives good coverage of the malware. Moving past the technical paper, though, there are a few key points to extract about the malware itself. The following is my analysis and should not be considered definitive:

  • The malware is the payload portion and not the delivery mechanism; the delivery mechanism has not been identified and likely does not exist. The malware looks to me like a research project, a penetration testing tool, or a capability developed to test a security product for ICS networks. This means it is standalone, and I do not believe we will ever see it or a derivative in the wild, although we may see the tactic it demonstrates in a different capability from different authors in the future.
  • The main portions of the malware are written in Python, and Mandiant identified the malware by searching VirusTotal. While the discovery was in 2015, the malware was submitted through the web interface (not automatically through an API) from Israel in 2014. The combination of a manual upload, the malware not being in the wild (i.e., actively infecting sites), and a tool written in Python against a simulated environment makes me think the malware is a penetration testing or security product demo tool rather than a proof-of-concept from a capable adversary. Generally speaking, APTs do not write tools in Python and submit them to VirusTotal.
  • The malware's attention to ICS and its focus on mimicking capabilities present in Stuxnet reinforced what many of us in the community knew: ICS is a viable target, and attackers are getting smarter about how to impact ICS with ICS-specific knowledge sets. The unique nature of ICS offers defenders many advantages in countering adversaries, but it is not enough. You cannot rest on "ICS is unique" or "ICS is hard to figure out" as a defense mechanism. It is a great vantage point for defenders, but it must be taken advantage of or adversaries will overcome it.

Having downplayed the malware a bit, the question remains: is this important, and why? The answer is yes.

Why Is This Important?
I’m personally not a huge fan of naming malware, especially malware not found in the wild, so I will admit I’m not crazy about naming this tool “IRONGATE,” but this is the state of the industry today and I would expect the same from any security company. The malware is not in the wild and, in my opinion, looks more like a research tool than a bearer of things to come. So why is it important? Simply put, we do not know a lot about ICS-specific capabilities and malware as a community. In other words, we do not have much insight into the ICS threat landscape. We often see overhyped and inaccurate incident counts (Dell’s 2015 review stating there were hundreds of thousands of ICS cyber attacks) or abysmally low ones (the ICS-CERT’s metrics of ~250 incidents a year, which are primarily reported by the intelligence community and not asset owners themselves). Somewhere between those two numbers is the truth about how many cyber incidents occur each year in the ICS community. A much smaller portion of those incidents are targeted espionage, theft, or attacks. And yet we know of only three pieces of malware specifically tailored to ICS: Stuxnet, HAVEX, and BlackEnergy2. Over-focusing on the malware instead of the human threats who intend harm is a mistake, but malware is still the tool of choice for many adversaries, so learning about these tools and the tradecraft of the adversaries behind them is extremely important. Yet we are lacking insight into those data sets. This makes the IRONGATE malware more interesting in the ICS community than it would be in an IT security discussion.
Here are, in my opinion, the three most important takeaways from this piece of malware.

  1. The malware was submitted to VirusTotal in 2014 and no security vendor alerted on it. Against dozens of security products, none flagged it as malicious, even though it was written in Python, had a module titled scada.exe, and was obviously malicious. If ICS-related malware is sitting undiscovered in public data sets scanned by vendors who focus entirely on detecting malware, then you can be sure there is malware in ICS environments today that we have not even begun to identify and understand.
  2. The Mandiant ICS team at FireEye released a number of pieces of technical information, such as MD5 hashes and sample names, for this malware family. Although I disagree with calling them indicators of compromise (IOCs), because nothing was compromised, releasing the technical details is an important exercise. With this information, could a standard ICS organization search its network traffic and host information for matches against this malware? I would argue the significant majority of the industry could not, and that’s a serious issue. This is likely researcher malware or a pentesting tool, and we as a community could not search community-wide for it. Although I’m a huge advocate of the amazing work being done in the industry today, we are simply behind where we need to be. If we cannot search our environments for a possible pentester’s malware, we are in no position to find the top capabilities of foreign intelligence teams.
  3. Nothing displayed in this capability, or in any capability we have seen to date (Stuxnet, HAVEX, and BlackEnergy2, as well as non-targeted capabilities such as Slammer and Conficker), would be undetectable to a human analyst. We need security products in our environments, although we cannot rely solely upon them. The passive defenses that filter out Conficker infections and commodity ransomware are important for removing noise that human analysts shouldn’t be focusing on. But at the end of the day it must be human defenders against human adversaries that secure the ICS. That active defense approach of monitoring the ICS and responding to real threats is required; nothing about the capabilities to date would have bypassed human defenders. The harsh truth is that many organizations simply are not looking at their ICS networked environment, and that as a community we are not where we need to be today. The inspiring reality is that we can change the industry quickly by empowering and training our people to counter the threats in our environments.
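The question in point 2 above (could your organization actually sweep its hosts for the published hashes?) takes only a dozen lines of standard-library Python to answer. This is a minimal host-side sketch assuming hashed-file indicators only; the hash below is a placeholder, not one of the real IRONGATE MD5s from the FireEye report:

```python
import hashlib
from pathlib import Path

# Placeholder entry: substitute the MD5s published in the FireEye
# IRONGATE report. This value is NOT a real indicator.
KNOWN_MD5S = {"00000000000000000000000000000000"}

def md5_of(path):
    """Hash a file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def sweep(root):
    """Yield every file under `root` whose MD5 matches a known sample."""
    for path in Path(root).rglob("*"):
        if path.is_file() and md5_of(path) in KNOWN_MD5S:
            yield path
```

The point is not that this script is sophisticated; it is that many ICS organizations lack even the access and processes to run something this simple against their environments, which is the gap worth closing.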

In closing, I would caution the community not to overhype IRONGATE. The Mandiant ICS team did not overhype it, nor should journalists. It could be a resume-generating event to go to your management and claim that the malware itself is a reason for security investments at the company. But this is still an important find. Going to decision makers and ICS organizations and simply asking, “How would you find this if it was in your network?” is an important question and exercise. Hype is not needed to show why this is worth your time to discuss and counter. Ultimately, as we examine this malware and capabilities like it, we as a community will move toward better understanding the threat landscape and countering the capabilities we have not yet discovered. Defense is doable, but you actually have to do something: you cannot just buy a product to place on the network and claim victory.