Russian Election Meddling, GRIZZLYSTEPPE, and Bananas

August 17, 2017

It’s been a while since I’ve been able to post to my blog (as it turns out, doing a Series A raise for my company Dragos has been time consuming, so I apologize for the absence in writing). But it is fitting that my first blog post in a while has something to do with the GRIZZLYSTEPPE report. I almost got sucked back into writing when I saw the Defense Intelligence Agency (DIA) tweet out the Norse cyber attack map.

Matt jumped on it pretty quickly though which was great.

I tried to fill in the person running the account just in case they didn’t understand why folks were less than excited about their presentation.

But in their responses to me it seemed they didn’t fully understand. They articulated that they use unclassified data for the conference but classified data at work. Of course, the problem wasn’t the data (even though it’s not just unclassified but completely bad/fake data); it’s that a cyber attack map, aka a “pew pew map,” is not a good way to communicate to any audience, as it’s simply a marketing ploy. However, it’s not worth a full blog post, so I’ll instead just ask everyone to do their homework (it should only take a quick Google search) on why pew pew maps are stupid and why everyone serious in the discussion should stop using them.

On To the Main Discussion

But on to the main topic. What do Russian election meddling, the GRIZZLYSTEPPE report, and bananas all have in common? Absolutely nothing. Each is completely unrelated to the others, and people should stop putting any of them together as it ultimately just makes people look silly (to be fair, no one’s associated bananas with the election interference yet, but it might be a better correlation than the GRIZZLYSTEPPE report).

This discussion was all spawned by an article that the New York Times released on August 16th, 2017 titled “In Ukraine, a Malware Expert Who Could Blow the Whistle on Russian Hacking.” Spoiler alert: he can’t. I went on a bit of a Twitter rant to explain why the article wasn’t good (it can be found here), but I felt it was a complex and important enough topic to cover in a blog.

The NYT piece posits that a hacker known by his alias “Profexer” was responsible for writing the P.A.S. tool and is now a witness for the FBI after coming forward to Ukrainian police. The P.A.S. tool, the article puts forward, was leveraged by Russia’s intelligence services without his knowledge (not sure how he can be a “witness” then, but I digress). The authors of the article previously explicitly stated P.A.S. was used in the break-in of the Democratic National Committee (DNC), but they had to issue a correction to that. (To their credit, folks from NYT reached out to me after I critiqued the piece on Twitter to try to get the story correct after it was published. I asked for the correction, as I’m sure others did, but in reading the updated article the correction doesn’t actually address the larger issues, so I wanted to cover them here in the blog.)

 

Figure 1: Correction Related to P.A.S. and the DNC

Where did they get this assertion that P.A.S. was used in the DNC breach? By tying the GRIZZLYSTEPPE report (which does note that P.A.S. has been used by Russian security service members before) to the DNC breach. The GRIZZLYSTEPPE report has nothing to do with the DNC breach, though; it was a collection of technical indicators the government compiled from multiple agencies all working different Russian-related threat groups. The threat group that compromised the DNC was Russian, but not all Russian groups broke into the DNC. The GRIZZLYSTEPPE report was also highly criticized for its lack of accuracy and lack of a clear message and purpose. I covered it here on my blog, and that critique was also picked up by numerous journalists and covered elsewhere. In other words, there’s no excuse for not knowing how widely criticized the GRIZZLYSTEPPE report was before citing it as good evidence in a NYT piece. Interestingly, the journalists didn’t even link to the “Enhanced Analysis” version of the GRIZZLYSTEPPE report, which was published afterwards (and is actually much better) as a response to the critiques of the first one.

A major issue exists with the correction to the NYT article, though: it changes the entire point of the story. If Profexer isn’t actually a “witness” to the case because P.A.S. wasn’t used in the DNC breach, then what’s the message the journalists are trying to get across? Someone who wasn’t working with the Russians, developed a tool that the Russians didn’t use in the DNC case, and didn’t have any insight into any of the Russian threat groups or campaigns cannot be a good witness.

Even after the correction, though, the journalists draw the reader’s attention to the breach early and often to continue to reinforce that this gives new insight into that case.

Figure 2: Snippet from NYT Article Referencing DNC Breach and Profexer

And again the journalists explicitly state that Profexer is somehow a witness to what occurred and tie him back to the election hacking.

Figure 3: Snippet from NYT Article Claiming Profexer is a Witness

The article goes on to note how this changes our thoughts on the Russian groups (APT28 / APT29, aka FANCYBEAR / COZYBEAR) and how they operate; the journalists state that using publicly available tools or outsourcing tool development to cyber criminals is against the modus operandi (MO) of the Russian security services. I do not know where the journalists get this claim, and they do not source it. I disagree with it; the burden of proof is on them to show where this previous MO was established, and I’ll simply note that there have been numerous publications and reports showcasing Russian threat groups, including the security services, using other groups’ and people’s tools and exploits. This isn’t new information, and it’s fairly common for many threat groups to operate in this way.

The attribution on APT28 and APT29 is some of the most solid attribution the community has ever done. Numerous cybersecurity firms have covered these groups, including FireEye, CrowdStrike, Kaspersky, TrendMicro, and F-Secure, but we’ve also had government attribution before by the German intelligence services on a breach into their government that pre-dates the DNC breach. A cursory look will reveal that organizations have been tracking this Russian threat activity for about a decade now. Yet none of the people who’ve actually covered these groups were cited in the NYT article. Instead the journalists chose to cite Jeffrey Carr, and his quote is confusing to most readers because he is trying to detract from the attribution when he states: “there is not now and never has been a single piece of technical evidence produced that connects the malware used in the D.N.C. attack to the G.R.U., F.S.B. or any agency of the Russian government.” It’s almost as if the journalists just wanted a contrarian view to look balanced, but what an odd selection if the goal wasn’t just to set up their witness to be even more important.

I want to be very clear on my next critique: I actually don’t think Jeffrey Carr is a bad person. I know he ruffles the feathers of a lot of folks in the community (mine included at times), but on the two occasions I’ve met him in person he’s been absolutely nice to me, civil, and well spoken. That being said, he is not an expert on attribution, is not an expert on these groups, and has no reason to be cited in conjunction with them. He’s widely criticized in the community when he tries to do attribution, and his attempts are often painfully full of 101-level intelligence analysis failures. The NYT didn’t do him any favors by including him in this article, and it seriously detracted from the idea that they understood enough about this topic to cover it. Simply stated: “cyber” is not an expertise. If you are covering a niche topic like attribution, or a further niche like Russian group attribution, you need to use folks who have experience in that subject matter.

Please Stop Arguing About Attribution Without Expertise In It

This is a bit of a big request, but it’d be very useful if people stopped taking a stance on whether attribution is difficult, or whether a given attribution is right, if they have never had experience doing attribution. This is important because the journalists in this article seem to want to help bolster the case against the Russian intelligence services yet make it more confusing. At one point they try to set up their witness as some new smoking gun to be added to the case as a push back to people like President Trump.

Figure 4: Snippet from NYT Article Setting Up the Importance of the “Witness”

Attribution is not about having a smoking gun. Attribution is a good example of true intelligence analysis; there are no certainties, and you can only come to an assessment such as low, moderate, or high confidence. Almost every single piece of data put forward in that assessment can and should have counters to it, very reasonable counters as well. It’s why anyone arguing for attribution on the basis of a single piece of evidence almost always loses the argument or looks silly. It’s very rarely about one piece of evidence; it’s about the analysis over the total data set. The attribution levied toward Russia for meddling in the U.S. elections is solid. The reason President Trump and others don’t want to accept that has nothing to do with the fact that there hasn’t been a witness or a “single piece of technical evidence produced that connects the malware used in the D.N.C. attack to the G.R.U.”; it is because they do not want to accept the conclusion or the reality it presents. There’s nothing that’s going to change this. I’m convinced that if President Putin came out and said “yeah, it was us” we’d have critics coming forward saying it’s a false flag operation and actually not true.
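To make this concrete, here is a toy sketch (with entirely hypothetical numbers, not the actual DNC evidence) of why an assessment over a total data set holds up even when each individual indicator is easy to argue against:

```python
from math import prod

# Hypothetical illustration: three independent indicators, each only
# 70% likely under the assessed hypothesis vs. 30% under the
# alternative. Any one of them is weak and easily countered.
prior_odds = 1.0                      # start neutral at 1:1 odds
likelihood_ratios = [0.7 / 0.3] * 3   # each indicator is only ~2.3:1

# Combine: posterior odds are the prior times every likelihood ratio.
posterior_odds = prior_odds * prod(likelihood_ratios)
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"{posterior_prob:.2f}")  # ~0.93: weak signals, strong combined case
```

Knock out any single one of those weak indicators and the combined assessment still sits well above 80%, which is why picking apart one data point in isolation misses the point of the analysis.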

But what’s the problem with people arguing these points? It detracts from an already solid assessment. It’s similar to when the FBI released IP addresses and some technical indicators during the Sony hack to talk about how they knew it was North Korea. I critiqued that approach when it happened here. The basis of my argument was that the FBI’s attribution to North Korea was likely correct, but their presentation of the evidence as proof was highly misleading. Obviously the FBI didn’t just use those technical indicators to do the attribution, so how could anyone be expected to look at those and be convinced? And rightfully so, people came out and argued against those technical indicators, noting they could easily be wrong and that adversaries of any origin could have leveraged the IP addresses for their operations. The critiques were correct: the technical evidence in isolation was not good. The totality of the data set, though, and the analysis on top of it were very sound.

I often think of this like climate change arguments. You can have 100 scientists with careers in climate studies put forth an assessment, and then two people with absolutely no experience argue about the subject. One of the people arguing for the climate scientists’ position might grab a single data point to argue, and now the person arguing against them is arguing against an uninformed opinion on a single data point instead of the combined analysis and work of the scientists. The two people arguing both leave understandably feeling like they won: the original assessment by the scientists was likely right, but the person arguing against the data point was also probably right about that data point. The only people who lost in this debate were the scientists, who weren’t involved in the argument and whose research wasn’t properly presented.

Closing Thoughts

I never like to just rant about things; I try to use these opportunities as things to learn from. All of this is actually extremely relevant to my SANS FOR578 – Cyber Threat Intelligence course, so a lot of times I write these blog posts and reference them in class. So with that theme in mind, here are the things I want you to extract from this blog as learning moments (for my students, the journalists, and whomever else finds it valuable).

  • If you are doing research/writing on niche topics please find people with expertise in that niche topic (Jeffrey Carr is not an expert on attribution)
  • If you are going to posit that the entire public understanding of a nation-state group’s MO has changed because of a single piece of evidence, you’re likely wrong (do more homework)
  • If you are going to posit that there is a witness that can change the narrative about a case, please talk to people familiar with the case (determine if that type of evidence is even important)
  • If you are going to write on a topic that is highly controversial research the previous controversy first (GRIZZLYSTEPPE was entirely unrelated to the DNC case)
  • Attribution is not done with single pieces of evidence or a smoking gun; it is done as analysis on complex data sets, most of which are not even technical (attribution is hard but doable)
  • The most interesting data for attribution isn’t highly classified but instead just hard work/analysis on complex scenarios (classification standards don’t imply accuracy or relevancy)
  • Just because someone’s code was used by an adversary does not imply the author knows anything about how it was used or by whom (the threat is the human not the malware)
  • Stop using pew pew maps (seriously just stop; it makes you look like an idiot)

 

Marketing ICS Vulnerabilities and POC Malware – You’re Doing it Wrong

April 30, 2017

There have been two cases recently of industrial control system (ICS) security firms identifying vulnerabilities and also creating proof-of-concept (POC) malware for those vulnerabilities. This has bothered me, and I want to explore the topic in this blog post; I do not pretend there is a right or wrong answer here, but I recognize that by even writing the post I am passing judgement on the actions, and I’m OK with that. I don’t agree with the actions, and in the interest of a more public discussion, below is my rationale.

Background:

At the beginning of April 2017, CRITIFENCE, an ICS security firm, published an article on Security Affairs titled “ClearEnergy ransomware aim to destroy process automation logics in critical infrastructure, SCADA, and industrial control systems.” There’s a good overview of the story here that details how it ended up being a media stunt to highlight vulnerabilities the company found. The TL;DR version of the story is that the firm wanted to highlight vulnerabilities they found in some Schneider Electric equipment that they dubbed ClearEnergy, so they built their own POC malware for it that they also dubbed ClearEnergy. But they published an article on it leaving out the fact that it was POC malware. In other words, they led people to believe that this was in-the-wild (real and impacting organizations) malware. I don’t feel there was any malice by the company; as soon as the article was published I reached out to the CTO of CRITIFENCE, and he was very polite and responded that he’d edit the article quickly. I wanted to write a blog calling out the behavior and what I didn’t like about it as a learning moment for everyone, but the CTO was so professional and quick in his response that I decided against it. However, after seeing a second instance of this type of activity, I decided a blog post was in order for a larger community discussion.

On April 27th, 2017, SecurityWeek published an article titled “New SCADA Flaws Allow Ransomware, Other Attacks” based on a presentation by ICS security firm Applied Risk at SecurityWeek’s 2017 Singapore ICS Cyber Security Conference. The talk, and the article, highlighted ICS ransomware that the firm dubbed “Scythe” that targets “SCADA devices.” Applied Risk noted that the attack can take advantage of a firmware validation bypass vulnerability and lock out folks’ ability to update to new firmware. The firm did acknowledge, in their presentation and in the article, that this too was POC malware.

 

 


Figure: Image from Applied Risk’s POC Malware

Why None of this is Good (In My Opinion):

In my opinion both of these firms have been irresponsible in a couple of ways.

First, CRITIFENCE obviously messed up by not telling anyone that ClearEnergy was POC malware. In an effort to promote their discovery of vulnerabilities they were quick to write an article and publish it, and that absolutely contributed to hype and fear. Hype around these types of issues ultimately leads to the ICS community not listening to or trusting the security community (honestly, with good reason). However, what CRITIFENCE did do that I liked (besides being responsive, which is a major plus) was work through a vulnerability disclosure process that led to proper discussion by the vendor as well as an advisory through the U.S. ICS-CERT. In contrast, Applied Risk did not do that so far as I can tell. I do not know everything Applied Risk is doing about the vulnerabilities, but they said they contacted the vendors, and two of the vendors (according to the SecurityWeek article) acknowledged that the vulnerability is important but difficult to fix. The difference with the Applied Risk vulnerabilities is that the community is left unaware of what devices are impacted, the vendors haven’t been able to address the issues yet, and there are no advisories to the larger community through any appropriate channel. Ultimately, this leaves the ICS community in a very bad spot.

Second, CRITIFENCE and Applied Risk are both making a marketing spectacle out of the vulnerabilities and POC malware. Now, this point is my opinion and not necessarily a larger community best practice, but I absolutely despise seeing folks name their vulnerabilities or their POC malware. It comes off as a pure marketing stunt. Vulnerabilities in ICS are not uncommon, and there’s good research to be done. Sometimes the things the infosec community sees as vulnerabilities were designed that way on purpose, to allow things like firmware updates and password resets for operators who need access to sensitive equipment in time-sensitive scenarios. I’m not saying we can’t do better, but it’s not like the engineering community is stupid (far from it), and highlighting vulnerabilities as marketing stunts can often have unintended consequences, including vendors not wanting to work with researchers or disclose vulnerabilities. There’s no incentive for ICS vendors to work with firms who are going to use issues in their products as marketing for the firm’s security product.

Third, vulnerability disclosure can absolutely have the impact of teaching adversaries how to attack devices in ways they did not know previously. I do not advocate security through obscurity, but there is value in following a strict vulnerability disclosure policy even in normal IT environments; this has been an issue for decades. In ICS environments, it can take upwards of 2-3 years for folks to get a patch and apply it after a vulnerability comes out. That is not due to ignorance of the issue or lack of concern for the problem but due to operations constraints in various industries. So in essence, adversaries get informed about how to do something they previously didn’t know about while system owners can’t adequately address the issues. This makes vulnerability disclosure in the ICS community a very sensitive topic to handle with care. Yelling out to the world “this is the vulnerability, and oh by the way here’s exactly how you should leverage it, and we even created some fake malware to highlight the value to you as an attacker and what you can gain” is just levels of ridiculousness. It’s why you’ll never see my firm Dragos, myself, or anyone on my team finding and disclosing new vulnerabilities in ICS devices to the public. If we ever find anything, it’ll only be worked through the appropriate channels and quietly distributed to the right people, not on media sites or in conference presentations. I’m not a huge fan of disclosing vulnerabilities at conferences and in the media in general, but I do want to acknowledge that it can be done correctly, and I have seen a few firms (Talos, DigitalBond, and IOActive come to mind) do it very well. As an example, Eireann Leverett and Colin Cassidy found vulnerabilities in industrial Ethernet switches and worked closely with the vendors to address them. After working through a very intensive process, they wanted to do a series of conference presentations to highlight the issues. They invited me to take part to show what could be done from a defense perspective. So I stayed out of the “here’s the vulnerabilities” portion and instead focused on “these exist, so what can defenders do besides just patch.” I took part in that research because the work was so important and Eireann and Colin were so professional in how they went about it. It was a thrill to use the entire process as a learning opportunity for the larger community. Highlighting vulnerabilities and creating POC malware for something that doesn’t even have an advisory, or where the vendor hasn’t made patches yet, just isn’t appropriate.

Closing Thoughts:

There is a lot of research to be done into ICS and how to address the vulnerabilities these devices have. Vendors must get better at following best practices for developing new equipment, software, and networking protocols. And there are good case studies of what to do and how to carry yourself in the ICS security research community (Adam Crain and Chris Sistrunk’s research into DNP3, and all the things it led to, is a fantastic example of doing things correctly to address serious issues). But the focus on turning vulnerabilities into marketing material, discussing vulnerabilities in the media and at conferences before vendors have addressed them and before the community can get an advisory through proper channels, and creating/marketing POC malware to draw attention to your vulnerabilities is, in my opinion, simply irresponsible.

Try these practices instead:

  • Disclose vulnerabilities to the impacted vendors and work with them to address them
    • If they decide that they will not address the issue or do not see the problem, talk to impacted asset owners/operators to confirm that what you see as a vulnerability will actually introduce risk to the community, and use appropriate channels such as ICS-CERT to push the vendor, or develop defensive/detection signatures and bring it to the community’s attention; sometimes you’re left without a lot of options, but make sure you’ve exhausted the good options first
  • After the advisory has been available (for however long you feel comfortable with), if you or your firm would like to highlight the vulnerabilities at a conference or in the media, that’s your choice
    • I would encourage focusing the discussion on what people can do besides just patch such as how to detect attacks that might leverage the vulnerabilities
  • Avoid naming your vulnerabilities; there’s already a whole official process for cataloging vulnerabilities
  • (In my opinion) do not make POC malware showing adversaries what they can do and why they should do it (the argument “the adversaries already know” is wrong in most cases)
  • If you decide to make POC malware anyway, at least avoid naming and marketing it (it comes off as an extremely dirty marketing approach)
  • Avoid hyping up the impact (talking about power grids coming down and terrorist attacks in the same article is just a ridiculous attempt to elicit fear and attention)

In my experience, ICS vendors can be difficult to work with at times because they have other priorities too, but they care and want to do the right thing. If you are persistent, you can move the community forward. But the vendors of the equipment are not the enemy, and they will absolutely blacklist researchers, firms, and entire groups of folks for doing things that are adverse to their business instead of working with them. Research is important, and if you want to go down the route of researching and disclosing vulnerabilities, there’s value there and there are proper ways to do it. If you’re interested in vulnerability disclosure best practices in the larger community, check out Katie Moussouris, who is a leading authority on bug bounty and vulnerability disclosure programs. But please: stop naming your vulnerabilities, building marketing campaigns around them, and creating fake malware because you don’t think you’re getting enough attention.

Analytical Leaps and Wild Speculation in Recent Reports of Industrial Cyber Attacks

December 31, 2016

“Judgement is what analysts use to fill gaps in their knowledge. It entails going beyond the available information and is the principal means of coping with uncertainty. It always involves an analytical leap, from the known into the uncertain.”

– Chapter 4, Psychology of Intelligence Analysis, Richards J. Heuer.

 

Analytical leaps, as Richards J. Heuer said in his must-read book Psychology of Intelligence Analysis, are part of the process for analysts. Sometimes, though, these analytical leaps can be dangerous, especially when they are biased, misinformed, presented in a misleading way, or otherwise not made using sound analytical processes. Analytical leaps should be backed by evidence, or at a minimum should include the evidence leading up to the leap. Unfortunately, when multiple analytical leaps are made in series, they can lead to entirely wrong conclusions and wild speculation. There have been three interesting stories relating to industrial attacks this December, as we try to close out 2016, that are worth exploring on this topic. It is my hope that looking at these three cases will help everyone be a bit more critical of information before alarmism sets in.

The three cases that will be explored are:

  • IBM Managed Services’ claim of “Attacks Targeting Industrial Control Systems (ICS) Up 110%”
  • CyberX’s claim that “New Killdisk Malware Brings Ransomware Into Industrial Domain”
  • The Washington Post’s claim that “Russian Operation Hacked a Vermont Utility, Showing Risk to U.S. Electrical Grid Security, officials say”

 

“Attacks Targeting Industrial Control Systems (ICS) Up 110%”

I’m always skeptical of metrics that come with no immediate quantification. As an example, IBM Managed Security Services posted an article stating that “attacks targeting industrial control systems increased over 110 percent in 2016 over last year’s numbers as of Nov. 30.” But there is no data in the article to quantify what that means. Is a 110% increase an increase from 10 attacks to 21 attacks? Or from 100 attacks to 210 attacks?

The only way to understand what that percentage means is to leave the article, go download IBM’s report from last year, and read through it (never make your reader jump through extra hoops to get information that is your headline). In their 2015 report, IBM states that there were around 1,300 attacks in 2015 (Figure 1). That would mean that in 2016 IBM is reporting they saw around 2,700 ICS attacks.
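For illustration, here is a quick sketch (using the report’s rough numbers) of why the bare percentage is meaningless without its baseline:

```python
# A percentage increase only becomes meaningful alongside its baseline.
def implied_count(baseline: int, pct_increase: float) -> int:
    """Return the count implied by a percentage increase over a baseline."""
    return round(baseline * (1 + pct_increase / 100))

# With IBM's 2015 baseline of roughly 1,300 attacks:
print(implied_count(1300, 110))  # 2730, i.e. the ~2,700 figure for 2016

# But without that baseline, "110%" could describe a far smaller change:
print(implied_count(10, 110))    # 21
```

Same headline number, wildly different absolute stories, which is exactly why the baseline belongs in the article itself.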


Figure 1: Figure from IBM’s 2015 Report on ICS Attacks

 

However, a few questions linger. First, this is a considerable jump from what they were tracking previously and from their 2014 metrics. IBM states that the “spike in ICS traffic was related to SCADA brute-force attacks, which use automation to guess default or weak passwords.” This is an analytical leap they make based on what they’ve observed. But it would be nice to know whether anything else changed as well. Did they bring up more sensors, add more customers, increase staffing, etc.? The stated reason alone would not account for the increase.

Second, how is IBM defining an attack? Attacks in industrial contexts have a very specific meaning; an attempt to brute-force a password simply wouldn’t qualify. They also note that a pentesting tool that could be used against the ICS protocol Modbus was released on GitHub in January 2016. IBM states that the increase in their metrics was likely related to this tool’s release. That is speculation, though, as they do not give any evidence to support the claim. However, it leads to my next point.

Third, is this customer data or honeypot data? If it’s customer data, is it from the ICS or simply the business networks of industrial companies? And if it’s honeypot data, it would be good to separate it out, as honeypot data has often been abused to misreport “SCADA attack” metrics. From the discussion of brute-force logins and a pentesting tool for a serial protocol released on GitHub, my speculation is that this is referring mostly to honeypot data. Honeypots can be useful, but they must be used in specific ways when discussing industrial environments and should not be lumped into “attack” data from customer networks.

The article also makes another analytical leap when it states, “The U.S. was also the largest target of ICS-based attacks in 2016, primarily because, once again, it has a larger ICS presence than any other country at this time.” The leap does not seem informed by anything other than the hypothesis that the US has more ICS. And again, there is no quantification: where is this claim coming from, how much larger is the US ICS presence than other countries’, and is the quantity of attacks proportional to the US ICS footprint when compared to other nations’ quantities of industrial systems? I would again speculate that what they are observing has far more to do with where they are collecting data (how many sensors do they have in the US compared to, say, China?).

In closing out the article, IBM cites three “notable recent ICS attacks.” The three case studies chosen were the SFG malware that targeted an energy company, the New York dam, and the Ukrainian power outage. While the Ukrainian power outage is good to highlight (although they don’t actually highlight the ICS portion of the attack), the other two cases are poor choices. The SFG malware claim was already debunked publicly, which would have been easy to find prior to writing the article. The New York dam incident was also largely hyped by the media and publicly downplayed as well. More worrisome is that the way IBM framed the New York dam “attack” is incorrect. They state: “attackers compromised the dam’s command and control system in 2013 using a cellular modem.” Except it wasn’t the dam’s command and control system; it was a single read-only human machine interface (HMI) watching the water level of the dam. The dam had a manual control system (i.e., you had to crank it to open it).

Or more simply put: the IBM team is likely doing great work and likely has people who understand ICS; you just wouldn’t get that impression from reading this article. The information is largely inaccurate, there is no quantification of their numbers, and their analytical leaps are unsupported, with some obvious lingering questions as to the source of the data.

 

“New Killdisk Malware Brings Ransomware Into Industrial Domain”

CyberX released a blog noting that they have “uncovered new evidence that the KillDisk disk-wiping malware previously used in the cyberattacks against the Ukrainian power grid has now evolved into ransomware.” This is a cool find by the CyberX team, but they don’t release hashes or any technical details that could be used to help validate it. And the find isn’t actually new: I’m a bit confused as to why CyberX states they uncovered this new evidence when their own blog cites an ESET article with the same discovery from weeks earlier (I imagine they found an additional strain, but they don’t clarify that). ESET had disclosed the new variant of KillDisk being used by a group they call the TeleBots gang and noted they found it being used against financial networks in Ukraine. So where’s the industrial link? Well, there is none.

CyberX’s blog never details how they are making the analytical leap from “KillDisk now has a ransomware functionality” to “and it’s targeting industrial sites.” Instead, it appears the entire basis for their hypothesis is that Sandworm previously used KillDisk in the Ukraine ICS attack in 2015. While this is true, the Sandworm team has never just targeted one industry. iSight and others have long reported that the Sandworm team has targeted telecoms, financial networks, NATO sites, military personnel, and other non-industrial related targets. But it’s also not known for sure that this is still the Sandworm team. The CyberX blog does not state how they are linking Sandworm’s attacks on Ukraine to the TeleBots usage of ransomware. Instead they just cite ESET’s assessment that the teams are linked. But ESET even stated they aren’t sure and it’s just an assessment based off of observed similarities.

Or more simply put: CyberX put out a blog saying they uncovered new evidence that KillDisk had evolved into ransomware, although they cite ESET’s discovery of this evidence from weeks prior with no other evidence presented. They then make the claim that the TeleBots gang, the one using the ransomware, evolved from Sandworm, but they offer no evidence and instead again just cite ESET’s assessment. They offer absolutely no evidence that this ransomware KillDisk variant has targeted any industrial sites. The logic seems to be “Sandworm did Ukraine, KillDisk was in Ukraine, Sandworm is the TeleBots gang, TeleBots modified KillDisk to be ransomware, therefore they are going to target industrial sites.” When doing analysis always be aware of Occam’s razor and do not make too many assumptions to try to force a hypothesis to be true. There could someday be evidence of ransomware targeting industrial sites; it does make sense that adversaries would eventually go there. But no evidence is offered in this article and both the title and thesis of the blog are completely unfounded as presented.


“Russian Operation Hacked a Vermont Utility, Showing Risk to U.S. Electrical Grid Security, officials say”

This story is more interesting than the others, but it is too early to really know much. The only thing known at this point is that the media is already overreacting. The Washington Post put out an article on a Vermont utility getting hacked by a Russian operation, with calls from the Vermont Governor condemning Vladimir Putin for attempting to hack the grid. Eric Geller pointed out that the first headline the Post ran with was “Russian hackers penetrated U.S. electricity grid through utility in Vermont, officials say” but they changed it to “Russian operation hacked a Vermont utility, showing risk to U.S. electrical grid, officials say.” We don’t know exactly why it was changed, but it may have been because the Post overreacted when it heard the Vermont utility found malware on a laptop and simply assumed it was related to the electric grid. Except, as the Vermont (Burlington) utility pointed out, the laptop was not connected to the organization’s grid systems.

Electric and other industrial facilities have plenty of business and corporate network systems that are often not connected to the ICS network at all. It’s not good for them to get infected, and they aren’t always disconnected, but it’s not worth alarming anyone over without additional evidence.  However, the bigger analytical leap being made is that this is related to Russian operations.

The utility notes that they took the DHS/FBI GRIZZLY STEPPE report indicators and found a piece of malware on the laptop. We do not know yet if this is a false positive, but even if it is not there is no evidence yet to say that this has anything to do with Russia. As I pointed out in a previous blog, the GRIZZLY STEPPE report is riddled with errors and the indicators put out were very non-descriptive data points. The one YARA rule they put out, which the utility may have used, was related to a piece of malware that is publicly downloadable, meaning anyone could use it. Unfortunately, after the story ran with its hyped-up headlines Senator Patrick Leahy released a statement condemning the “attempt to penetrate the electric grid” as a state-sponsored hack by Russia. As Dmitri Alperovitch, CTO of CrowdStrike, the firm that responded to the Russian hack of the DNC, pointed out: “No one should be making attribution conclusions purely from the indicators in the USCERT report. It was all a jumbled mess.”

Or more simply put: a Vermont utility acted appropriately and ran indicators of compromise from the GRIZZLY STEPPE report as the DHS/FBI instructed the community to do. This led to them finding a match to an indicator on a laptop separated from the grid systems, though it has not yet been confirmed that malware was present. The Vermont Governor Peter Shumlin then publicly chastised Vladimir Putin and Russia for trying to hack the electric grid. U.S. officials then inappropriately gave additional information and commentary to the Washington Post about an ongoing investigation, which led the Post to run with the headline that this was a Russian operation. After all, the indicators supposedly were related to Russia because the DHS and FBI said so – and supposedly that’s good enough. Unfortunately, this also led a U.S. Senator to come out and condemn Russia for state-sponsored hacking of the utility.

Closing Thoughts

There are absolutely threats to industrial environments including ICS/SCADA networks. It does make sense that ICS breaches and attacks would be on the rise, especially as these systems become more interconnected. It also makes perfect sense that ransomware will be used in industrial environments just like any other environment that has computer systems. And yes, the attribution of the DNC compromise to Russia is very solid based on private sector data with government validation. But to make claims about attacks and attempt to quantify them, you actually have to present real data, state where that data came from, and explain how it was collected. To make claims of new ransomware targeting industrial networks you have to actually provide evidence, not simply make a series of analytical leaps. And to start making claims of attribution to a state such as Russia just because some poorly constructed indicators alerted on a single laptop is dangerous.

Or more simply put: be careful of analytical leaps, especially when they are made without presenting any evidence leading into them. Hypotheses and analysis require evidence; otherwise they are simply speculation. We have enough speculation already in the industrial industry, and more will only lead to increasingly dangerous or embarrassing scenarios, such as a US governor and senator condemning Russia for hacking the electric grid and scaring the public in the process when we simply do not have many facts about the situation yet.

Critiques of the DHS/FBI’s GRIZZLY STEPPE Report

December 30, 2016

On December 29th, 2016 the White House released a statement from the President of the United States (POTUS) that formally accused Russia of interfering with the US elections, amongst other activities. This statement laid out the beginning of the US’ response including sanctions against Russian military and intelligence community members.  The purpose of this blog post is to specifically look at the DHS and FBI’s Joint Analysis Report (JAR) on Russian civilian and military Intelligence Services (RIS) titled “GRIZZLY STEPPE – Russian Malicious Cyber Activity”. For those interested in a discussion on the larger purpose of the POTUS statement and surrounding activity take a look at Thomas Rid’s and Matt Tait’s Twitter feeds for good commentary on the subject.

Background to the Report

For years there has been solid public evidence by private sector intelligence companies such as CrowdStrike, FireEye, and Kaspersky that has called attention to Russian-based cyber activity. These groups have been tracked for a considerable amount of time (years) across multiple victim organizations. The latest high profile case relevant to the White House’s statement was CrowdStrike’s analysis of COZYBEAR and FANCYBEAR breaking into the DNC and leaking emails and information. These groups are also known by FireEye as the APT28 and APT29 campaign groups.

The White House’s response is ultimately a strong and accurate statement. The attribution towards the Russian government was confirmed by the US government using their sources and methods on top of good private sector analysis. I am going to critique aspects of the DHS/FBI report below but I want to make a very clear statement: POTUS’ statement, the multiple government agency response, and the validation of private sector intelligence by the government is wholly a great response. This helps establish a clear norm in the international community although that topic is best reserved for a future discussion.

Expectations of the Report

Most relevant to this blog, the lead in to the DHS/FBI report was given by the White House in their fact sheet on the Russian cyber activity (Figure 1).


Figure 1: White House Fact Sheet in Response to Russian Cyber Activity

The fact sheet lays out very clearly the purpose of the DHS/FBI report. It notes a few key points:

  • The report is intended to help network defenders; it is not the technical evidence of attribution
  • The report contains a combination of private sector data and declassified government data
  • The report will help defenders identify and block Russian malware – this is specifically declassified government data not private sector data
  • The report goes beyond indicators to include new tradecraft and techniques used by the Russian intelligence services

If you are anything like me, you became very excited reading the above. This was a clear statement from the White House that they were going to help network defenders, give out a combination of previously classified data as well as validate private sector data, release information about Russian malware that was previously classified, and detail new tactics and techniques used by Russia. Unfortunately, while the intent was laid out clearly by the White House, that intent was not captured in the DHS/FBI report.

Because what I’m going to write below is blunt feedback, I want to note ahead of time that I’m doing this for the purpose of the community as well as for government operators/report writers who read to learn and become better. I understand that it is always hard to publish things from the government. In my time working in the U.S. Intelligence Community on such cases it was extremely rare that anything was released publicly, and when it was it was almost always disappointing, as the best material and information had been stripped out. For that reason, I want to especially note, and say thank you to, the government operators who did fantastic work and tried their best to push out the best information. For those involved in the sanitization of that information and the report writing – well, read below.

DHS/FBI’s GRIZZLY STEPPE Report – Opportunities for Improvement

Let’s explore each main point that I created from the White House fact sheet to critique the DHS/FBI report and show opportunities for improvement in the future.

 The report is intended to help network defenders; it is not the technical evidence of attribution

There is no mention of a focus on attribution in any of the White House’s statements. Across multiple statements from government officials and agencies it is clear that the technical data and attribution will be a report prepared for Congress and later declassified (likely prepared by the NSA). Yet, the GRIZZLY STEPPE report reads like a poorly done vendor intelligence report stringing together various aspects of attribution without evidence. The beginning of the report (Figure 2) specifically notes that the DHS/FBI has avoided attribution before in their JARs but that based on their technical indicators they can confirm the private sector attribution to RIS.


Figure 2: Beginning of DHS/FBI GRIZZLY STEPPE JAR

The next section is the DHS/FBI description which is entirely focused on APT28 and APT29’s compromise of “a political party” (the DNC). Here again they confirm attribution (Figure 3).


Figure 3: Description Section of DHS/FBI GRIZZLY STEPPE JAR

But why is this so bad? Because it does not follow the intent laid out by the White House and confuses readers into thinking that this report is about attribution rather than its intended purpose of helping network defenders. The public is looking for evidence of the attribution, the White House and the DHS/FBI clearly laid out that this report is meant for network defense, and yet the entire discussion in the document is on how the DHS/FBI confirms that APT28 and APT29 are RIS groups that compromised a political party. The technical indicators they released later in the report (which we will discuss more below) are in no way related to that attribution though.

Or said more simply: the written portion of the report has little to nothing to do with the intended purpose or the technical data released.

Even worse, page 4 of the document notes other groups identified as RIS (Figure 4). This would be exciting normally. Government validation of private sector intelligence helps raise the confidence level of the public information. Unfortunately, the list in the report detracts from the confidence because of the interweaving of unrelated data.


Figure 4: Reported RIS Names from DHS/FBI GRIZZLY STEPPE Report

As an example, the list contains campaign/group names such as APT28, APT29, COZYBEAR, Sandworm, Sofacy, and others. This is exactly what you’d want to see, although the government’s justification for this assessment is completely lacking (for a better exploration of the topic of naming see Sergio Caltagirone’s blog post here). But as the list progresses it becomes worrisome, as it also contains malware names (HAVEX and BlackEnergy v3 as examples), which are different from campaign names. Campaign names describe a collection of intrusions into one or more victims by the same adversary. Those campaigns can utilize various pieces of malware, and sometimes malware is consistent across unrelated campaigns and unrelated actors. It gets worse though when the list includes things such as “Powershell Backdoor”. This is not even a malware family at this point but instead a classification of a capability that can be found in various malware families.

Or said more simply: the list of reported RIS names includes relevant and specific names such as campaign names, more general and often unrelated malware family names, and extremely broad and non-descriptive classification of capabilities. It was a mixing of data types that didn’t meet any objective in the report and only added confusion as to whether the DHS/FBI knows what they are doing or if they are instead just telling teams in the government “contribute anything you have that has been affiliated with Russian activity.”


The report contains a combination of private sector data and declassified government data

This is a much shorter critique but still an important one: there is no way to tell what data was private sector data and what was declassified government data. Different data types have different confidence levels. If you observe a piece of malware on your network communicating to adversary command and control (C2) servers, you would feel confident using that information to find other infections in your network. If someone randomly passed you an IP address without context, you might not be sure how best to leverage it, or be generally cautious about doing so, as it might generate alerts of a non-malicious nature and waste your time investigating them. In the same way, it is useful to know what is government data from previously classified sources and what is data from the private sector, and more importantly who in the private sector. Organizations will have different trust or confidence levels in the different types of data and where it came from. Unfortunately, this is entirely missing. The report does not source its data at all. It’s a random collection of information and, in that way, is mostly useless.

Or said more simply: always tell people where you got your data, separate it from your own data which you have a higher confidence level in having observed first hand, and if you are using other people’s campaign names, data, analysis, etc. explain why so that analysts can do something with it instead of treating it as random situational awareness.


The report will help defenders identify and block Russian malware – this is specifically declassified government data not private sector data

The lead in to the report specifically noted that information about the Russian malware was newly declassified and would be given out; this is contrary to other statements that the information was part private sector and part government data. When looking through the technical indicators, though, there is little context to the information released.

In some locations in the CSV the indicators are IP addresses with a request to network administrators to look for them, and in other locations there are IP addresses with just the country they were located in. This information is nearly useless for a few reasons. First, we do not know what data set these indicators belong to (see my previous point: are these IPs for “Sandworm”, “APT28”, “Powershell”, or what?). Second, many (30%+) of these IP addresses are mostly useless as they are VPS, TOR exit nodes, proxies, and other non-descriptive internet traffic sites (you can use this type of information, but not in the way positioned in the report, and not well without additional information such as timestamps). Third, IP addresses as indicators, especially when associated with malware or adversary campaigns, must contain information around timing, i.e. when were these IP addresses associated with the malware or campaign and when were they in active usage? IP addresses and domains are constantly getting shuffled around the Internet and are mostly useful when seen in a snapshot of time.
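To make this concrete, here is a minimal Python sketch of the triage a defender ends up doing when indicators arrive with or without context. The record format and field names here are entirely hypothetical illustrations, not the JAR's actual schema:

```python
def triage_indicator(indicator: dict) -> str:
    """Classify an indicator as actionable or low-confidence noise.

    An IP is only really actionable when we know the campaign it belongs
    to, the time window it was active in, and that it is not shared
    infrastructure (TOR exits, VPS, proxies) that anyone could be using.
    """
    has_campaign = bool(indicator.get("campaign"))
    has_window = bool(indicator.get("first_seen") and indicator.get("last_seen"))
    is_shared_infra = indicator.get("infrastructure") in {"tor_exit", "vps", "proxy"}
    if has_campaign and has_window and not is_shared_infra:
        return "actionable"
    return "low-confidence"

# A bare IP, like most rows in the GRIZZLY STEPPE CSV (value is a doc-example IP):
bare = {"value": "203.0.113.7", "type": "ipv4"}

# What a defender actually needs to hunt rather than whack-a-mole:
contextual = {
    "value": "203.0.113.8", "type": "ipv4", "campaign": "APT29",
    "first_seen": "2016-04-01", "last_seen": "2016-06-15",
    "infrastructure": "dedicated_c2",
}

print(triage_indicator(bare))        # low-confidence
print(triage_indicator(contextual))  # actionable
```

Without the campaign and timing fields, the best a network team can do with a match is block and investigate manually, which is exactly the problem described above.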

But let’s focus on the malware specifically which was laid out by the White House fact sheet as newly declassified information. The CSV does contain information for around 30 malicious files (Figure 5). Unfortunately, all but two have the same problems as the IP addresses in that there isn’t appropriate context as to what most of them are related to and when they were leveraged.


Figure 5: CSV of Indicators from the GRIZZLY STEPPE Report

What is particularly frustrating is that this might have been some of the best information if done correctly. A quick look in VirusTotal Intelligence reveals that many of these hashes were not being tracked previously as associated to any specific adversary campaign (Figure 6). Therefore, if the DHS/FBI was to confirm that these samples of malware were part of RIS operations it would help defenders and incident responders prioritize and further investigate these samples if they had found them before. As Ben Miller pointed out, this helps encourage folks to do better root cause analysis of seemingly generic malware (Figure 6).


Figure 6: Tweet from Ben Miller on GRIZZLY STEPPE Malware Hashes

So what’s the problem? All but the two hashes stated to belong to the OnionDuke family lack the appropriate context for defenders to leverage them. Without knowing what campaign they were associated with, and when, there is not enough information for defenders to investigate these discoveries on their network. They can block the activity (play the equivalent of whack-a-mole) but not leverage it for real defense without considerable effort. Additionally, the report specifically said this was newly declassified information. However, looking up the samples in VirusTotal Intelligence (Figure 7) reveals that many of them were already known, dating back to April 2016.


Figure 7: VirusTotal Intelligence Lookup of One Digital Hash from GRIZZLY STEPPE

The only thing that would thus be classified about this data (note they said newly declassified and not private sector information) would be the association of this malware to a specific family or campaign instead of leaving it as “generic.” But as noted, that information was left out. It’s also not fair to say it’s all “RIS” given the DHS/FBI’s inappropriate aggregation of campaign, malware, and capability names in their “Reported RIS” list. As an example, they used one name from their “Reported RIS” list (OnionDuke), and thus some of the other samples might map to other entries on that list as well, such as “Powershell Backdoor”, which is wholly non-descriptive. Either way we don’t know, because they left that information out. Also, as a general pet peeve, the hashes are sometimes given as MD5, sometimes as SHA1, and sometimes as SHA256. It’s ok to choose whatever standard you want if you’re giving out information, but be consistent in the data format.
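For illustration, a short Python sketch of the minimum a publisher could do before releasing mixed hashes: label each one by algorithm. Inferring the type from hex length is a common convention, not anything from the report itself:

```python
import re

# Hex-string lengths for the common digest algorithms.
HASH_LENGTHS = {32: "md5", 40: "sha1", 64: "sha256"}

def classify_hash(h: str) -> str:
    """Label a hash string by its likely algorithm, or flag it as invalid."""
    h = h.strip().lower()
    if not re.fullmatch(r"[0-9a-f]+", h):
        return "invalid"
    return HASH_LENGTHS.get(len(h), "unknown")

# Well-known digests of the empty input, used here purely as format examples:
samples = [
    "d41d8cd98f00b204e9800998ecf8427e",                                   # MD5
    "da39a3ee5e6b4b0d3255bfef95601890afd80709",                           # SHA1
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",   # SHA256
]
for s in samples:
    print(classify_hash(s), s)
```

Even this trivial labeling step would have let consumers of the CSV route each indicator to the right tooling instead of guessing at the format row by row.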

Or more simply stated: the indicators are not very descriptive and will have a high rate of false positives for defenders that use them. A few of the malware samples are interesting and now have context (OnionDuke) to their use, but the majority do not have the required context to make them useful without considerable effort by defenders. Lastly, some of the samples were already known and the government information does not add any value – if these were previously classified it is a perfect example of overclassification by government bureaucracy.


The report goes beyond indicators to include new tradecraft and techniques used by the Russian intelligence services

The report was to detail new tradecraft and techniques used by the RIS and specifically noted that defenders could leverage this to find new tactics and techniques. Except – it doesn’t. The report instead gives a high-level overview of how APT28 and APT29 have been reported to operate which is very generic and similar to many adversary campaigns (Figure 8). The tradecraft and techniques presented specific to the RIS include things such as “using shortened URLs”, “spear phishing”, “lateral movement”, and “escalating privileges” once in the network. This is basically the same set of tactics used across unrelated campaigns for the last decade or more.


Figure 8: APT28 and APT29 Tactics as Described by DHS/FBI GRIZZLY STEPPE Report

This description in the report wouldn’t be a problem for a more generic audience. If this was the DHS/FBI trying to explain to the American public how attacks like this were carried out, it might even be too technical, but it would be ok. The stated purpose, though, was for network defenders to discover new RIS tradecraft. For that purpose it is not technical or descriptive enough and is simply a rehashing of common network defense knowledge. Moreover, if you read a technical report from FireEye on APT28 or APT29 you would have better context and technical information for defense than if you read the DHS/FBI document.

Closing Thoughts

The White House’s response and combined messaging from the government agencies is well done, and the technical attribution provided by private sector companies has been solid for quite some time. However, the DHS/FBI GRIZZLY STEPPE report does not meet its stated intent of helping network defenders and instead chooses to focus on a confusing assortment of attribution, non-descriptive indicators, and re-hashed tradecraft. Additionally, the bulk of the report (8 of the 13 pages) consists of general high-level recommendations that are not descriptive of the RIS threats mentioned and are not linked to any aspect of the technical data covered. It simply serves as an advertisement of documents and programs the DHS is trying to support. One recommendation for Whitelisting Applications might as well read “whitelisting is good mm’kay?” If that recommendation had been overlaid with what it would have stopped in this campaign specifically, and how defenders could then leverage that information going forward, it would at least have been descriptive and useful. Instead it reads like a copy/paste of DHS’ most recent documents – at least in a vendor report you usually only get 1 page of marketing instead of 8.

This ultimately seems like a very rushed report put together by multiple teams working different data sets and motivations. It is my opinion and speculation that there were some really good government analysts and operators contributing to this data, and then report reviews, leadership approval processes, and sanitization processes stripped out most of the value and left behind a very confusing report trying to cover too much while saying too little.

We must do better as a community. This report is a good example of how a really strong strategic message (POTUS statement) and really good data (government and private sector combination) can be opened to critique due to poor report writing.


Update:

The DHS released an updated version which I thought did a great job; my analysis of it can be found here: https://www.sans.org/webcasts/104402

New Suspected Cyber Attack on Ukraine Power Grid – Advice as Information Emerges

December 19, 2016

Reporting in Ukraine has emerged indicating another suspected cyber attack on the electric grid (the first being the confirmed one in 2015). Initial reporting is often inaccurate or a small view of incidents but it’s worth cautiously watching and seeing what information emerges. Here’s what we know so far:

Reports of Suspected Cyber Attack:
Around noon of December 19th, 2016 reports began to surface related to a possible cyber attack on the Ukraine electric grid. The attack is suspected to have taken place near midnight local Ukraine time on the 17th. The Pivnichna transmission-level substations have been called out as possibly being the site attacked.  This is of course concerning for numerous reasons including the cyber attack on the Ukraine grid in December 2015 as well as traditional ongoing military actions in Ukraine. The reporting is from various Ukrainian sources including a press release from the impacted company Kyivenergo confirming that there was an unintentional outage and that they took actions to restore operations.

Analysis:
The first 24 and often 48 hours of reporting are notoriously bad for OSINT analysts but should still be utilized. Simply leverage caution and do not present information as facts yet. At this point I would assess with low confidence that a cyber attack has occurred. This is not to say there is doubt around the event, only that there are other theories that have equal weighting until more evidence is available. However, based on the sourcing of the information (internal Ukraine sources) and the Ukrainian grid operators’ experience dealing with a similar situation last year, I have a higher trust level in the sources (thus the low confidence assessment that the attack is real). We will learn more later and it may be revealed that the outage was not related to a cyber attack; however, I am aware of an ongoing investigation by Ukrainian authorities and they are treating a cyber attack as the leading theory for the outage. I will caution again, though, that no one with direct knowledge of the attack has confirmed that it is a cyber attack; only that it is the leading theory and the disconnect was unintentional.

What Should Be Done:
Right now the best action for those not on the ground or working at infrastructure companies is to wait and see if more information is revealed. Journalists should be cautious not to jump to conclusions, and those in the security community should stay tuned for more information. I would recommend journalists contact sources in the area but realize that the information is very preliminary and those not on the ground in Ukraine will have very little to add to knowledge of the situation.

If you are in the infrastructure (ICS/SCADA) security community it would be wise to use established channels to send decision makers a situational awareness report on the news; I would note it’s a low confidence assessment currently due to lack of first-hand evidence but that it is a situation worth watching. This should be paired with security staff taking an active defense posture of monitoring the ICS network for abnormal activity. Preliminary information from the investigation underway by the Ukrainian authorities indicates that a remote attack is suspected. I would stay far away from linking this to the Sandworm attack currently (attribution right now is not possible), but I would review the methods by which they achieved the remote attack on Ukraine last year and use that information to hunt for threats. As an example, look in logs for abnormal VPN session lengths, increased frequency of use, and unusual connection request times.
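As a rough illustration of that kind of hunt, here is a hedged Python sketch. The log schema, baseline, and thresholds are entirely hypothetical; any real deployment would tune them to the environment and the VPN product's actual log format:

```python
from statistics import mean, stdev

def flag_anomalous_sessions(history_hours, new_sessions, threshold=3.0):
    """Flag VPN sessions that deviate sharply from an account's baseline.

    history_hours: past session durations (hours) for one account.
    new_sessions:  list of (duration_hours, start_hour_utc) tuples.
    Flags a session if its duration is more than `threshold` standard
    deviations above the historical mean, or if it started in the
    off-hours window (00:00-04:59 UTC in this sketch).
    """
    mu, sigma = mean(history_hours), stdev(history_hours)
    flagged = []
    for duration, start_hour in new_sessions:
        z = (duration - mu) / sigma if sigma else float("inf")
        if z > threshold or start_hour in range(0, 5):
            flagged.append((duration, start_hour))
    return flagged

baseline = [1.0, 1.5, 2.0, 1.2, 1.8, 1.4]  # typical 1-2 hour sessions
recent = [(1.6, 14), (9.5, 2)]             # one normal, one long off-hours session
print(flag_anomalous_sessions(baseline, recent))
```

This is deliberately simple; the point is that hunting from last year's TTPs means building per-account baselines and looking for deviations, not waiting for a signature match.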

If you happen to be a customer of Dragos, Inc. you will have received a notification already with some recommendations for strategic, operational, and tactical level players. Check your portal and be on the lookout for a briefing request coming from us if you would like to attend remotely. For the wider community, ensure that you are wary of phishing attempts taking advantage of this possible attack.

In Closing:
My chief recommendation is for everyone to avoid alarmism and utilize this as an opportunity to review logs and information from the ICS and search for TTPs we’ve seen before, such as remote usage of the ICS through legitimate accounts, VPNs, and remote desktop capabilities. If this attack turns out to be real, it is unlikely to involve anything novel that couldn’t have been detected. It’s important to remember that defense is doable – now go do it.

Threats of Cyber Attacks Against Russia: Rationale on Discussing Operations and the Precedent Set

November 6, 2016

Reports that the U.S. government has military hackers ready to carry out attacks on Russian critical infrastructure have elicited a wide range of responses on social media. After I tweeted the NBC article a number of people responded with how stupid the U.S. was for releasing this information, or what poor OPSEC it was to discuss these operations, and even how this constitutes an act of war. I want to use this blog to put forth some thoughts of mine on those specific claims. However, I want to note in advance that this is entirely my opinion. I wouldn’t consider this quality analysis or even insightful commentary, but instead just my thoughts on the matter that I felt compelled to share since I work in critical infrastructure cyber security and was at one point a “military hacker.”

The Claim

The claim stems from an NBC article and notes that a senior U.S. intelligence official shared top-secret documents with NBC News. These top-secret documents apparently indicated that the U.S. has “penetrated Russia’s electric grid, telecommunications networks and the Kremlin’s command systems, making them vulnerable to attack by secret American cyber weapons should the U.S. deem it necessary.” I’m going to make the assumption that this was a controlled leak given the way that it was presented. Additionally, I make this assumption because of the senior officials that were interviewed for the wider story, including former NATO commander (ret) ADM James G. Stavridis and former CYBERCOM Judge Advocate (ret) COL Gary Brown, who likely would not have touched a true “leak”-driven story without some sort of blessing to do so. In other words, before anyone suggests this was some sort of mistake: it was very likely authorized by the President at the request of senior officials or advisers such as the Director of National Intelligence or the National Security Council. The President is the highest authority for deeming material classified or not, and if he decided to release this information it’s an authorized leak. Going off this assumption, let’s consider three claims that I’ve seen recently.

The U.S. is Stupid for Releasing This Information

It is very difficult to know the rationale behind actions we observe. This is especially true in cyber intrusions and attacks. If an adversary happens to deny access to a server, did they intend to, or was it accidentally brought down while performing other actions? Did the adversary intend to leave behind references to file paths and identifying information, or was it a mistake? These debates around intent and observation are a challenge many analysts must carefully work through. This case is no different.

Given the assumption that this is a controlled leak, it was obviously done with the intention of one or more outcomes. In other words, the U.S. government wanted the information out, and the rationale is likely as varied as the members involved. While discussing a “government” it’s important to remember that the decision was ultimately in the hands of individuals, likely two dozen at most. Their recommendations, biases, views on the world, insight, experience, etc. all contribute to what they expect the leak to accomplish. This makes it even more difficult to assess why a government would do something, since it’s more important to know the key members in the Administration, the military, and the Intelligence Community and their motivations than the historical understanding of government operations and similar decisions. Considering the decision was likely not ‘stupid’ but made for some intended purpose, let’s explore what two of those purposes might be:

Deterrence

I’m usually not the biggest fan of deterrence in the digital domain, as it has so far not been very effective and the qualities needed for a proper deterrent (a credible threat and an understood threshold) are often lacking. Various governments talk about red lines and the actions they might take if those red lines are crossed, but what exactly those red lines are, and what the response will be if they are crossed, is usually never specified. Here, however, the U.S. government has stated a credible threat: the disruption of critical infrastructure in Russia (the U.S. has shown before that it is capable of doing this). It has combined this with a clear threshold for what it does not want its potential adversary to do: do not disrupt the elections. For these reasons my normal skepticism around deterrence is lessened. However, in my own personal opinion this is potentially a side effect and not the primary purpose, especially given the form of communication that was chosen.

Voter Confidence

Relations between Russia and the U.S. this election have been tense. Posturing and messaging between the two states has taken a variety of forms, both direct and indirect. This release to NBC is interesting: it would be indirect messaging if positioned toward the Russian government, but direct messaging if intended for U.S. voters. My personal opinion (read: speculation) is that it is much more intended for the voters. At one point in the article NBC notes that Administration officials revealed to them that they delivered a “back channel warning to Russia against any attempt to influence next week’s vote”. There’s no reason to reiterate a back channel message in a public article unless the intended audience (in this case the voters) wasn’t aware of the back channel warning. The article reads as an effort by the Administration to tell the voters: “don’t worry and go vote; we’ve warned them that any effort to disrupt the elections will be met with tangible attacks instead of strongly worded letters.”

It’s really interesting that this type of messaging to the American public is needed. Cyber security has never been such a mainstream topic before, especially not during an election. This may seem odd to those in the information security community who live with these discussions on a day-to-day basis anyway, but coverage of cyber security has never before held the mainstream media’s attention for consistent periods of time. CNN, Fox, MSNBC, and the BBC have all been discussing cyber security throughout the recent election season, ranging from the DNC hacks to Hillary’s emails. That coverage has gotten fairly dark though, with CNN, NBC, Newsweek, and New York Times articles like this one and prime time segments telling voters that the election could be manipulated by Russian spies.

This CNN piece directly calls out the Kremlin for potentially manipulating the elections in a way that combines it with Trump’s claims that the election is rigged. This is a powerful combination. A significant portion of Trump’s supporters will believe his claim of a rigged election, and in conjunction with the belief that Russia is messing with the election it’s easy to see how a voter could become disillusioned. Neither the Democrats nor the Republicans want fewer voters to turn out, and (almost) all of those on both sides want the peaceful transition of power after the election, as has always occurred before. Strong messaging from the Administration and others in the mainstream news media is important to restore voters’ confidence both in the election itself and in the manner in which people vote.

Unfortunately, it seems that this desire is being accidentally countered by some in the security community. In very odd timing, Cylance decided to publish a press release on vulnerabilities in voting machines on the same day, unbeknownst to them, as the NBC article. The press release stated that its intent was to encourage mitigation of the vulnerabilities, but with 4 days until the election, as of the article’s release, that simply will not be possible. The move is likely very well intended but unlikely to give voters much confidence in the manner in which they vote. I’ll avoid a tangent here, but it’s worth mentioning the role security companies can play in larger political discussions.

The Leak is Bad OPSEC

I will not spend as much time on this claim as I did the previous one, but it is worth noting the reaction that releasing this type of information is bad operational security. Operational security is often very important to ensure that government operations can be coordinated effectively without the adversary having the advance warning required to defend against the operation. However, in this case the intention of the leak is likely much more about deterrence or voter confidence, and therefore the operation itself is not the point. Keeping the operation secret would not have helped either potential goal. More importantly, compromising information systems is not something that has ever been seen as insurmountably difficult. For the U.S. government to reveal that it has compromised Russian systems does not magically make them more secure now. Russian defense personnel do not have anything more to go off of than before in terms of searching for the compromise, they likely already assumed they were compromised, and looking for a threat and cleaning it up across multiple critical infrastructure industries and networks would take more than 4 days even if they had robust technical indicators of compromise and insight (which the leak did not give them). The interesting part of the disclosure is not the OPSEC but the precedent it sets, which I’ll discuss in the next section.

The Compromises are an Act of War

Acts of war are governed by Article 2(4) of the United Nations Charter, which addresses the use of force. The unofficial rules regarding war in cyberspace are contained in the Tallinn Manual. In neither of these documents is the positioning of capabilities to do future damage considered an act of war. More importantly, the NBC article notes that the “cyber weapons” have not been deployed yet: “The cyber weapons would only be deployed in the unlikely event the U.S. was attacked in a significant way, officials say.” Therefore, what is being discussed is cyber operations that have gained access to Russian critical infrastructure networks but have not positioned “weapons” to do damage yet. Intrusions into networks have never been seen as an act of war by any of the countries involved in such operations. So what’s interesting about this? The claim by officials that the U.S. had compromised Russian critical infrastructure networks, including the electric grid, years ago.

For years U.S. intelligence officials have maintained that Russian, Chinese, Iranian, and at times North Korean government operators have been “probing” U.S. critical infrastructure such as the power grid. The pre-positioning of malware in the power grid has long been rumored and has been a key concern of senior officials. The acknowledgment, in a possibly intended leak, that the U.S. has been doing the same for years is significant. It should come as no surprise to anyone in the information security community, but as messaging from senior officials it does set a precedent internationally (albeit a small one, given that this is a leak and not a direct statement from the government). Now, if capabilities or intrusions were found in the U.S. power grid in a way that was made public, the offending countries could claim they were only doing the same as the U.S. government. In my personal experience, there is credibility to claims that other countries have been compromising the power grid for years, so I would argue against the “U.S. started it” claim that is sure to follow. The assumption is that governments try to compromise the power grid ahead of time so that when needed they can damage it for military or political purposes. But the specific compromises that have occurred have not been communicated publicly by senior officials, nor have they been attributed to Russia or China. The only time a similar specific case was discussed with attribution was against Iran for compromising a small dam in New York, and that action was heavily criticized by officials and met with a Department of Justice indictment. Senior officials’ acknowledgment of U.S. cyber operations compromising foreign power grids for the purpose of carrying out attacks if needed is unique, and a message likely heard loudly even if later denied.
It would be difficult to argue that the leak will embolden adversaries to do this type of activity if they weren’t already, but it does in some ways make the operations more legitimate. Claiming responsibility for such compromises while indicting countries for doing the same definitely makes the U.S. look hypocritical regardless of how it’s rationalized.

Parting Thoughts

My overall thought is that this information was a controlled leak designed to help voters feel more confident both in going to cast their ballots and in the overall outcome. Some level of deterrence was likely a welcome side effect for the Administration. But no, this was not simply a stupid move, nor was it bad OPSEC or an act of war. I also doubt it is simply a bluff. However, there is some precedent set, and pre-positioning access to critical infrastructures around the world just became a little more legitimate.

One thing that struck me as new in the article was the claim that the U.S. military used cyber attacks to temporarily turn out the lights in Baghdad during the 2003 Iraq invasion. Considering the officials interviewed for the story and the nature of the (again, possibly) controlled leak, that is a new claim from senior government officials. There was an old rumor that Bush had that option on the table when invading Iraq, but that the attack was cancelled for fear of the collateral damage of taking down a power grid. One can never be sure how long “temporary” might be when damaging such infrastructure. The claim in the article that the attack actually went forward would make it the first cyber attack on a power grid to cause outages – not the Ukrainian attack of 2015 (claims of a Brazilian outage years earlier were never proven and seem false from available information). However, the claim is counter to reports at the time that power outages did not occur during the initial hours of the invasion. Power outages were reported in Iraq, but after the end of active combat operations, and looters were blamed. If a cyber attack in Iraq ever made sense militarily it would not have made as much sense after the initial invasion.

I’ve emailed the reporter of the story asking what the source of that claim was and I will update the blog if I get an answer. It is possible the officials stated this to the reporters but misspoke. In my time in the government it was not a rare event for senior officials to confuse details of operations or hear myths outside of the workplace and assume them to be true. Hopefully, I can find out more as that is a historically significant claim. Based on what is known currently I am skeptical that outages following the initial Iraq invasion in 2003 were due to a cyber attack.

A Collection of Resources for Getting Started in ICS/SCADA Cybersecurity

August 28, 2016

I commonly get asked by folks what approach they should take to get started in industrial control system (ICS) cybersecurity. Sometimes these individuals have backgrounds in control systems, sometimes they have backgrounds in security, and sometimes they are completely new to both. I have made this blog to document my thoughts on some good resources to pass along to anyone interested; I will add to it over time as I find other resources I like. Do not attempt to do everything at once, but it’s a good collection to refer back to in an effort to polish up skills or learn a new industry. Rest assured, no matter how ill-prepared you might feel in getting started, realize that by having the passion to ask the question and start down the path you are already steps ahead of most. We need passionate people in the industry; everything else can be taught.

Optional Pre-Reqs

It’s always good to pick up a few skills regarding the fundamentals of computers, networks, and systems in general. I would recommend trying to pick up a scripting language as well; even if you don’t find yourself scripting a lot understanding how scripting works will add a lot of value to your skill set.

  • Learn Python the Hard Way
    • Learn Python the Hard Way is a great free online resource to teach you, step-by-step, the Python scripting language. There are a lot of different opinions about different scripting languages. In truth, most of them have value in different situations, so I’ll leave it to you to pick your own language (and I won’t tell you that you’re wrong for not learning Python, even though you are). Another good programming resource is Code Academy.
  • MIT Introduction to Computer Programming
    • MIT’s open courseware is a treasure for the community. It shocks me how many people do not take advantage of free college classes from top universities. This is the Introduction to Computer Science and Programming course. It should be taken at a slow pace but it’ll give you a lot of fundamental skills.
  • MIT Introduction to Electrical Engineering and Computer Science
    • Another MIT open course but this time focused on electrical engineering. This is a skill that will help you understand numerous types of control systems better as well as have a better grasp on how computers work.
  • Microsoft Virtual Academy
    • Microsoft Virtual Academy can be found at various locations on YouTube. I have linked to the first one; I would recommend browsing through the topic list for everything from fundamentals of networking, to fundamentals of computers, to how the Internet works.
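If you want a quick taste of why scripting is worth the effort, here is a minimal Python sketch. The log lines are made up purely for illustration, but the pattern (filter a pile of text, count what matters) is the kind of five-minute task scripting turns from tedious to trivial.

```python
# A taste of scripting: count failed logins in a few (made-up) log lines.
log_lines = [
    "2016-08-28 10:01:02 sshd: Failed password for root",
    "2016-08-28 10:01:05 sshd: Accepted password for operator",
    "2016-08-28 10:01:09 sshd: Failed password for admin",
]

# Keep only the lines containing the string we care about.
failed = [line for line in log_lines if "Failed password" in line]
print(f"{len(failed)} failed logins out of {len(log_lines)} events")
```

Once this feels natural on three lines, the same script works unchanged on three million lines, which is where the real value of scripting shows up.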

Intro to Control Systems

Control systems run the world around us. Escalators, elevators, types of medical equipment, steering in our cars, and building automation systems are types of control systems you interact with daily. Industrial control systems (ICS) are industrial versions of control systems found in locations such as oil drilling, gas pipelines, power grids, water utilities, petrochemical facilities, and more. This section will go over some useful resources and videos to learn more about industrial control systems.

  • The PLC Professor
    • PLC Professor and his website plcprofessor.com contain a lot of great resources for learning what programmable logic controllers (PLCs) and other types of control systems are, what their logic looks like, and how they work. Some resources are free while others are paid. At some point, getting a physical kit as a trainer to learn on is going to be a requirement.
  • Control System Basics
    • This is a great video explaining control system basics including the type of logic these systems use to sense and create physical changes to take action upon.
  • What is SCADA?
    • You’ve no doubt heard the term SCADA; if you haven’t, you will. It stands for Supervisory Control and Data Acquisition and is a type of ICS. This video is a nice basic approach to explaining SCADA.
  • Department of Energy – Energy 101
    • The Department of Energy has a series of Energy 101 videos to explain basic concepts of different types of energy generation, sources, etc. It’s a fantastic series that should excite you about the field while explaining key terms and concepts.
  • Wastewater Treatment Explanation Video
    • We all need wastewater treatment facilities, and learning about them helps you understand how control systems work and just how complex simple tasks in life would be if we didn’t have control systems. These types of videos are important to watch so that you get exposed to different industries. ICS is not really one community; it’s a collection of communities.
  • Waste Water – Flush to Finish
    • Another good wastewater explanation video.
  • Refinery Crude Oil Process
    • This is a video explaining a refinery crude oil process. If these types of videos don’t excite you to some extent you may be in the wrong career field. The world around us is magnificent and learning different industries will start to help you ask the right questions which will lead to your education on the subject.
  • Natural Gas Processing
    • This is an older video (the industry has definitely become more advanced than represented here) but extremely interesting on how natural gas is harvested, processed, and transferred. Think about all the control systems that have to go into this seemingly simple process.
  • How a Compressor Station Works
    • One particularly interesting (and historically difficult to secure) portion of the ICS community is the natural gas pipeline. This video talks about natural gas to some extent but really focuses on compressor stations. Compressor stations, as remote sites, offer numerous opportunities and challenges to defenders. In short – they’re pretty cool.
  • Chemical Engineering YouTube Channel
    • A great series of videos explaining and showing different components of chemical processing.
  • Steel from Start to Finish
    • This is an example of how steel is made. The video, like the others in this section, shows an important process that can help you understand all that goes into control system security. It’s important to know the real-world impacts and applications of the processes we are trying to defend to fully understand how important safety and reliability are as the main components of industrial automation.
  • Nuclear Reactor Explained
    • This is a simplistic but extremely easy to digest explanation and animation of a nuclear reactor. Nuclear energy has a bad rap due to pop culture but is a highly clean and safe form of energy. It’s really useful to understand this process and how these systems are designed and, ideally, isolated.
  • Nuclear Power Station
    • Building from the last video, here’s another video diving deeper into nuclear power. What you should focus on here is the design and engineering that go into the safety systems. Safety systems can be bypassed, and there are no ‘unhackable’ things, but this helps you understand just how these systems are designed to be safe by default even if not built with security in mind. The Fukushima event can be observed as a worst-case and extremely unlikely scenario. Learning from it is important; here you’ll find a good video on it.
  • Thermal Power Plant
    • There are many ways to generate power; this video explains thermal power and the complexity of the environment.
  • SCADA Utility 101
    • Rusty Williams has just the right type of southern speaking style that makes an audience want to learn more. The guy is awesome, the video explains SCADA from an electric utility perspective, and this is a must watch.
  • Electric Generation and Transmission
    • Didn’t get enough of Rusty? Here’s another video of him explaining the generation and transmission of electricity.
  • Control Lectures
    • This is a fantastic series by Brian Douglas which covers a wide range of lectures on control systems in a very easy to process way.
  • Safety Systems
    • It’s good to get familiar with safety systems as well. Safety systems can be either active or passive. As an oversimplification, think of these as systems that take control when an unsafe event occurs and help to regulate the process or shut it down safely. Safety can also be the product of good engineering instead of a dedicated system. Either way, there is a trend in the community to integrate safety systems into one device, where the control device is also the safety device. This has cost savings but horrendous cyber security consequences and thus horrible safety consequences.
  • Safety Valves
    • Building on your understanding now of safety systems here’s an example of a safety valve in a process and how it can work to keep the operations, and more importantly the people around it, safe.
  • Industrial Disaster Explanation Videos
    • The U.S. Chemical Safety and Hazard Investigation Board has a number of videos explaining industrial disasters. This is an important resource for understanding what can go wrong in industrial automation regardless of the cause (these are not cyber related but are important to understand as things that cyber could potentially cause if we are not careful). In IT, if things go wrong people do not generally die – in ICS, death, injury, and environmental harm are very real concerns.
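To tie the ideas in the videos above together, here is a hypothetical Python sketch of the sense-decide-act scan a controller runs, with a separate safety interlock layered on top. This is not real PLC code (a real controller would run ladder logic or structured text) and the setpoint names and values are invented for illustration, but the shape of the logic is the point.

```python
# Hypothetical sketch of a control scan plus a separate safety interlock.
# Values and names are invented; real PLC logic looks different but the
# read-sensors -> apply-logic -> drive-actuators cycle is the same idea.

HIGH_LEVEL_SETPOINT = 80.0  # start draining above this tank level (%)
LOW_LEVEL_SETPOINT = 20.0   # stop draining below this level (%)
SAFETY_TRIP_LEVEL = 95.0    # independent high-high trip: drain no matter what

def control_step(level: float, valve_open: bool) -> bool:
    """One scan of simple on/off (bang-bang) level control with hysteresis."""
    if level > HIGH_LEVEL_SETPOINT:
        return True        # open the drain valve
    if level < LOW_LEVEL_SETPOINT:
        return False       # close the drain valve
    return valve_open      # in between: hold the last commanded state

def safety_interlock(level: float, valve_open: bool) -> bool:
    """Separate safety layer: force the valve open on a high-high trip."""
    if level >= SAFETY_TRIP_LEVEL:
        return True
    return valve_open

# Simulate a few sensor readings arriving over successive scans.
valve = False
for reading in [50.0, 85.0, 60.0, 96.0]:
    valve = control_step(reading, valve)
    valve = safety_interlock(reading, valve)  # safety always gets final say
    print(f"level={reading:5.1f}%  drain_valve_open={valve}")
```

Note the design choice the videos warn about: here the safety check is a separate function that always runs last. Collapsing control and safety into one path (or one device) saves money but means a single compromise or bug defeats both layers at once.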

Intro to Computer and Network Security

There are a lot of resources below in the form of papers (especially the SANS Reading Room), which are all great. However, you really need to get hands on, so many of the resources are focused on tools and data sets. Try to read up as much as possible and then dive deeply into hands-on learning.

  • The Sliding Scale of Cyber Security
    • I wrote this paper specifically to address the nebulous nature of “cyber security.” When people say they specialize in cyber security, what exactly does that mean? I put forth that there are 5 categories of investment that can be made. The prioritization for the value towards security should be towards the left hand side of the scale. It is ok to invest in multiple categories at once but understand the true return on investment you’re getting versus the cost.
  • VMWare
    • You’ll want to be able to set up Virtual machines (VMs) to get hands on with files and various security tools. VMWare is a great choice as is VirtualBox. VMWare has a free version you’ll want to use (Player). Don’t worry about getting Workstation or Player Pro until later when you are more experienced and want to save snapshots (copies of your VM to revert back to). Below you’ll find a sample video on VMs, feel free to Google around for better understanding.
  • Security Onion
    • You’re going to want to get hands on with the files presented in this guide; Security Onion is an amazing collection of free tools to do just that with a focus on network security monitoring and traffic analysis.
  • SANS’ SIFT
    • If you’re super cool you’ll want to get into forensics at some point; the SIFT VM from SANS is a collection of tools you’ll need to get started.
  • REMnux
    • Before you try out reverse engineering malware (REM) you’ll want to have a safe working environment to do so. This is not a beginner topic, but at some point you’ll likely want to examine malware, and Lenny’s REMnux VM is a safe place to do that.
  • Malware Traffic Analysis
    • Brad’s blog on malware traffic analysis is one of the best resources in the community. It combines sample files with his walk throughs of what they are and how to deal with them. You can learn a lot this way very quickly.
  • Open Security Training
    • This website is dedicated to open (free) security training. A number of qualified professionals have dedicated time to teach everything from the basics of security to advanced reverse engineering concepts. You could spend quite a while on this website’s courses, and all of them would make you more capable in this field. There are often full virtual machines (VMs), slides, and videos for the courses.
  • Sample PCAPs from NETRESEC
    • These packet capture samples are invaluable to learning how our systems interact on the network. Take a tool like Wireshark and analyze these files to get familiar with them and the practice (Wireshark will continually be your friend in any field you specialize in).
  • DEFCON Capture the Flag Files
    • DEFCON has made available their files (and often times walkthroughs) for their capture the flag contests. These range from beginner to advanced concepts in offensive security practices such as red teaming. Learning how to break into systems and how they fail is great for defense. It’s not required but it can be helpful.
  • Iron Geek
    • This is an invaluable collection of videos from conferences around the community. If you’re looking for a specific topic it’s a good idea to search these conference videos. Felt like you missed out on the last decade of security? Don’t worry most of it’s captured here.
  • SANS Reading Room
    • The SANS Institute is the largest and most trusted source of cyber security training. Their Reading Room is a free collection of papers written by students and instructors covering almost every topic in security.
  • Krebs on Security
    • Krebs puts together a great blog doing quality investigative research on breaches, incidents, and cyber security topics that are newsworthy. While doing your self-education keep an eye out for breaking and exciting stories.
  • Honeynet Project
    • Consider this a capstone exercise. Read up on honeypots and learn to deploy a honeypot such as Conpot. The idea is that to run a honeypot correctly you’ll have to learn about safeguarding your own infrastructure, setting up proxies and secure tunnels, managing cloud based infrastructure such as an EC2 server, performing traffic analysis on activity in the honeypot, malware analysis on discovered capabilities, and eventually incident response and digital forensics off of the data provided to explore the impact to the system. Working up to this point and then running a successful honeypot for any decent length of time really helps develop and test out a wide range of skills in the Architecture, Passive Defense, Active Defense, and (potentially in the form of Threat Intel) Intelligence categories of the Sliding Scale of Cyber Security.
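As a small companion exercise to the traffic-analysis resources above, it helps to know what tools like Wireshark are actually reading. A libpcap capture file is simply a 24-byte global header followed by per-packet records (a 16-byte record header plus the captured bytes). The sketch below, using only Python’s standard library, builds a tiny capture in memory and walks its records; the packet payloads are dummy bytes for illustration.

```python
import struct

# Sketch of the libpcap file format: a 24-byte global header followed by
# per-packet records, each a 16-byte header plus the captured bytes.

def build_pcap(packets):
    """Build a minimal little-endian pcap file in memory."""
    # magic, version 2.4, thiszone, sigfigs, snaplen, linktype (1 = Ethernet)
    header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
    body = b""
    for data in packets:
        # ts_sec, ts_usec, included length, original length
        body += struct.pack("<IIII", 0, 0, len(data), len(data)) + data
    return header + body

def count_packets(raw):
    """Walk the record headers and count captured packets."""
    magic, = struct.unpack_from("<I", raw, 0)
    assert magic == 0xA1B2C3D4, "not a little-endian pcap file"
    offset, count = 24, 0
    while offset < len(raw):
        _, _, incl_len, _ = struct.unpack_from("<IIII", raw, offset)
        offset += 16 + incl_len  # skip record header plus captured bytes
        count += 1
    return count

capture = build_pcap([b"\x00" * 60, b"\x01" * 42, b"\x02" * 100])
print(count_packets(capture))  # -> 3
```

Parsing the format by hand once makes Wireshark’s packet list far less magical, and the same record-walking pattern applies when you script over real captures such as the NETRESEC samples.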

Intro to Control System Cyber Security

Cybersecurity is not a new topic, but in ICS it is mostly unexplored. The hardest part for most folks is learning who to listen to and what resources to read. There are a lot of “experts” out there who will quickly lead you astray; look at people’s resumes to see if they have had the opportunity to do what they are speaking to you about. Just because they don’t have experience doesn’t mean they are necessarily wrong, but it’s an easy check. As an example, if someone calls themselves a “SCADA Security Guru” or something like a “thought leader” but they’ve only ever been a Chief Marketing Officer of an IT company, that should be a red flag. It is important to be very critical of information in this space but continually push forward to try to make the community better. Below are some trusted resources to help you on your journey.

  • An Abbreviated History of Automation and ICS Cybersecurity
    • This is a great SANS paper looking at the background on ICS cybersecurity. Well worth the read to make sure you understand many of the events that have occurred over the past twenty years and how they’ve inspired security in ICS today.
  • SANS ICS Library
    • This is the SANS ICS library which contains a number of posters and papers to get you started. Reference the blog as well for good explorations of topics. I write the Defense Use Case series as well which explores real and hyped up ICS attacks and lessons learned from them.
  • SCADAHacker Library
    • Joel has a fantastic collection of papers on ICS security, standards, protocols, systems, etc. Lots of valuable content in this collection.
  • The ICS Cyber Kill Chain
    • The attacks we are concerned most with on ICS take a different approach than traditional IT. This is a paper I wrote with Michael Assante exploring this and detailing the steps an adversary needs to take to accomplish their goals.
  • Analyzing Stuxnet (Windows Portion)
    • This is Bruce Dang’s talk at the 27th CCC in Germany on his experience analyzing Stuxnet. He was at Microsoft and was one of the first researchers to analyze it. This gives a good understanding of the Windows portion of the analysis. I show this video, even though it’s a bit more advanced, to highlight that there are often an IT and an operations technology (OT) side of analysis.
  • Analyzing Stuxnet (ICS Portion)
    • Ralph Langner was responsible for deep diving into Stuxnet’s ICS payload portion. This talk gives a good understanding of the OT side of the analysis.
  • To Kill a Centrifuge – Stuxnet Analysis
    • This is Ralph Langner’s excellent paper exploring the technical details of the Stuxnet malware and, most importantly, the ICS-specific payload and impact. It is a good idea to read through the paper and Google the terms you do not understand.
  • SANS ICS Defense Use Case #5 – Ukraine Power Grid Attack
    • This is a paper I wrote with Michael Assante and Tim Conway, released through the E-ISAC, on our analysis of the Ukraine power grid attack in 2015. There are also recommendations for defense at each level of the ICS kill chain (applying one control is never enough to stop attacks).
  • Perfect ICS Storm
    • Glenn wrote a great paper looking at the interconnectivity of ICS and the networks around them with considerations on how it impacts monitoring and viewing the control systems.
  • Network Security Monitoring in ICS 101
    • Here is a great intro talk on network security monitoring in an ICS by Chris Sistrunk at DEFCON 23. Network security monitoring is exceptionally useful in ICS because it can be done with minimal data sets and passively which works inside the confines of the safety and reliability requirements of an ICS network.
  • Achieving Network Security Monitoring Visibility with Flow Data
    • A SANS webcast with myself and Chris Sanders exploring ICS network security monitoring and showing off his tool FlowBAT.
  • S4 Videos
    • The S4 conference run by Dale Peterson is a great community resource. He has posted a number of the conference presentations which will give you a great look at the ICS security community especially from the researcher perspective.
  • Defense Will Win
    • Dale Peterson’s excellent S4 talk that has an upbeat attitude of “defense will win.” This is something I completely agree with and for a few years now I have been championing the phrase “Defense is Doable” to help folks not get down when it comes to ICS cyber security. It may seem like the hardest challenge out there but it’s worthwhile and these are the most defensible environments on the planet; maybe not the most defended – but we will get there.  
  • The ICS Cyber Security Challenge
    • This is an annual challenge I put on sponsored by SANS which gives you access to questions and data sets for helping you progress your ICS cyber security skill set.

Recommended ICS Cybersecurity Books

  • Rise of the Machines: A Cybernetic History
    • It seems a bit odd to put a non-technical book as my first recommendation but I assure you it is with reason. Dr. Thomas Rid wrote this book to attempt to fully understand the history, implications, and usages of the word “cyber”. Delightfully, control systems have a major role throughout the book. It was control systems that got us started with “cybernetics”, which eventually gave us the “cyber” that fills our daily lives.
  • Handbook of SCADA/Control Systems Security
    • Robert (Bob) Radvanovsky and Jacob (Jake) Brodsky put together this wonderful collection of articles from people throughout the community. It covers a wide variety of topics from a wide variety of personalities and professionals.
  • Protecting Industrial Control Systems from Electronic Threats
    • Joe Weiss is a polarizing individual in the community but only because of how passionately he cares about the industry and how long he’s been in the community. Many of us here today in the community owe much to Joe. The scars he carries are from forging a path that has made ICS security much more mainstream.
  • Industrial Network Security
    • Eric Knapp and Joel Langill wrote this book looking specifically at the network security side of ICS. It’s a fantastic resource exploring different technologies and protocols by two professionals I’m glad to call peers and friends.
  • Hacking Exposed: Industrial Control Systems
    • This book takes a penetration testing focus on ICS and talks about how to test and assess these systems from the cybersecurity angle while doing it safely and within bounds of acceptable use inside of an ICS. It’s written by Clint Bodungen, Bryan Singer, Aaron Shbeeb, Kyle Wilhoit, and Stephen Hilt who all are trusted professionals in the industry.

Recommended Professional Training

You in no way need certifications or professional training to become great in this field. However, sometimes both can help either for job opportunities, getting a raise, or polishing up some skills you’ve developed. I highly encourage you to learn as much as you can before getting into a professional class (the more you know going in the more you’ll take away) and I encourage you to try to find an employer to pay your way (they aren’t cheap). If your employer doesn’t have a training policy it’s a good time to try and find a new employer. Here are two professional classes I like for ICS cyber security training (I’m biased because I teach at SANS but I teach there because I believe in what they provide).

  • SANS ICS 410 – ICS/SCADA Essentials
    • This class is designed to be a bridge course; if you are an ICS person who wants to learn security, or a security person who wants to learn ICS, this course offers the bridge between those two career fields and offers you an introduction into ICS cyber security.
  • SANS ICS 515 – ICS/SCADA Active Defense and Incident Response
    • This is the class I authored at SANS teaching folks about targeted threats (such as nation-state adversaries or well funded crime groups) that impact ICS and how to hunt them in your environment and respond to incidents.
  • CYBATI 
    • Matt Luallen runs the CYBATI class. It’s a hands-on class that’s been tried and tested and is popular around the community. He sometimes teaches it at SANS events and also teaches at other events. Matt was one of the first people I met in the ICS security community and has been like a brother to me over the years; he’s a fantastic resource for the community and, more importantly, he’s just a really good person. Learning from him (and getting to use his CYBATIworks kit, a really useful training kit for sale) is something everyone should get to do at some point in their career.

Recommended Conferences

No matter how much time you spend reading or practicing eventually you need to become part of the community. Contributions in the form of research, writing, and tools are always appreciated. Contributions in the form of conference presentations are especially helpful as they introduce you to other interested folks. The ICS cybersecurity community is an important one on many levels. It’s one of the best communities out there with hard working and passionate people who care about making the world a safer place. Below are what I consider the big 5. These conferences are the ones that are general ICS cyber security (not a specific industry such as API for oil and gas or GridSecCon for electric sector) although those are valuable as well.

  • SANS ICS Security Summit
    • For over a decade the SANS ICS Security Summit has been a leading conference bringing together researchers, industry professionals, and government audiences. The page above links to the various SANS ICS events but look for the one that says “ICS Security Summit” each year. It is usually held in March at Disney World in Orlando, Florida. Its strong suit is the educational and training aspects, not only because of the classes but also because of the strong industry focus.
  • DigitalBond’s S4
    • The S4 conference is a powerhouse of leading ICS security research. Dale puts on a fantastic conference every year (now with a European and Japanese venue as well each year) that brings together some of the most cutting edge research and ideas. S4 in the US is often held in January in Florida.
  • The ICS Cyber Security Conference (WeissCon)
    • Affectionately known as WeissCon after its founder Joe Weiss, the conference is now owned and operated by SecurityWeek and usually runs in October at different locations each year in the US (Georgia is usually a central location for the conference though). The conference brings together a portion of the community not often found at the other events and has strong buy-in from the government community as well as the vendor community.
  • The ICS Joint Working Group (ICSJWG)
    • The ICSJWG is a free conference held twice a year by the Department of Homeland Security. I often encourage people to go to ICSJWG first as an intro into the community; then to the SANS ICS Security Summit for more of a view into the asset owner community and to get training; then to S4 for the latest research; then to WeissCon to see the portions of the community and vendor audience not represented elsewhere; and finally to 4SICS for an international view. It is perfectly ok to go to all five of the big conferences a year (I do) but if you need a general path that is the one I would follow initially.
  • 4SICS
    • 4SICS is held every year in Stockholm, Sweden, usually in October, and is a fantastic collection of ICS professionals from around Europe. The conference usually attracts the same type of research and big-name audience that you would find at S4 but with deep roots in Europe, as represented by its founders Erik and Robert. They are two of the friendliest people in the ICS community and have a wealth of experience from decades of defending infrastructure. Stockholm is cold in the winter but the people and their optimism will keep you warm.

This is just a small collection of a lot of the fantastic resources out there. I will continually try to update it as especially good materials are made available. Always fight to be part of the community and interact – that is where the real value in learning is. Never wait to have someone show you though; even the “experts” are usually only expert in a few things. It is up to you to teach yourself and involve yourself. We as a community are waiting with open arms.

Intelligence Defined and its Impact on Cyber Threat Intelligence

August 25, 2016

Michael Cloppert wrote a great piece to argue for a new definition of cyber threat intelligence. The blog is extremely well written (I personally love the academic style and citations) and puts forth a good discussion on operations. Sergio Caltagirone published an equally valuable rebuttal in which he agreed with Mike that there is accuracy missing from current cyber threat intelligence definitions but noted that Mike focused too much on operations. The purpose of this blog is not to rebut their findings but to add to the conversation. In many aspects I agree with both Mike and Sergio; I would highlight, though, that the forms of intelligence discussed are very policy focused (sometimes even military focused) and influence how we define cyber threat intelligence. I do not envision that between these three blogs we’ve settled a long-standing debate on intelligence but the intent is to add to the discussion and encourage thoughts by others.

In Mike’s piece the definition he presented for the field of cyber threat intelligence is the “union of cyber threat intelligence operations and analysis”, each of which he previously defined. Sergio responded by stating “Intelligence doesn’t serve operations, intelligence serves decision-making which in turn drives operations to achieve policy outcomes.” I agree with this understanding of intelligence as meeting policy needs, and while Sergio intentionally does not attempt to cover all aspects of intelligence outside of policy, I believe it is important to consider them. Mike teased out at one point that “…’intelligence’ more broadly is a bias toward a particular type of intelligence, and they continue to overwhelmingly focus on geopolitical outcomes.” He gives an example of business intelligence as another form of intelligence and accepts that the basis of intelligence is interpreted information with an assessment to advance an interest. This is where he stops though, in an effort to stay focused on defining cyber threat intelligence. This is where I would like to begin.

Dr. Michael S. Goodman, a professor of intelligence studies at King’s College London, wrote a piece for the CIA’s Center for the Study of Intelligence in which he discussed the challenges and benefits of studying and teaching intelligence. He specifically noted that “The academic study of intelligence is a new phenomenon” although the field of intelligence itself is very old. More relevant to this blog post, he wrote that “Producing an exact definition of intelligence is a much-debated topic.” Outside of government intelligence, the University of Oregon has a page dedicated to the theories and definitions of intelligence. There, they cite psychologists and educators Howard Gardner, David Perkins, and Robert Sternberg to assign attributes to intelligence and state that it is a combination of the ability to:

  • Learn
  • Pose Problems
  • Solve Problems

These three attributes are core to any definition of intelligence whether it’s business intelligence, emotional intelligence, or military intelligence. Additionally, the distinctly human component of this process, for those of you considering artificial intelligence as you read this, is harder to capture but likely exists in the ability to pose and solve problems. Machines can pose and solve problems to an extent but how they do so sets them apart from humans. More to the point, how each of us poses and solves problems is influenced at some level by bias. That bias is an influence analysts often seek to minimize so that it does not color how we analyze problems and the answers we derive. However, that bias in how we pose and solve problems is likely the only distinctly human component of intelligence. That is a discussion for a longer future piece though.

Further in the University of Oregon piece, different types of intelligences are listed from Gardner, Perkins, and Sternberg. A few are listed below:

  • Linguistic
  • Intrapersonal
  • Spatial
  • Practical
  • Experiential
  • Neural
  • Reflective

These different types of intelligence are not all-encompassing and focus on the psychological more than classic government intelligence. However, they offer a more robust view into what it means to be able to process and analyze information, which is in and of itself core to cyber threat intelligence. I gravitate more towards Robert Sternberg’s understanding of intelligence and specifically his view of experiential and componential intelligence. According to his 1988 and 1997 writings on intelligence, experiential intelligence is “the ability to deal with novel situations; the ability to effectively automate ways of dealing with novel situations so they are easily handled in the future; the ability to think in novel ways.” His understanding of componential intelligence is “the ability to process information effectively. This includes metacognitive, executive, performance, and knowledge-acquisition components that help to steer cognitive processes.”

I enjoy these two the most because they seem to map the closest to the idea of intelligence generation and intelligence consumption. In the field of cyber threat intelligence we often hear vendors, security researchers, and companies talk about “threat intel” and standing up teams to do intel-y things but without specific guidance. There is a stark difference in generating intelligence and in consuming it. Most companies are looking for threat intelligence consumption teams (those that can map their organization’s requirements and search for what is available to help drive defense) not threat intelligence generation teams (those individuals who analyze adversary information to extract knowledge which may or may not be immediately useful). A good team is usually the mix of both but with a clear understanding of which one is the priority and which effort is the goal at any given time. Sternberg’s experiential intelligence speaks more to threat intelligence generation whereas his componential intelligence addresses the ability to process, or consume, intelligence. The definitions are not as simple as this but it is thought provoking.

In reviewing Mike and Sergio’s excellent blog posts with the addition of a wider view on intelligence both from a classical, psychological, and philosophical aspect there are attributes that emerge. These attributes mean that intelligence:

  • Must be analyzed information
    • To perform analysis is a distinctly human trait, likely due to the influence of bias and our efforts to minimize it (i.e. no, $Vendor, your tool does not create intelligence); intelligence is always up to our interpretation and others may have valuable, even competing, interpretations
  • Must meet a requirement
    • Requirements can be wide ranging such as policy, military operations, geo-political, business, friendly forces movements and tactics, or self-awareness; the lack of a requirement would result in intelligence not being useful and by that extension be an inhibitor to intelligence (i.e. overloading analysts with indicators of compromise is not intelligence)
  • Must respect various forms
    • There is no one definition of intelligence but each definition must allow for different ways of interpreting, processing, and using the intelligence

To further qualify to be threat intelligence the presented intelligence must be about threats; threats are not only geo-political in nature but also may encompass insiders. However, I disagree with the notion that there is an unwitting insider threat because the definition of threat I subscribe to must have the following three attributes:

  • Opportunity
    • There must be the ability to do harm. In many organizations this means knowing your systems, people, vulnerabilities, etc.
  • Intent
    • There must be an intention to do harm; if it is unintentional the harm is still as impactful but it cannot be properly classified as a threat. Understanding adversary intention is difficult, but this is where analysis comes in to build an understanding of the threat landscape
  • Capability
    • The adversary must have some capability to do you harm. This may be malware, it may be PowerShell left running in your environment, and it could be non-technical such as the means to influence public perception through leaked documents
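The three attributes above reduce to a simple conjunction: an actor qualifies as a threat only when opportunity, intent, and capability are all present. A minimal Python sketch of that test follows; the `Actor` record and its field names are hypothetical, chosen for illustration rather than taken from any existing tool:

```python
from dataclasses import dataclass

@dataclass
class Actor:
    """Hypothetical record of what an analyst assesses about a potential adversary."""
    name: str
    opportunity: bool  # can they reach systems or people where harm is possible?
    intent: bool       # is there evidence they intend to do harm?
    capability: bool   # do they have the means (malware, access, influence)?

def is_threat(actor: Actor) -> bool:
    # Under this definition all three attributes must be present; an
    # unwitting insider lacks intent and so is not classified as a threat.
    return actor.opportunity and actor.intent and actor.capability

careless_admin = Actor("unwitting insider", opportunity=True, intent=False, capability=True)
intrusion_group = Actor("targeted intrusion group", opportunity=True, intent=True, capability=True)

print(is_threat(careless_admin))   # False: impactful perhaps, but not a threat
print(is_threat(intrusion_group))  # True: all three attributes present
```

Note that the unwitting insider fails only the intent check, which mirrors the argument that unintentional harm, however impactful, falls outside this definition of threat.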

Therefore, I use the following definition, heavily inspired by classic definitions, for intelligence: “The process and product resulting from the interpretation of raw data into information that meets a requirement.” The product may be knowledge, it may be a report, it could be tradecraft of an adversary, etc. Further, I use the following definition for cyber threat intelligence “The process and product resulting from the interpretation of raw data into information that meets a requirement as it relates to the adversaries that have the intent, opportunity and capability to do harm.” (Note that in this definition of cyber threat intelligence the adversary is distinctly human. Malware isn’t the threat; the human or organization of humans intending you harm is the threat.) Each definition is concise but open-ended enough to serve multiple purposes beyond military intelligence.

I in no way think that this solves any aspect of this debate. And I do not feel that my definitions actually conflict with what Mike and Sergio have put forward; they are instead meant simply as an extension of the topic. Mike and Sergio are both extremely competent individuals that I am privileged to call my friends, peers, and on numerous occasions mentors. However, their blogs inspired me to explore the topic for myself and this blog was simply my way of sharing my thoughts on my findings. I hope it has been useful in some manner to your own exploration.