
Claims of a Cyber Attack on Iran’s Abadan Oil Refinery and the Need for Root Cause Analysis

October 20, 2019

This blog was originally posted by me here.

__________________________________________________

 

On October 20th, 2019 the Twitter account @BabakTaghvaee posted that there was a fire at the Abadan Oil Refinery in Iran; notably, the account claimed that the fire was the result of a confirmed cyber attack. A video of the fire was posted, and the news organization Reuters had reported on the fire just prior to the tweet as well. The Reuters reporting cited Iranian state broadcaster IRIB in saying that the fire was in a canal carrying waste from the oil refinery and was at that time under control. Various posts on social media took advantage of the claim to spread the story of a cyber attack and assert that it was “probably” a result of the alleged Iranian attacks on Saudi Aramco. A few commentators linked to the Reuters story, published on October 16th, about a secret cyber attack carried out by the U.S. on Iran as proof, falling victim to the classic post hoc ergo propter hoc fallacy of assuming correlation equals causation.

The purpose of this blog is to add some context to such events in order to avoid hype, and to clearly point out a gap the industrial cybersecurity community has around root cause analysis: the importance of setting forth a strategy across collection, visibility, and detection so that response scenarios can ever account for such processes.

Cyber attacks absolutely have the capability to cause devastating effects. Adversaries have become more aggressive over the last few years in this space and are demonstrating increased knowledge and sophistication with regards to causing physical effects through cyber intrusions and capabilities. In 2017, the TRISIS malware leveraged by XENOTIME was responsible for the shutdown of a Saudi Arabian petrochemical company; the adversary failed in their likely actual intent, which was to kill people at the facility by targeting safety systems. One of the interesting details in that case, though, is that the adversary tried multiple times to achieve their effect. The first time TRISIS was deployed it failed, the plant shut down, and the personnel involved attempted to do root cause analysis. Root cause analysis is well understood and practiced in the engineering and operations communities. However, those practices rarely fully consider a cyber component.

In the TRISIS case, the plant engineers could not determine what went wrong; i.e., they did not identify the cyber attack during or after the event and went back into operations, giving the adversary another opportunity. It is not that the cyber attack was undetectable; it was perfectly detectable through a variety of detection approaches in the industrial networks, but the defenders at that site were not performing industrial-specific cyber detection. Because of the lack of detection capabilities, as well as the collection capabilities feeding into them, some of the evidence was not available after the attack to properly get to a root cause analysis of the event, and what evidence was available was easy to miss. This is like trying to photograph the getaway car of a robbery after the car is already gone; you can still find other evidence such as tire tracks, but it would have been nice to have the photo of the license plate. Oftentimes there are forensic practices that can take place after the attack even without good detection capabilities, but they can be easy to miss if not prepared for properly in the incident response procedures or highlighted through threat detection and intrusion analysis.

In the Abadan case, from what we know of such incidents and normal engineering practices around root cause analysis, it is unlikely that the personnel on site have had any opportunity yet to properly do root cause analysis. Refinery fires are not rare, but they are serious events that the engineering and operations community usually handle maturely, with safety as the number one priority. While personnel are still trying to get the fire under control it is very unlikely that anyone is performing root cause analysis of the event, let alone one that includes a cyber component. Proper root cause analysis including cyber forensics is one of the most difficult tasks to achieve in industrial control system (ICS) networks. The ICS cybersecurity community is maturing rapidly but is still very far from being able to perform this level of task reliably.

It is my estimate that only a small subset of the community is gaining visibility into ICS networks today, though the progress we are seeing is encouraging and a hallmark of increasing maturity. A smaller subset of that community is pursuing a collection and detection strategy factored into the products, processes, and training they implement. A much smaller subset still is tying this into the types of events they want to be able to respond to and perform root cause analysis on. Even if Abadan’s oil refinery were world leading in this regard, it is unlikely enough time has passed for anyone to properly analyze the information collected. For this reason, I would assess that any claims of a cyber attack are immature at this point and unlikely to be founded in proper evidence. Should cyber be considered though? Absolutely, especially with the increasing tension and demonstrated capabilities of adversaries. But today the larger industry lives closer to Schrödinger’s ICS than to organizations reliably achieving root cause analysis.

It is my recommendation to the ICS cybersecurity community that events like this be used to highlight the gaps in our current defenses. We should not hype up such events but instead look inward and determine whether we could answer similar questions of “was it a cyber attack?” in our own industrial and operations networks. I often recommend that organizations start with a few scenarios they want to be able to respond to, taken from both intelligence-driven and consequence-driven scenarios. From those, determine what types of requirements, such as root cause analysis, reliability, and safety, will be important to the organization and its stakeholders. Develop incident response plans from those events and work backwards to define the type of detection you’ll need to enable that incident response and the type of collection you’ll need to enable that detection. That will help define your visibility requirements. Instead of starting with visibility and working forward, potentially never getting to the results you need, start with the end in mind and work backwards to ensure the visibility requirements are aligned.
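The working-backwards approach above can be sketched as a simple data structure. The scenario name, detections, and collection sources below are hypothetical illustrations of the method, not a definitive taxonomy or a real site's plan:

```python
# Hypothetical example of working backwards from one response scenario
# to detection, then collection, and finally visibility requirements.
# None of these entries come from a real environment; adapt them to
# your own intelligence- and consequence-driven scenarios.
scenario = {
    "name": "safety-system compromise (TRISIS-style)",
    "requirements": ["root cause analysis", "safety", "reliability"],
    "detection": [
        "controller logic downloads outside approved change windows",
        "unexpected engineering workstation to safety system traffic",
    ],
    "collection": [
        "controller and safety system event logs",
        "network capture on the control and safety segments",
        "engineering workstation host logs",
    ],
}

# The visibility requirements are whatever the collection plan demands,
# rather than whatever happens to be easy to instrument first.
visibility_requirements = sorted(set(scenario["collection"]))
for item in visibility_requirements:
    print(item)
```

The point of structuring it this way is that visibility falls out of the scenario rather than driving it; if a collection source on the list is not available today, that gap is now explicit and prioritized.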

For more information on these topics I would recommend Dragos’ Collection Management Framework, Four Types of Threat Detection, and Consequence-Driven ICS Cybersecurity papers, as well as the Year in Review reports, which should help you think about the challenges ahead and operate safer and more reliable infrastructure.

Analytical Leaps and Wild Speculation in Recent Reports of Industrial Cyber Attacks

December 31, 2016

“Judgement is what analysts use to fill gaps in their knowledge. It entails going beyond the available information and is the principal means of coping with uncertainty. It always involves an analytical leap, from the known into the uncertain.”

– Chapter 4, Psychology of Intelligence Analysis, Richards J. Heuer.

 

Analytical leaps, as Richards J. Heuer said in his must-read book Psychology of Intelligence Analysis, are part of the process for analysts. Sometimes, though, these analytical leaps can be dangerous, especially when they are biased, misinformed, presented in a misleading way, or otherwise not made using sound analytical processes. Analytical leaps should be backed by evidence or, at a minimum, should include the evidence leading up to the leap. Unfortunately, when multiple analytical leaps are made in series, they can lead to entirely wrong conclusions and wild speculation. There have been three interesting stories relating to industrial attacks this December, as we try to close out 2016, that are worth exploring on this topic. It is my hope that looking at these three cases will help everyone be a bit more critical of information before alarmism sets in.

The three cases that will be explored are:

  • IBM Managed Services’ claim of “Attacks Targeting Industrial Control Systems (ICS) Up 110%”
  • CyberX’s claim that “New Killdisk Malware Brings Ransomware Into Industrial Domain”
  • The Washington Post’s claim that “Russian Operation Hacked a Vermont Utility, Showing Risk to U.S. Electrical Grid Security, officials say”

 

“Attacks Targeting Industrial Control Systems (ICS) Up 110%”

I’m always skeptical of metrics that have no immediately present quantification. As an example, IBM Managed Security Services posted an article stating that “attacks targeting industrial control systems increased over 110 percent in 2016 over last year’s numbers as of Nov. 30.” But there is no data in the article to quantify what that means. Is a 110% increase an increase from 10 attacks to 21 attacks? Or from 100 attacks to 210?

The only way to understand what that percentage means is to leave this report and go download the IBM report from last year and read through it (never make your reader jump through extra hoops to get information that is your headline). In their 2015 report IBM states that there were around 1,300 attacks in 2015 (Figure 1). This would mean that in 2016 IBM is reporting they saw around 2,700 ICS attacks.
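The implied 2016 figure can be sanity-checked with quick back-of-the-envelope arithmetic, assuming the ~1,300 reading from Figure 1 and treating “over 110 percent” as a 110% increase:

```python
# Back-computing IBM's implied 2016 count from their 2015 baseline.
# Both inputs are approximations read from the reports, not exact data.
attacks_2015 = 1300           # approximate 2015 count (from Figure 1)
increase = 1.10               # "increased over 110 percent"
attacks_2016 = attacks_2015 * (1 + increase)
print(round(attacks_2016))    # around 2,700
```

This is exactly the kind of quantification the article itself should have provided, rather than leaving readers to dig up last year’s report.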


Figure 1: Figure from IBM’s 2015 Report on ICS Attacks

 

However, a few questions linger. First, this is a considerable jump from what they were tracking previously and from their 2014 metrics. IBM states that the “spike in ICS traffic was related to SCADA brute-force attacks, which use automation to guess default or weak passwords.” This is an analytical leap they make based on what they’ve observed. But it would be nice to know if anything else has changed as well. Did they bring up more sensors, gain more customers, increase staffing, etc.? The stated reason alone would not account for the increase.

Second, how is IBM defining an attack? Attacks in industrial contexts have a very specific meaning; an attempt to brute-force a password simply wouldn’t qualify. They also note that a pentesting tool that could be used against the ICS protocol Modbus was released on GitHub in January 2016. IBM states that the increase in metrics was likely related to this tool’s release. That is speculation, though, as they do not give any evidence to support the claim. However, it leads to my next point.

Third, is this customer data or honeypot data? If it’s customer data, is it from the ICS or simply the business networks of industrial companies? And if it’s honeypot data, it would be good to separate that data out, as honeypot data has often been abused to misreport “SCADA attack” metrics. From the discussion of brute-force logins and a pentesting tool for a serial protocol released on GitHub, my speculation is that this refers mostly to honeypot data. Honeypots can be useful but must be used in specific ways when discussing industrial environments and should not be lumped into “attack” data from customer networks.

The article also makes another analytical leap when it states, “The U.S. was also the largest target of ICS-based attacks in 2016, primarily because, once again, it has a larger ICS presence than any other country at this time.” The leap does not seem informed by anything other than the hypothesis that the U.S. has more ICS. And again there is no quantification. Where is this claim coming from, how much larger is the U.S. ICS presence than other countries’, and is the quantity of attacks proportional to the U.S. ICS footprint when compared to other nations’ quantities of industrial systems? I would again speculate that what they are observing has far more to do with where they are collecting data (how many sensors do they have in the U.S. compared to, say, China?).

In closing out the article IBM cites three “notable recent ICS attacks.” The three case studies chosen were the SFG malware that targeted an energy company, the New York dam, and the Ukrainian power outage. While the Ukrainian power outage is good to highlight (although they don’t actually highlight the ICS portion of the attack), the other two cases are poor choices. The SFG malware targeting an energy company is something that was already debunked publicly, which would have been easy to find prior to creating this article. The New York dam was also something that was largely hyped by the media and publicly downplayed as well. More worrisome is that the way IBM framed the New York dam “attack” is incorrect. They state: “attackers compromised the dam’s command and control system in 2013 using a cellular modem.” Except it wasn’t the dam’s command and control system; it was a single read-only human machine interface (HMI) watching the water level of the dam. The dam had a manual control system (i.e., you had to crank it open by hand).

Or more simply put: the IBM team is likely doing great work and likely has people who understand ICS…you just wouldn’t get that impression from reading this article. The information is largely inaccurate, there is no quantification to their numbers, and their analytical leaps are unsupported with some obvious lingering questions as to the source of the data.

 

“New Killdisk Malware Brings Ransomware Into Industrial Domain”

CyberX released a blog noting that they have “uncovered new evidence that the KillDisk disk-wiping malware previously used in the cyberattacks against the Ukrainian power grid has now evolved into ransomware.” This is a cool find by the CyberX team, but they don’t release hashes or any technical details that could be used to help validate the find. However, the find isn’t actually new. (I’m a bit confused as to why CyberX states they uncovered this new evidence when they cite in their blog an ESET article with the same discovery from weeks earlier. I imagine they found an additional strain, but they don’t clarify that.) ESET had disclosed the new variant of KillDisk being used by a group they call the TeleBots gang and noted they found it being used against financial networks in Ukraine. So where’s the industrial link? Well, there is none.

CyberX’s blog never details how they make the analytical leap from “KillDisk now has ransomware functionality” to “and it’s targeting industrial sites.” Instead, it appears the entire basis for their hypothesis is that Sandworm previously used KillDisk in the 2015 Ukraine ICS attack. While this is true, the Sandworm team has never targeted just one industry. iSIGHT and others have long reported that the Sandworm team has targeted telecoms, financial networks, NATO sites, military personnel, and other non-industrial targets. But it’s also not known for sure that this is still the Sandworm team. The CyberX blog does not state how they link Sandworm’s attacks on Ukraine to TeleBots’ usage of ransomware. Instead they just cite ESET’s assessment that the teams are linked. But ESET itself stated they aren’t sure; it’s just an assessment based on observed similarities.

Or more simply put: CyberX put out a blog saying they uncovered new evidence that KillDisk had evolved into ransomware, although they cite ESET’s discovery of this evidence from weeks prior with no other evidence presented. They then make the claim that the TeleBots gang, the one using the ransomware, evolved from Sandworm, but they offer no evidence and instead again just cite ESET’s assessment. They offer absolutely no evidence that this ransomware KillDisk variant has targeted any industrial sites. The logic seems to be: “Sandworm did Ukraine, KillDisk was in Ukraine, Sandworm is the TeleBots gang, TeleBots modified KillDisk to be ransomware, therefore they are going to target industrial sites.” When doing analysis, always be aware of Occam’s razor and do not make too many assumptions to force a hypothesis to be true. There could be evidence of ransomware targeting industrial sites, and it does make sense that it would eventually happen. But no evidence is offered in this article, and both the title and thesis of the blog are completely unfounded as presented.

 

“Russian Operation Hacked a Vermont Utility, Showing Risk to U.S. Electrical Grid Security, officials say”

This story is more interesting than the others, but it is too early to really know much. The only thing known at this point is that the media is already overreacting. The Washington Post put out an article on a Vermont utility getting hacked by a Russian operation, with calls from the Vermont Governor condemning Vladimir Putin for attempting to hack the grid. Eric Geller pointed out that the first headline the Post ran with was “Russian hackers penetrated U.S. electricity grid through utility in Vermont, officials say” but they changed it to “Russian operation hacked a Vermont utility, showing risk to U.S. electrical grid, officials say.” We don’t know exactly why it was changed, but it may have been because the Post overreacted when it heard the Vermont utility found malware on a laptop and simply assumed it was related to the electric grid. Except, as the Vermont (Burlington) utility pointed out, the laptop was not connected to the organization’s grid systems.

Electric and other industrial facilities have plenty of business and corporate network systems that are often not connected to the ICS network at all. It’s not good for them to get infected, and they aren’t always disconnected, but it’s not worth alarming anyone without additional evidence. However, the bigger analytical leap being made is that this is related to Russian operations.

The utility notes that they took the DHS/FBI GRIZZLY STEPPE report indicators and found a piece of malware on the laptop. We do not know yet if this is a false positive, but even if it is not, there is no evidence yet to say that this has anything to do with Russia. As I pointed out in a previous blog, the GRIZZLY STEPPE report is riddled with errors, and the indicators put out were very non-descriptive data points. The one YARA rule they put out, which the utility may have used, was related to a piece of malware that is publicly downloadable, meaning anyone could use it. Unfortunately, after the story ran with its hyped-up headlines, Senator Patrick Leahy released a statement condemning the “attempt to penetrate the electric grid” as a state-sponsored hack by Russia. As Dmitri Alperovitch, CTO of CrowdStrike, which responded to the Russian hack of the DNC, pointed out: “No one should be making attribution conclusions purely from the indicators in the USCERT report. It was all a jumbled mess.”

Or more simply put: a Vermont utility acted appropriately and ran indicators of compromise from the GRIZZLY STEPPE report as the DHS/FBI instructed the community to do. This led them to find a match to an indicator on a laptop separated from the grid systems, though it has not yet been confirmed that malware was present. The Vermont Governor Peter Shumlin then publicly chastised Vladimir Putin and Russia for trying to hack the electric grid. U.S. officials then inappropriately gave additional information and commentary to the Washington Post about an ongoing investigation, which led them to run with the headline that this was a Russian operation. After all, the indicators supposedly were related to Russia because the DHS and FBI said so, and supposedly that’s good enough. Unfortunately, this also led a U.S. Senator to come out and condemn Russia for state-sponsored hacking of the utility.

Closing Thoughts

There are absolutely threats to industrial environments including ICS/SCADA networks. It does make sense that ICS breaches and attacks would be on the rise, especially as these systems become more interconnected. It also makes perfect sense that ransomware will be used in industrial environments just like any other environment that has computer systems. And yes, the attribution of the DNC compromise to Russia is very solid, based on private sector data with government validation. But to make claims about attacks and attempt to quantify them, you actually have to present real data, where that data came from, and how it was collected. To make claims of new ransomware targeting industrial networks, you have to actually provide evidence, not simply make a series of analytical leaps. And to start making claims of attribution to a state such as Russia just because some poorly constructed indicators alerted on a single laptop is dangerous.

Or more simply put: be careful of analytical leaps, especially when they are made without presenting any evidence leading into them. Hypotheses and analysis require evidence; otherwise it is simply speculation. We have enough speculation already in the industrial industry, and more will only lead to increasingly dangerous or embarrassing scenarios, such as a U.S. governor and senator condemning Russia for hacking the electric grid, scaring the public in the process, when we simply do not have many facts about the situation yet.

New Suspected Cyber Attack on Ukraine Power Grid – Advice as Information Emerges

December 19, 2016

Reporting in Ukraine has emerged indicating another suspected cyber attack on the electric grid (the first being the confirmed one in 2015). Initial reporting is often inaccurate or a small view of incidents but it’s worth cautiously watching and seeing what information emerges. Here’s what we know so far:

Reports of Suspected Cyber Attack:
Around noon on December 19th, 2016, reports began to surface of a possible cyber attack on the Ukrainian electric grid. The attack is suspected to have taken place near midnight local Ukraine time on the 17th. The Pivnichna transmission-level substations have been called out as possibly being the site attacked. This is of course concerning for numerous reasons, including the cyber attack on the Ukrainian grid in December 2015 as well as ongoing traditional military actions in Ukraine. The reporting comes from various Ukrainian sources, including a press release from the impacted company Kyivenergo confirming that there was an unintentional outage and that they took actions to restore operations.

Analysis:
The first 24, and often 48, hours of reporting are notoriously bad for OSINT analysts but should still be utilized. Simply exercise caution and do not present information as fact yet. At this point I would assess with low confidence that a cyber attack has occurred. This is not to say there is doubt around the event, only that there are other theories that carry equal weight until more evidence is available. However, based on the sourcing of the information (internal Ukrainian sources) and the Ukrainian grid operators’ experience dealing with a similar situation last year, I have a higher trust level in the sources (thus the low-confidence assessment that the attack is real). We will learn more later, and it may be revealed that the outage was not related to a cyber attack; however, I am aware of an ongoing investigation by Ukrainian authorities, and they are treating a cyber attack as the leading theory for the outage. I will caution again, though, that no one with direct knowledge has confirmed that it was a cyber attack; only that it is the leading theory and that the disconnect was unintentional.

What Should Be Done:
Right now the best action for those not on the ground or working at infrastructure companies is to wait and see if more information is revealed. Journalists should be cautious not to infer or jump to conclusions, and those in the security community should stay tuned for more information. I would recommend journalists contact sources in the area but realize that the information is very preliminary and that those not on the ground in Ukraine will have very little to add to knowledge of the situation.

If you are in the infrastructure (ICS/SCADA) security community, it would be wise to use established channels to send decision makers a situational awareness report on the news; I would note it’s a low-confidence assessment currently due to the lack of firsthand evidence, but that it is a situation worth watching. This should be paired with security staff taking an active defense posture: monitoring the ICS network and looking for abnormal activity. Preliminary information from the investigation underway by the Ukrainian authorities indicates that a remote attack is suspected. I would stay far away from linking this to the Sandworm attack currently (attribution right now is not possible), but I would review the methods by which they achieved the remote attack on Ukraine last year and use that information to hunt for threats. As an example, look in logs for abnormal VPN session lengths, increased frequency of use, and unusual connection request times.
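That hunting advice can be sketched in a few lines of Python. The comma-separated log format below is hypothetical (adapt the parsing to whatever your VPN concentrator actually emits), and the z-score and business-hours thresholds are illustrative assumptions, not tuned values:

```python
# Sketch of the VPN-log hunting described above: flag sessions with
# anomalous durations, off-hours start times, and accounts logging in
# far more often than their peers. Hypothetical log format:
#   "user,2016-12-17T23:05:00,2016-12-18T02:41:00"
from collections import Counter
from datetime import datetime
from statistics import mean, stdev


def parse(line):
    user, start, end = line.strip().split(",")
    s = datetime.fromisoformat(start)
    e = datetime.fromisoformat(end)
    return user, s, (e - s).total_seconds() / 3600.0  # duration in hours


def flag_anomalies(lines, z=3.0, business_hours=range(6, 20)):
    sessions = [parse(line) for line in lines]
    durations = [d for _, _, d in sessions]
    mu = mean(durations)
    sigma = stdev(durations) if len(durations) > 1 else 0.0
    flagged = []
    for user, start, dur in sessions:
        # abnormal session length relative to the population
        if sigma and abs(dur - mu) / sigma > z:
            flagged.append((user, start, "abnormal session length"))
        # connections starting outside normal working hours
        if start.hour not in business_hours:
            flagged.append((user, start, "off-hours connection"))
    # accounts connecting far more often than the typical account
    counts = Counter(u for u, _, _ in sessions)
    typical = mean(counts.values())
    for user, n in counts.items():
        if n > 3 * typical:
            flagged.append((user, None, "unusual login frequency"))
    return flagged
```

A hit from a sketch like this is a lead for an analyst, not a verdict; legitimate maintenance work also happens off-hours, which is why the output lists reasons rather than declaring an intrusion.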

If you happen to be a customer of Dragos, Inc. you will have received a notification already with some recommendations for strategic, operational, and tactical level players. Check your portal and be on the lookout for a briefing request from us if you would like to attend remotely. For the wider community, ensure that you are wary of phishing attempts taking advantage of this possible attack.

In Closing:
My chief recommendation is for everyone to avoid alarmism and utilize this as an opportunity to review logs and information from the ICS and search for TTPs we’ve seen before, such as remote usage of the ICS through legitimate accounts, VPNs, and remote desktop capabilities. If this attack turns out to be real, it is unlikely to involve anything novel that couldn’t have been detected. It’s important to remember that defense is doable – now go do it.

Threats of Cyber Attacks Against Russia: Rationale on Discussing Operations and the Precedent Set

November 6, 2016

Reports that the U.S. government has military hackers ready to carry out attacks on Russian critical infrastructure have elicited a wide range of responses on social media. After I tweeted the NBC article, a number of people responded with how stupid the U.S. was for releasing this information, what poor OPSEC it was to discuss these operations, and even how this constitutes an act of war. I want to use this blog to put forth some thoughts of mine on those specific claims. However, I want to note in advance that this is entirely my opinion. I wouldn’t consider this quality analysis or even insightful commentary, but instead just my thoughts on the matter that I felt compelled to share, since I work in critical infrastructure cyber security and was at one point a “military hacker.”

The Claim

The claim stems from an NBC article noting that a senior U.S. intelligence official shared top-secret documents with NBC News. These top-secret documents apparently indicated that the U.S. has “penetrated Russia’s electric grid, telecommunications networks and the Kremlin’s command systems, making them vulnerable to attack by secret American cyber weapons should the U.S. deem it necessary.” I’m going to make the assumption that this was a controlled leak, given the way it was presented. Additionally, I make this assumption because of the senior officials interviewed for the wider story, including former NATO commander ADM (ret.) James G. Stavridis and former CYBERCOM Judge Advocate COL (ret.) Gary Brown, who likely would not have touched a true “leak”-driven story without some sort of blessing to do so. In other words, before anyone adds that this is some sort of mistake: this was very likely authorized by the President at the request of senior officials or advisers such as the Director of National Intelligence or the National Security Council. The President is the highest authority for deeming material classified or not, and if he decided to release this information, it’s an authorized leak. Going off of this assumption, let’s consider three claims that I’ve seen recently.

The U.S. is Stupid for Releasing This Information

It is very difficult to know the rationale behind actions we observe. This is especially true in cyber intrusions and attacks. If an adversary happens to deny access to a server, did they intend to, or was it accidentally brought down while performing other actions? Did the adversary intend to leave behind references to file paths and identifying information, or was it a mistake? These debates around intent and observations are a challenge that many analysts must carefully overcome. This case is no different.

Given the assumption that this is a controlled leak, it was obviously done with the intention of one or more outcomes. In other words, the U.S. government wanted the information out, and the rationale is likely as varied as the members involved. While discussing a “government,” it’s important to remember that the decision was ultimately in the hands of individuals, likely two dozen at most. Their recommendations, biases, views on the world, insight, experience, etc. all contribute to what they expect the output of this leak to be. This makes it even more difficult to assess why a government would do something, since it’s more important to know the key members in the Administration, the military, and the Intelligence Community and their motivations than the historical understanding of government operations and similar decisions. Considering the decision was likely not ‘stupid’ and was instead made for some intended purpose, let’s explore what two of those purposes might be:

Deterrence

I’m usually not the biggest fan of deterrence in the digital domain, as it has thus far not been very effective and the qualities needed for a proper deterrent (a credible threat and an understood threshold) are often lacking. Various governments lament red lines and actions they might take if those red lines are crossed, but what exactly those red lines are and what the response will be if they are crossed is usually never explored. Here, however, the U.S. government has stated a credible threat: the disruption of critical infrastructure in Russia (the U.S. has shown before that it is capable of doing this). It has combined this with a clear threshold for what it does not want its potential adversary to do: do not disrupt the elections. For these reasons my normal skepticism around deterrence is lessened. However, in my own personal opinion, this is potentially a side effect and not the primary purpose, especially given the form of communication that was chosen.

Voter Confidence

Relations between Russia and the U.S. have been tense this election. Posturing and messaging between the two states has taken a variety of forms, both direct and indirect. This release to NBC, though, is interesting: it would be indirect messaging if aimed at the Russian government, but direct messaging if intended for U.S. voters. My personal opinion (read: speculation) is that it is much more intended for the voters. At one point in the article, NBC notes that Administration officials revealed they had delivered a “back channel warning to Russia against any attempt to influence next week’s vote.” There’s no reason to reiterate a back-channel message in a public article unless the intended audience (in this case, the voters) wasn’t aware of the back-channel warning. The article reads as an effort by the Administration to tell the voters: “don’t worry and go vote; we’ve warned them that any effort to disrupt the elections will be met with tangible attacks instead of strongly worded letters.”

It’s really interesting that this type of messaging to the American public is needed. Cyber security has never been such a mainstream topic before, especially not during an election. This may seem odd to those in the information security community who live with these discussions on a day-to-day basis anyway. But coverage of cyber security has never before been mainstream-media worthy for consistent periods of time. CNN, Fox, MSNBC, and the BBC have all been discussing cyber security throughout the recent election season, ranging from the DNC hacks to Hillary’s emails. That coverage has gotten fairly dark, though, with CNN, NBC, Newsweek, and New York Times articles like this one and prime-time segments telling voters that the election could be manipulated by Russian spies.

This CNN piece directly calls out the Kremlin for potentially manipulating the elections in a way that combines it with Trump’s claims that the election is rigged. This is a powerful combination. There is a significant portion of Trump’s supporters who will believe his claim of a rigged election, and in conjunction with the belief that Russia is messing with the election it’s easy to see how a voter could become disillusioned. Neither the Democrats nor the Republicans want fewer voters to turn out, and (almost) all of those on both sides want the peaceful transition of power after the election as has always occurred before. Strong messaging from the Administration and others in mainstream news media is important to restore voters’ confidence both in the election itself and in the manner in which people vote.

Unfortunately, it seems that this desire is being accidentally countered by some in the security community. In very odd timing, Cylance decided to release a press release on vulnerabilities in voting machines on the same day, unbeknownst to them, as the NBC article. The press release stated that the intent was to encourage mitigation of the vulnerabilities, but with 4 days until the election, as of the article’s release, that simply will not be possible. The move is likely very well intended but unlikely to give voters much confidence in the manner in which they vote. I’ll avoid a tangent here but it’s worth mentioning the role security companies can play in larger political discussions.

The Leak is Bad OPSEC

I will not spend as much time on this claim as I did the previous one, but it is worth noting the reaction that releasing this type of information is bad operational security. Operational security is often very important to ensure that government operations can be coordinated effectively without the adversary having the advance warning required to defend against the operation. However, in this case the intention of the leak is likely much more around deterrence or voter confidence, and therefore the operation itself is not the point. Keeping the operation secret would not have helped either potential goal. More importantly, compromising information systems is not something that has ever been seen as insurmountably difficult. For the U.S. government to reveal that it has compromised Russian systems does not magically make them more secure now. Russian defense personnel do not have anything more to go off of than before in terms of searching for the compromise, they likely already assumed they were compromised, and looking for a threat and cleaning it up across multiple critical infrastructure industries and networks would take more than 4 days even if they had robust technical indicators of compromise and insight (which the leak did not give them). The interesting part of the disclosure is not the OPSEC but the precedent it sets, which I’ll discuss in the next section.

The Compromises are an Act of War

Acts of war are governed by Article 2(4) of the United Nations Charter, which addresses armed conflict. The unofficial rules regarding war in cyberspace are contained in the Tallinn Manual. In neither of these documents is the positioning of capabilities to do future damage considered an act of war. More importantly, the NBC article notes that the “cyber weapons” have not been deployed yet: “The cyber weapons would only be deployed in the unlikely event the U.S. was attacked in a significant way, officials say.” Therefore, what is being discussed is cyber operations that have gained access to Russian critical infrastructure networks but have not positioned “weapons” to do damage yet. Intrusions into networks have never been seen as an act of war by any of the countries involved in such operations. So what’s interesting about this? The claim by officials that the U.S. had compromised Russian critical infrastructure networks, including the electric grid, years ago.

For years U.S. intelligence officials have asserted that Russian, Chinese, Iranian, and at times North Korean government operators have been “probing” U.S. critical infrastructure such as the power grid. The pre-positioning of malware in the power grid has long been rumored and has been a key concern of senior officials. The acknowledgment, in a possibly intended leak, that the U.S. has been doing the same for years is significant. It should come as no surprise to anyone in the information security community, but as messaging from senior officials it does set a precedent internationally (albeit a small one given that this is a leak and not a direct statement from the government). Now, if capabilities or intrusions were found in the U.S. power grid by the government and made public, the offending countries could claim they were only doing the same as the U.S. government. In my personal experience, there is credibility to claims that other countries have been compromising the power grid for years, so I would argue against the “U.S. started it” claim that is sure to follow. The assumption is that governments try to compromise the power grid ahead of time so that when needed they can damage it for military or political purposes. But the specific compromises that have occurred have not been communicated publicly by senior officials, nor have they been announced with attribution towards Russia or China. The only time a similar specific case was discussed with attribution was against Iran for compromising a small dam in New York, and the action was heavily criticized by officials and met with a Department of Justice indictment. Senior officials’ acknowledgment of U.S. cyber operations compromising foreign power grids for the purpose of carrying out attacks if needed is unique and a message likely heard loudly even if later denied.
It would be difficult to argue that the leak will embolden adversaries to do this type of activity if they weren’t already doing it, but it does in some ways make the operations more legitimate. Claiming responsibility for such compromises while indicting countries for doing the same definitely makes the U.S. look hypocritical regardless of how it’s rationalized.

Parting Thoughts

My overall thought is that this information was a controlled leak designed to help voters feel more confident in terms of both going to cast their ballots and in the overall outcome. Some level of deterrence was likely a side effect that the Administration sought. But no, this was not simply a stupid move nor was it bad OPSEC or an act of war. I also doubt it is simply a bluff. However, there is some precedent set and pre-positioning access to critical infrastructures around the world just became a little more legitimate.

One thing that struck me as new in the article though was the claim that the U.S. military used cyber attacks to turn out the lights temporarily in Baghdad during the 2003 Iraq invasion. Considering the officials interviewed for the story and the nature of the (again, possibly) controlled leak, that is a new claim from senior government officials. There was an old rumor that Bush had that option on the table when invading Iraq, but the rumor was that the attack was cancelled for fear of the collateral damage of taking down a power grid. One can never be sure how long “temporary” might be when damaging such infrastructure. The claim in the article that the attack actually went forward would make it the first cyber attack on a power grid that led to outages, not the Ukrainian attack of 2015 (claims of a Brazilian outage years earlier were never proven and seem false from available information). However, the claim is counter to reports at the time that power outages did not occur during the initial hours of the invasion. Power outages were reported in Iraq, but after the end of active combat operations, and looters were blamed. If a cyber attack in Iraq ever made sense militarily it would not have made as much sense after the initial invasion.

I’ve emailed the reporter of the story asking what the source of that claim was and I will update the blog if I get an answer. It is possible the officials stated this to the reporters but misspoke. In my time in the government it was not a rare event for senior officials to confuse details of operations or hear myths outside of the workplace and assume them to be true. Hopefully, I can find out more as that is a historically significant claim. Based on what is known currently I am skeptical that outages following the initial Iraq invasion in 2003 were due to a cyber attack.

Common Analyst Mistakes and Claims of Energy Company Targeting Malware

July 13, 2016

A new blog post by SentinelOne made an interesting claim recently regarding a “sophisticated malware campaign specifically targeting at least one European energy company.”  More extraordinary though was the claim by the company that this find might indicate something much more serious: “which could either work to extract data or insert the malware to potentially shut down an energy grid.” While that is a major analytical leap (we’ll come back to this), the next thing to occur was fairly predictable: media firms spinning up about a potential nation-state cyber attack on power grids.

I have often critiqued news organizations for their coverage of ICS/SCADA security when there was a lack of understanding of the infrastructure and its threats, but this sample of hype originated from SentinelOne’s bold claims and not the media organizations (although I would have liked to see the journalists validate their stories more). News headlines ranged from “Researchers Found a Hacking Tool that Targets Energy Grids on the Dark Web” to EWeek’s “Furtim’s Parent, Stuxnet-like Malware, Aimed at Energy Firms.” It’s always interesting to see how long it takes for an organization to compare malware to Stuxnet. This one seems to have won the race in terms of “time-to-Stuxnet”, but the worst headline was probably The Register’s with “SCADA malware caught infecting European energy company: Nation-state fingered”. No, this is not SCADA malware and no nation-states have been fingered (phrasing?).

The malware is actually not new and had been detected before the company’s blog post. The specific sample SentinelOne linked to, which they claim to have found, was first submitted to VirusTotal by an organization in Canada on April 21st, 2016. A similar sample was identified and posted on the forum KernelMode.info on April 25th, 2016 (credit to John Franolich for bringing it to my attention). On May 23rd, 2016 a KernelMode forum user posted some great analysis of the malware on their blog. The KernelMode users and blogger identified that one of the malware author’s command and control servers was misconfigured, revealing a distinct naming convention in its directories that very clearly seemed to correlate to infected targets. In total, over 15,000 infected hosts around the world had communicated with this command and control server. This puts a completely different perspective on the malware that SentinelOne claimed was specifically targeting an energy company: it is most certainly not ICS/SCADA or energy company specific. It’s possible energy companies are a target, but so far there’s no proof of that provided.
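The misconfigured command and control server is worth dwelling on, because an open directory listing is often all an analyst needs to scope a campaign. Below is a minimal sketch of that kind of victim enumeration; the Apache-style index format and the eight-character hex naming convention are my own illustrative assumptions, since the real convention was not published:

```python
import re

def extract_victim_dirs(index_html: str) -> set[str]:
    """Pull directory names out of an Apache-style open directory index.

    The C2 server described above exposed one directory per infected
    host; the exact naming convention is not public, so the pattern
    below (an eight-character hex ID) is purely hypothetical.
    """
    # Links in a default Apache index look like: <a href="NAME/">NAME/</a>
    dirs = re.findall(r'<a href="([^"?/]+)/">', index_html)
    # Keep only entries matching the assumed victim-ID convention
    return {d for d in dirs if re.fullmatch(r"[0-9a-f]{8}", d)}

# Toy index page standing in for the misconfigured C2 listing
sample_index = """
<a href="?C=N;O=D">Name</a>
<a href="deadbeef/">deadbeef/</a>
<a href="cafef00d/">cafef00d/</a>
<a href="logs/">logs/</a>
"""
victims = extract_victim_dirs(sample_index)
print(len(victims))  # each unique directory roughly maps to one infected host
```

It was this kind of enumeration across the exposed directories that reframed the campaign from “one sample at one energy company” to thousands of infections worldwide.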

I do not have access to the dataset that SentinelOne has so I cannot and will not critique them on all of their claims. However, I do find a lot of the details they have presented odd and I also do not understand their claims that they “validated this malware campaign against SentinelOne [their product] and confirmed the steps outlined below [the malware analysis they showed in their blog] were detected by our Dynamic Behavior Tracking (DBT) engine.” I’m all for vendors showcasing where their products add value but I’m not sure how their product fits into something that was submitted to VirusTotal and a user forum months before their blog post. Either way, let’s focus on the learning opportunities here to help educate folks on potential mistakes to avoid.

Common Analyst Mistake: Malware Uniqueness

A common analyst mistake is to look at a dataset and believe that malware that is unique in that dataset is actually unique. In this scenario, it is entirely possible that, with no ill intention whatsoever, SentinelOne identified a sample of the malware independent of the VirusTotal and user forum submissions. Looking at this sample, and not having seen it before, the analysts at the company may have assumed that the malware was unique and thus warranted their statement that this campaign was specifically targeting an energy company. The problem is, as analysts we always work off of incomplete datasets. All intelligence analysis operates from the assumption that there is some data missing or some unknowns that may change a hypothesis later on. This is one reason you will often find intelligence professionals give assessments (usually high, medium, or low confidence) rather than making definitive statements. It is important to realize the limits of our datasets and information by looking to open source datasets (such as searching on Google to find the previous KernelMode forum post in this scenario) or establishing trust relationships with peers and organizations to share threat information. In this scenario the malware was not unique, and determining that there were at least 15,000 victims in the campaign would cast doubt on the idea that a specific energy company was the target. Simply put, more data and information was needed.
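As a concrete illustration, checking a sample’s first-submission date against an open dataset is one cheap way to test a uniqueness assumption before publishing. The sketch below assumes the shape of a VirusTotal v3 file-lookup response (the attribute name is taken from that API, but treat it as an assumption) and runs against a canned response rather than a live query:

```python
from datetime import datetime, timezone

def seen_before(report: dict, our_discovery: datetime) -> bool:
    """Return True if the sample predates our own 'discovery'.

    `report` is assumed to be the JSON body of a VirusTotal v3 file
    lookup (GET /api/v3/files/{sha256}); the attribute name below
    follows that API but should be verified against current docs.
    """
    attrs = report["data"]["attributes"]
    # first_submission_date is a Unix timestamp of the earliest upload
    first_seen = datetime.fromtimestamp(attrs["first_submission_date"], tz=timezone.utc)
    return first_seen < our_discovery

# Canned response: a sample first submitted April 21st, 2016
canned = {"data": {"attributes": {"first_submission_date": 1461196800}}}
print(seen_before(canned, datetime(2016, 7, 12, tzinfo=timezone.utc)))  # True
```

A first-submission date months before your own “discovery” is a strong hint that the sample, and possibly the whole campaign, is not unique to your dataset.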

Common Analyst Mistake: Assuming Adversary Intent

As analysts we often get familiar with adversary campaigns and capabilities to an almost intimate level, knowing details ranging from behavioral TTPs to the way that adversaries run their operations. But one thing we as analysts must be careful of is assuming an adversary’s intent. Code, indicators, TTPs, capabilities, etc. can reveal a lot. They can reveal what an adversary may be capable of doing and they should reveal the potential impact to a targeted organization. It is far more difficult though to determine what an adversary wishes to do. If an adversary crashes a server, an analyst may believe the malicious actor wanted to deny service when in fact the actor just messed up. In this scenario the SentinelOne post stopped short of claiming to know what the actors were trying to do (I’ll get to the power grid claims in a following section), but the claim that the adversary specifically targeted the European energy company is not supported anywhere in their analysis. They do a great job of showing malware analysis but do not offer any details around the target nor how the malware was delivered. Sometimes malware infects networks that are not even the adversary’s target. Assuming the intent of the adversary to be inside specific networks or to take specific actions is a risky move, and even worse with little to no evidence.

Common Analyst Mistake: Assuming “Advanced” Means “Nation-State”

It is natural to look at something we have not seen before in terms of tradecraft and tools and assume it is “advanced.” It’s a perspective issue based on what the analyst has seen before. It can lead to analysts assuming that something particularly cool must be so advanced that it’s a nation-state espionage operation. In this scenario, the SentinelOne blog authors make that claim. Confusingly though, they do not seem to have even found the malware on the energy company’s network they referenced. Instead, they claimed to have found the malware on the “dark web”. This means that there would not have been accompanying incident response data or security operations data to support a full understanding of this intrusion against the target, if we assume the company was a target. There are non-nation-states that run operations against organizations; Hacking Team was a perfect example of a hackers-for-hire organization that ran very well-funded operations. SentinelOne presents some interesting data, and along with other data sets this could reveal a larger campaign or even potentially a nation-state operation, but nothing presented so far supports that conclusion. A single intrusion does not make a campaign, and espionage-type activity with “advanced” capabilities does not guarantee the actors work for a nation-state.

Common Analyst Mistake: Extending Expertise

When analysts become experts on their team in a given area it is common for folks to look to them as experts in a number of other areas as well. As analysts it’s useful to not only continually develop our professional skills but to challenge ourselves to learn the limits of our expertise. This can be very difficult when others look to us for advice on any given subject. But being the smartest person in the room on a given subject does not mean that we are experts on it or even have a clue of what we’re talking about. In this scenario, I have no doubt that the SentinelOne blog authors are very qualified in malware analysis. I do however severely question if they have any experience at all with industrial and energy networks. The claim that the malware could be used to “shut down an energy grid” shows a complete lack of understanding of energy infrastructure as well as a major analytical leap based on a very limited data set that is quite frankly inexcusable. I do not mean to be harsh, but this is hype at its finest. At the end of their blog the authors note that if anyone in the energy sector would like to learn more that they can contact the blog authors directly. If anyone decides to take them up on the offer, please do not assume any expertise in that area, be critical in your questions, and realize that this blog post reads like a marketing pitch.

Closing Thoughts

My goal in this blog post was not to critique SentinelOne’s analysis too much, although to be honest I am a bit stunned by the opening statement regarding energy grids. Instead, it was to take an opportunity to identify some common analyst mistakes that we all can make. It is always useful to identify reports like these and without malice to tear apart the analysis presented to identify knowledge gaps, assumptions, biases, and analyst mistakes. Going through this process can help make you a better analyst. In fairness though, the only reason I know a lot about common analyst mistakes is because I’ve made a lot of rookie mistakes at one point or another in my career. We all do. The trick is usually to try not to make a public spectacle out of it.

Hype, Logical Fallacies, and SCADA Cyber Attacks

May 25, 2016

For a few years now I’ve spent some energy calling out hyped up stories of threats to critical infrastructure and dispelling false stories about cyber attacks. There have been a lot of reasons for this, including trying to make sure that the investments we make in the community go against real threats and not fake ones. This helps ensure we identify the best solutions for our problems. One of the chief reasons though is that as an educator, both as an Adjunct Lecturer in the graduate program at Utica College and as a Certified Instructor at the SANS Institute, I have found the effects of false stories to be far reaching. Ideally, most hype will never make it into serious policy or security debates (unfortunately some does). But it does miseducate many individuals entering this field. It hampers their learning when their goals are often just to grow themselves and help others, and I take offense to that. In this blog I want to focus on a new article on the Huffington Post blog titled “The Growing Threat of Cyber-Attacks on Critical Infrastructure” by Daniel Wagner, CEO of Country Risk Solutions. I don’t want to focus on deriding the story though; instead I’ll use it to highlight a number of informal logical fallacies. Being critical of information presented as fact without supporting evidence is an essential skill for anyone in this field. Using case studies such as this is exceptionally important to help educate on what to avoid.

Misinformation

Mr. Wagner’s article starts off simply enough with the premise that cyber attacks are often unreported or under-reported, leading the public to not fully appreciate the scope of the threat. I believe this to be very true and a keen observation. However, the article then uses a series of case studies, each with factual errors as well as conjecture stated as fact. Before examining the fallacies, let’s look at one of the first claims, which pertains to the cyber attack on the Ukrainian power grid in December of 2015:

“It now seems clear, given the degree of sophistication of the intrusion, that the attackers could have rendered the system permanently inoperable.”

It is true that the attackers showed sophistication in their coordination and ability to carry out a well-planned operation. A full report on the attacker methodology can be found here. However, there is no proof that the attackers could have rendered that portion of the power grid permanently inoperable. In the two cases where an intentional cyber attack caused physical damage to the systems involved, the German steel works attack and the Stuxnet attack on Natanz, both systems were recoverable. This type of physical damage is definitely concerning, but the attackers did not display the sophistication needed to achieve that type of attack, and even if they had, there is no evidence to show that the system would be permanently inoperable. It is an improbable scenario and would need serious proof to support the claim.

Informal Logical Fallacies

The next claim though is the most egregious and contains a number of informal logical fallacies that we can use as educational material.

“The Ukraine example was hardly the first cyber-attack on a SCADA system. Perhaps the best known previous example occurred in 2003, though at the time it was publicly attributed to a downed power line, rather than a cyber-attack (the U.S. government had decided that the ‘public’ was not yet prepared to learn about such cyber-attacks). The Northeast (U.S.) blackout that year caused 11 deaths and an estimated $6 billion in economic damages, having disrupted power over a wide area for at least two days. Never before (or since) had a ‘downed power line’ apparently resulted in such a devastating impact.”

This claim led E&E News reporter Blake Sobczak to call out the article on Twitter, which brought it to my attention. I questioned the author (more on that below) but first let’s dissect this claim as there are multiple fallacies here.

First, the author claims that the 2003 blackout was caused by a cyber attack. This is contrary to what is known currently about the outage and is contrary to the official findings of the investigators, which may be read here. What Daniel Wagner has done here is a great example of onus probandi, also known as the “burden of proof” fallacy. The claim being made is most certainly not common knowledge and is contrary to what is known about the event, so the claimer should provide proof. Yet the author does not, which puts the burden of finding proof on the reader, and more specifically on anyone who would disagree with the claim, including the authors of the official investigation report.

Second, Daniel Wagner states that the U.S. government knew the truth of the attack and decided that the public was not ready to learn about such attacks. He states this as fact, again without proof, but there’s another type of fallacy that can apply here called the historian’s fallacy. In essence, Mr. Wagner obviously believes that a cyber attack was responsible for the 2003 blackouts. Therefore, it is absurd to him that the government would not also know, and therefore the only reasonable conclusion is that they hid it from the public. Even if Mr. Wagner were correct in his assessment, which he is not, he is applying his perspective and understanding today to the decision makers of the past. Or more simply stated, he is using what information he believes he has now to judge the government’s decision, which was based on information they likely did not have at the time.

Third, the next claim is a type of red herring fallacy known as the straw man fallacy, where an argument is misrepresented to make it easier to argue against. Mr. Wagner puts in quotes that a downed power line was responsible for the outage and notes that a downed line has never been the reason for such an impact before or since. The findings of the investigation into the blackouts did not conclude that the outages occurred simply due to a downed power line though. The investigators put forth numerous findings which fell into four broad categories: inadequate system understanding, inadequate situational awareness, inadequate tree trimming, and inadequate diagnostic support amongst interconnected partners. Although trees were technically involved in one element, it was a single variable in a complex scenario and the mismanagement of a difficult situation. In addition, the “downed power lines” mentioned were high energy transmission lines far more important than implied in the argument.

Mr. Wagner went on to use some other fallacies, such as the informal fallacy of false authority when he cited, incorrectly by the way, Dell’s 2015 Annual Security Report. He cited the report to state that cyber attacks against supervisory control and data acquisition (SCADA) systems doubled to more than 160,000 attacks in 2014. When this statistic came out it was immediately questioned. Although Dell is a good company with many areas of expertise, its expertise and insight into SCADA networks was called into question. Just because an authority is an expert in one field such as IT security does not mean it is an expert in a different field such as SCADA security. There have only been a handful of known cyber attacks against critical infrastructure. The rest of the cases are often mislabeled as cyber attacks and number in the hundreds or thousands, not hundreds of thousands. Examples of realistic metrics are provided by more authoritative sources such as the ICS-CERT here.

Beyond his article though there was an interesting exchange on Twitter which I will summarize below.


In the exchange we can see that Mr. Wagner makes the argument “what else could it have been? Seriously”. This is simultaneously a burden of proof fallacy, requiring Blake or me to provide evidence disproving his theory, as well as an argument from personal incredulity. An argument from personal incredulity is a type of informal fallacy where a person cannot imagine how a scenario or statement could be true and therefore believes it must be false. Mr. Wagner took my request for proof of his claim as absurd because he believed that there was no other rational explanation for the blackouts than a cyber attack.

I would link to the tweets directly but after my last question requesting proof Mr. Wagner blocked Blake and me.

Conclusion

Daniel Wagner is not the only person to write using informal fallacies. We all do it. The trick is to identify them and try to avoid them. I did not feel my request for proof ended up being a fruitful exchange with the author, but that does not make Mr. Wagner a bad person. Everyone has bad days. It’s also entirely his right not to continue our discussion. The most important thing here though is to understand that there are a lot of baseless claims that make it into mainstream media and misinform the discussion on topics such as SCADA and critical infrastructure security. Unsurprisingly, they often come from individuals without any experience in the field they are writing about. It is important to try to identify these claims and learn from them. One effective method is to look for fallacies and inconsistencies. Of course, always be careful not to be so focused on identifying fallacies that you dismiss the claim too hastily. That would be a great example of an argument from fallacy, also known as the fallacy fallacy, where you analyze an argument and, because it contains a fallacy, conclude it must be false. Mr. Wagner’s claims are not false because of how they were presented. The claims were not worth considering because of the lack of evidence; the fallacies just helped draw attention to that.

Context for the Claim of a Cyber Attack on the Israeli Electric Grid

January 26, 2016

This blog was first posted on the SANS ICS blog here.

 

Dr. Yuval Steinitz, the Minister of National Infrastructure, Energy, and Water Resources, announced today at the CyberTech Conference in Tel Aviv that a “severe cyber attack” was ongoing against the Israel National Electric Authority. His statements were delivered in a closing session at the conference and noted that a number of computers at the Israeli electricity authorities had been taken offline the previous day to counter the incident.

There are few details that have been offered and thus it is far too early for any detailed analysis. However, this blog post attempts to add some clarity to the situation with context in how this type of behavior has been observed in the past.

First, Dr. Steinitz mentioned that computers had been taken offline. The choice by the defenders to take systems offline indicates a normal procedure in terms of incident response and malware containment. The intention of the incident responders cannot be known at this time, but this activity is consistent with standard procedures for cleaning malware off of infected systems and attempting to contain an infection so that it cannot spread to other systems. Taking systems offline is not preferable, but the fact that systems were removed from the network does not necessarily make the incident more severe. On the contrary, this indicates that incident responders were able to respond early enough with planned procedures to counter the incident prior to an impact.

Second, there have so far been no outages reported or any such impact of the “attack” quantified. It appears, from what has been reported so far, that the use of the term “cyber attack” here is very liberal. Malware infections in industrial control system (ICS) networks are not uncommon. Many of these environments use traditional information technology systems such as Windows operating systems to host applications such as human machine interfaces (HMIs) and data historians. These types of systems are as vulnerable, if not more so, than traditional information technology systems, and malware infections are not novel. With regards to historical case studies it is far more common for incidental malware to lead to system failures than targeted attacks. For example, the Slammer malware reportedly caused slowdowns in the Davis-Besse nuclear power plant’s networks and crashed a utility’s supervisory control and data acquisition (SCADA) network in 2003. However, in terms of targeted/intentional intrusions leading to outages we only have three validated public case studies: Stuxnet, the German steelworks facility, and the Ukrainian power grid. It is these targeted intrusions where an outage occurred that could be considered attacks. Oftentimes people unintentionally abuse the phrase “cyber attack” when it is more appropriate to classify the activity as adversary intrusions, compromises, or espionage activity. To understand what constitutes an actual attack it is helpful to read the ICS Cyber Kill Chain.

Third, there has been an increased focus on cyber security in Israel, both as it relates to the cyber security of national infrastructure and in the technology companies that are making Israel an enticing location for venture capital funding. In January, Israeli Prime Minister Benjamin Netanyahu gave a presentation to the World Economic Forum where the center of his discussion was cyber security. This was followed by a February announcement that the Cabinet in Israel approved a plan for a comprehensive national cyber defense authority. With the increased focus on cyber security it is entirely possible that Israel has taken a proactive approach to looking through its infrastructure networks to identify threats. In the course of this action it may have found malware that may be targeted or incidental in nature. In either case, from what is being reported right now, it appears unlikely that this is an actual attack and more likely that it is the discovery of malware. However, it is important to watch for any developments in what is being reported.

Israel faces threats that it must consider on a day-to-day basis. Critical infrastructure is constantly the focus of threats as well, although there is a lack of validated case studies to uncover the type of activity much of the community believes is occurring in large quantities. However, reports of cyber attacks must be met with caution and demands for proof due to the technical and cultural challenges facing the ICS security community. Simply put, there is neither the quantity of expertise nor the type of data needed to validate and assess all of the true attacks on infrastructure while appropriately classifying lesser events. Given these barriers in the ICS community, claims of attacks should be watched diligently and taken seriously, but approached with caution and investigated fully.

Security Awareness and ICS Cyber Attacks: Telling the Right Story

October 7, 2015

This was first posted on the SANS ICS blog here.

 

A lack of security awareness, and of the culture that surrounds security, is a widely understood problem in the cyber security community. In the ICS community this problem directly affects operations and our understanding of the scope of the threats we face. A recent report by Chatham House titled “Cyber Security at Civil Nuclear Facilities” shined a light on these issues in the nuclear industry through an 18-month-long project.

The report highlights a number of prevailing problems in the nuclear sector that make security more difficult; the findings do not represent all nuclear sector entities but look at the sector as a whole. Friction between IT and OT personnel, the prevailing myth that an air gap is an effective standalone security solution, and a lack of understanding of the problem are all cited as major findings of the research group.

The group recommends a number of actions, and these can be mapped along the Sliding Scale of Cyber Security. A big focus is placed on designing systems with security built in, which falls under the Architecture phase of the scale. Another focus is on leveraging whitelisting, intrusion detection systems, and other Passive Defense mechanisms instead of relying on an air gap alone. Lastly, one of the most significant recommendations is to get more personnel trained in cyber security practices (SANS offers ICS410 and ICS515 to address these types of concerns) and to take a proactive rather than reactive approach to finding threats in the environment. This recommendation maps to the Active Defense component of the scale, which focuses on empowering analysts and security personnel to hunt for and respond to threats.

One of the more interesting major recommendations put forth by the report was:
“The infrequency of cyber security incident disclosure at nuclear facilities makes it difficult to assess the true extent of the problem and may lead nuclear industry personnel to believe that there are few incidents. Moreover, limited collaboration with other industries or information-sharing means that the nuclear industry tends not to learn from other industries that are more advanced in this field.”

At SANS we have consistently observed this issue in the wider community and try to bring the community together with events such as the ICS Summit to help address the concern and promote community sharing. No single event or effort alone, though, can fix the problem. A lack of information sharing and incident disclosure has led to a false sense of security while also allowing fake or hyped-up stories in news media to become the representation of our industry to people both within the community and external to it.

This infrequency of cyber security incident disclosure can be observed in multiple places. As an example, a 2014 article by Inside Energy compiled incident reports made to the Department of Energy about electric grid outages and found that, over 15 years, 14 incidents were related to a cyber event. The earliest cyber attack was identified in 2003, but there was then a gap until 2011-2014, which accounted for the other 13 cases. It should be noted that a cyber attack was reportable as any type of unauthorized access to the system, including its hardware, software, and data.

We in the industry need better data so that we can more fully understand and categorize attacks along models such as the ICS Cyber Kill Chain and extract lessons learned. What is revealing about the Department of Energy data, though, is the lack of visibility into the ICS networked environment. In the data set there is a measured understanding of impact for physical attacks, fires, storms, etc., showing great visibility into the ICS as a whole; yet for every single cyber event the impact was labeled as either zero or unknown. That, in combination with no data for 2003-2011, is less representative of the number of events and more representative of missing data. It has become clear over the years that a significant number of ICS organizations do not have personnel who are trained and empowered to look into the network to find threats. This must change, and the findings must be shared, anonymously and appropriately, with the community if we are ever to scope the true threat and determine the appropriate resource investments and responses to address the issues.
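The gap in the data can be illustrated with a short sketch. Assuming a simplified, hypothetical export of OE-417-style outage reports (the field names and sample rows here are illustrative, not the actual Department of Energy schema), counting the cyber events whose impact is zero or unknown shows how little the reporting actually measures:

```python
import csv
import io

# Hypothetical, simplified outage-report export; fields are illustrative only.
SAMPLE = """year,event_type,customers_affected
2003,cyber,unknown
2011,cyber,0
2012,storm,120000
2013,cyber,unknown
2014,physical attack,4500
"""

def cyber_events_with_no_measured_impact(csv_text):
    """Return (total cyber events, cyber events with zero/unknown impact)."""
    total = unmeasured = 0
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["event_type"] != "cyber":
            continue
        total += 1
        if row["customers_affected"] in ("0", "unknown"):
            unmeasured += 1
    return total, unmeasured

total, unmeasured = cyber_events_with_no_measured_impact(SAMPLE)
print(f"{unmeasured} of {total} cyber events have zero/unknown impact")
```

In this toy data, as in the real reporting described above, every cyber row carries a zero or unknown impact, so the count says more about missing visibility than about the true scale of events.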

The ICS community has a unique opportunity to have our story told by our ICS owners, operators, and security personnel so that we understand and address the problem ourselves. Valuable compilations of data, such as Inside Energy’s use of the Department of Energy reports as well as the Chatham House report, reinforce this need. Without involvement from the community, the ICS security story will be told by others who may not have the experience needed to draw the right conclusions and offer helpful solutions. The need for cyber security will influence change in the ICS community through national-level policies, regulations, vendor practices, and culture shifts – it is imperative that the right people with real data are writing the story that will drive those changes.