Marketing ICS Vulnerabilities and POC Malware – You’re Doing it Wrong

April 30, 2017

There have been two recent cases of industrial control system (ICS) security firms identifying vulnerabilities and then creating proof-of-concept (POC) malware for those vulnerabilities. This has bothered me, and I want to explore the topic in this blog post. I do not pretend there is a right or wrong answer here, but I recognize that by even writing the post I am passing judgment on the actions, and I'm OK with that. I don't agree with the actions, and in the interest of a more public discussion my rationale is below.

Background:

At the beginning of April 2017, CRITIFENCE, an ICS security firm, published an article on Security Affairs titled "ClearEnergy ransomware aim to destroy process automation logics in critical infrastructure, SCADA, and industrial control systems." There's a good overview of the story here that details how it ended up being a media stunt to highlight vulnerabilities the company found. The TL;DR version of the story: the firm wanted to highlight vulnerabilities they had found in some Schneider Electric equipment, which they dubbed ClearEnergy, so they built their own POC malware for them that they also dubbed ClearEnergy. But they published an article on it leaving out the fact that it was POC malware. In other words, they led people to believe that this was in-the-wild (real and impacting organizations) malware. I don't feel there was any malice by the company; as soon as the article was published I reached out to the CTO of CRITIFENCE, and he was very polite and responded that he'd edit the article quickly. I wanted to write a blog calling out the behavior and what I didn't like about it as a learning moment for everyone, but the CTO was so professional and quick in his response that I decided against it. However, after seeing a second instance of this type of activity, I decided a blog post was in order for a larger community discussion.

On April 27th, 2017, SecurityWeek published an article titled "New SCADA Flaws Allow Ransomware, Other Attacks" based on a presentation by ICS security firm Applied Risk at SecurityWeek's 2017 Singapore ICS Cyber Security Conference. The talk, and the article, highlighted ICS ransomware the firm dubbed "Scythe" that targets "SCADA devices." Applied Risk noted that the attack can take advantage of a firmware validation bypass vulnerability and lock out folks' ability to update to new firmware. The firm did acknowledge, in both the presentation and the article, that this too was POC malware.

Figure: Image from Applied Risk's "Scythe" POC malware

Why None of This Is Good (In My Opinion):

In my opinion, both of these firms have been irresponsible in a few ways.

First, CRITIFENCE obviously messed up by not telling anyone that ClearEnergy was POC malware. In an effort to promote their discovery of vulnerabilities they rushed to write and publish an article, and that absolutely contributed to hype and fear. Hype around these types of issues ultimately leads to the ICS community not listening to or trusting the security community (honestly, with good reason). However, what CRITIFENCE did do that I liked (besides being responsive, which is a major plus) was work through a vulnerability disclosure process that led to proper discussion by the vendor as well as an advisory through the U.S. ICS-CERT. In contrast, Applied Risk did not do that, so far as I can tell. I do not know everything Applied Risk is doing about the vulnerabilities, but they said they contacted the vendors, and two of the vendors (according to the SecurityWeek article) acknowledged that the vulnerability is important but difficult to fix. The difference with the Applied Risk vulnerabilities is that the community is left unaware of what devices are impacted, the vendors haven't been able to address the issues yet, and there are no advisories to the larger community through any appropriate channel. Ultimately, this leaves the ICS community in a very bad spot.

Second, CRITIFENCE and Applied Risk are both making a marketing spectacle out of the vulnerabilities and POC malware. Now, this point is my opinion and not necessarily a larger community best practice, but I absolutely despise seeing folks name their vulnerabilities or their POC malware. It comes off as a pure marketing stunt. Vulnerabilities in ICS are not uncommon, and there's good research to be done. Sometimes the things the infosec community sees as vulnerabilities may have been designed that way on purpose to allow things like firmware updates and password resets for operators who needed access to sensitive equipment in time-sensitive scenarios. I'm not saying we can't do better, but it's not like the engineering community is stupid (far from it), and highlighting vulnerabilities as marketing stunts can often have unintended consequences, including vendors not wanting to work with researchers or disclose vulnerabilities. There's no incentive for ICS vendors to work with firms who are going to use issues in their products as marketing for the firm's security product.

Third, vulnerability disclosure can absolutely have the impact of adversaries learning how to attack devices in ways they did not know previously. I do not advocate for security through obscurity, but there is value in following a strict vulnerability disclosure policy even in normal IT environments; this has been a contentious issue for decades. In ICS environments, it can take upwards of 2-3 years for folks to get a patch and apply it after a vulnerability comes out. That is not due to ignorance of the issue or lack of concern for the problem but due to operations constraints in various industries. So in essence, the adversaries get informed about how to do something they previously didn't know about while system owners can't adequately address the issues. This makes vulnerability disclosure in the ICS community a very sensitive topic to handle with care. Yelling out to the world "this is the vulnerability and oh by the way here's exactly how you should leverage it and we even created some fake malware to highlight the value to you as an attacker and what you can gain" is just levels of ridiculousness. It's why you'll never see my firm Dragos, myself, or anyone on my team find and disclose new vulnerabilities in ICS devices to the public. If we ever find anything it'll only be worked through the appropriate channels and quietly distributed to the right people, not in media sites and conference presentations.

I'm not a huge fan of disclosing vulnerabilities at conferences and in media in general, but I do want to acknowledge that it can be done correctly, and I have seen a few firms (Talos, DigitalBond, and IOActive come to mind) do it very well. As an example, Eireann Leverett and Colin Cassidy found vulnerabilities in industrial Ethernet switches and worked closely with the vendors to address them. After working through a very intensive process they wanted to do a series of conference presentations to highlight the issues. They invited me to take part to show what could be done from a defense perspective. So I stayed out of the "here's the vulnerabilities" portion and instead focused on "these exist, so what can defenders do besides just patch." I took part in that research because the work was so important and Eireann and Colin were so professional in how they went about it. It was a thrill to use the entire process as a learning opportunity for the larger community. Highlighting vulnerabilities and creating POC malware for something that doesn't even have an advisory, or where the vendor hasn't made patches yet, just isn't appropriate.

Closing Thoughts:

There is a lot of research to be done into ICS and how to address the vulnerabilities these devices have. Vendors must get better at following best practices for developing new equipment, software, and networking protocols. And there are good case studies of what to do and how to carry yourself in the ICS security researcher community (Adam Crain and Chris Sistrunk's research into DNP3, and all the things it led to, is a fantastic example of doing things correctly to address serious issues). But the focus on turning vulnerabilities into marketing material, discussing vulnerabilities in media and at conferences before vendors have addressed them and before the community can get an advisory through proper channels, and creating/marketing POC malware to draw attention to your vulnerabilities is simply, in my opinion, irresponsible.

Try these practices instead:

  • Disclose vulnerabilities to the impacted vendors and work with them to address them
    • If they decide that they will not address the issue or do not see the problem, talk to impacted asset owners/operators to confirm that what you see as a vulnerability will actually introduce risk to the community. Use appropriate channels such as ICS-CERT to push the vendor, or develop defensive/detection signatures and bring them to the community's attention. Sometimes you're left without a lot of options, but make sure you've exhausted the good options first
  • After the advisory has been available (for some period you feel comfortable with), if you or your firm would like to highlight the vulnerabilities at a conference or in the media, that's your choice
    • I would encourage focusing the discussion on what people can do besides just patch, such as how to detect attacks that might leverage the vulnerabilities
  • Avoid naming your vulnerabilities; there's already a whole official process for cataloging vulnerabilities
  • (In my opinion) do not make POC malware showing adversaries what they can do and why they should do it (the argument “the adversaries already know” is wrong in most cases)
  • If you decide to make POC malware anyway, at least avoid naming and marketing it (it comes off as an extremely dirty marketing approach)
  • Avoid hyping up the impact (talking about power grids coming down and terrorist attacks in the same article is just a ridiculous attempt to elicit fear and attention)

In my experience, ICS vendors can be difficult to work with at times because they have other priorities too, but they care and want to do the right thing. If you are persistent you can move the community forward. But the vendors of the equipment are not the enemy, and they will absolutely blacklist researchers, firms, and entire groups of folks for doing things that are adverse to their business instead of working with them. Research is important, and if you want to go down the route of researching and disclosing vulnerabilities there's value there and proper ways to do it. If you're interested in vulnerability disclosure best practices in the larger community, check out Katie Moussouris, who is a leading authority on bug bounty and vulnerability disclosure programs. But please, stop naming your vulnerabilities, building marketing campaigns around them, and creating fake malware because you don't think you're getting enough attention already.

Common Analyst Mistakes and Claims of Energy Company Targeting Malware

July 13, 2016

A new blog post by SentinelOne made an interesting claim recently regarding a "sophisticated malware campaign specifically targeting at least one European energy company." More extraordinary, though, was the company's claim that this find might indicate something much more serious: "which could either work to extract data or insert the malware to potentially shut down an energy grid." That is a major analytical leap (we'll come back to it), and the next thing to occur was fairly predictable: media firms spinning up about a potential nation-state cyber attack on power grids.

I have often critiqued news organizations for their coverage of ICS/SCADA security when there was a lack of understanding of the infrastructure and its threats, but this sample of hype originated from SentinelOne's bold claims and not the media organizations (although I would have liked to see the journalists validate their stories more). News headlines ranged from "Researchers Found a Hacking Tool that Targets Energy Grids on the Dark Web" to EWeek's "Furtim's Parent, Stuxnet-like Malware, Aimed at Energy Firms." It's always interesting to see how long it takes for an organization to compare malware to Stuxnet. This one seems to have won the race in terms of "time-to-Stuxnet," but the worst headline was probably The Register's: "SCADA malware caught infecting European energy company: Nation-state fingered". No, this is not SCADA malware, and no nation-state has been fingered (phrasing?).

The malware is actually not new, though, and had been detected before the company's blog post. The specific sample SentinelOne linked to, which they claim to have found, was first submitted to VirusTotal by an organization in Canada on April 21st, 2016. Later, a similar sample was identified and posted on the forum KernelMode.info on April 25th, 2016 (credit to John Franolich for bringing it to my attention). On May 23rd, 2016, a KernelMode forum user posted some great analysis of the malware on their blog. The KernelMode users and blogger identified that one of the malware author's command and control servers was misconfigured, revealing a distinct naming convention in the directories that very clearly seemed to correlate to infected targets. In total, over 15,000 infected hosts around the world had communicated with this command and control server. This puts a completely different perspective on the malware that SentinelOne claimed was specifically targeting an energy company; it is most certainly not ICS/SCADA or energy company specific. It's possible energy companies are a target, but so far no proof of that has been provided.

I do not have access to the dataset that SentinelOne has, so I cannot and will not critique them on all of their claims. However, I do find a lot of the details they have presented odd, and I also do not understand their claim that they "validated this malware campaign against SentinelOne [their product] and confirmed the steps outlined below [the malware analysis they showed in their blog] were detected by our Dynamic Behavior Tracking (DBT) engine." I'm all for vendors showcasing where their products add value, but I'm not sure how their product fits into something that was submitted to VirusTotal and a user forum months before their blog post. Either way, let's focus on the learning opportunities here to help educate folks on potential mistakes to avoid.

Common Analyst Mistake: Malware Uniqueness

A common analyst mistake is to look at a dataset and believe that malware that is unique in that dataset is actually unique. In this scenario, it is entirely possible that, with no ill intention whatsoever, SentinelOne identified a sample of the malware independent of the VirusTotal and user forum submissions. Looking at this sample and not having seen it before, the analysts at the company may have assumed the malware was unique and that this warranted their statement that the campaign was specifically targeting an energy company. The problem is, as analysts we always work off of incomplete datasets. All intelligence analysis operates from the assumption that there is some data missing or some unknowns that may change a hypothesis later on. This is one reason you will often find intelligence professionals give assessments (usually high, medium, or low confidence) rather than making definitive statements. It is important to try to realize the limits of our datasets and information by looking to open source datasets (such as searching on Google to find the previous KernelMode forum post in this scenario) or establishing trust relationships with peers and organizations to share threat information. In this scenario the malware was not unique, and determining that there were at least 15,000 victims in this campaign would cast doubt on the claim that a specific energy company was the target. Simply put, more data and information were needed.
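
To make the point concrete, here is a minimal sketch of what checking an open source dataset can look like before declaring a sample unique. This is illustrative only: it assumes a VirusTotal API key and their v3 files endpoint, and the key and hash values are hypothetical placeholders.

    import datetime
    import requests

    VT_API_KEY = "YOUR_API_KEY"  # hypothetical placeholder for a real key
    SAMPLE_SHA256 = "sha256-of-the-sample-under-analysis"  # hypothetical placeholder

    # Ask VirusTotal whether the wider community has already seen this hash.
    resp = requests.get(
        "https://www.virustotal.com/api/v3/files/" + SAMPLE_SHA256,
        headers={"x-apikey": VT_API_KEY},
        timeout=30,
    )

    if resp.status_code == 404:
        # Not found in this repository -- still not proof the sample is
        # unique, only that this one dataset has no record of it.
        print("No prior submissions found in this repository.")
    else:
        resp.raise_for_status()
        attrs = resp.json()["data"]["attributes"]
        first_seen = datetime.datetime.utcfromtimestamp(attrs["first_submission_date"])
        print("First submitted: " + first_seen.strftime("%Y-%m-%d")
              + " (" + str(attrs.get("times_submitted", "?")) + " submissions total)")

Even a "not found" result here proves nothing about uniqueness; it only says one repository has no record. A lookup like this, or a simple search of forums such as KernelMode.info, would have surfaced the April 2016 submissions months before the blog post.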

Common Analyst Mistake: Assuming Adversary Intent

As analysts we often get familiar with adversary campaigns and capabilities at an almost intimate level, knowing details ranging from behavioral TTPs to the way adversaries run their operations. But one thing we as analysts must be careful of is assuming an adversary's intent. Code, indicators, TTPs, capabilities, etc. can reveal a lot. They can reveal what an adversary may be capable of doing, and they should reveal the potential impact to a targeted organization. It is far more difficult, though, to determine what an adversary wishes to do. If an adversary crashes a server, an analyst may believe the malicious actor wanted to deny service to it when in fact the actor just messed up. In this scenario the SentinelOne post stopped short of claiming to know what the actors were trying to do (I'll get to the power grid claims in a following section), but the claim that the adversary specifically targeted the European energy company is not supported anywhere in their analysis. They do a great job of showing malware analysis but do not offer any details about the target or how the malware was delivered. Sometimes malware infects networks that are not even the adversary's target. Assuming the intent of the adversary to be inside specific networks or to take specific actions is a risky move, and even worse with little to no evidence.

Common Analyst Mistake: Assuming “Advanced” Means “Nation-State”

It is natural to look at something we have not seen before in terms of tradecraft and tools and assume it is "advanced." It's a perspective issue based on what the analyst has seen before, and it can lead analysts to assume that something particularly cool must be so advanced that it's a nation-state espionage operation. In this scenario, the SentinelOne blog authors make that claim. Confusingly, though, they do not seem to have even found the malware on the energy company's network they referenced. Instead, they claimed to have found the malware on the "dark web." This means there would not have been accompanying incident response data or security operations data to support a full understanding of this intrusion against the target, if we assume the company was a target. There are non-nation-states that run operations against organizations; HackingTeam was a perfect example of a hackers-for-hire organization that ran very well-funded operations. SentinelOne presents some interesting data, and along with other datasets this could reveal a larger campaign or even potentially a nation-state operation, but nothing presented so far supports that conclusion right now. A single intrusion does not make a campaign, and espionage-type activity with "advanced" capabilities does not guarantee the actors work for a nation-state.

Common Analyst Mistake: Extending Expertise

When analysts become the experts on their team in a given area, it is common for folks to look to them as experts in a number of other areas as well. As analysts it's useful not only to continually develop our professional skills but to challenge ourselves to learn the limits of our expertise. This can be very difficult when others look to us for advice on any given subject. But being the smartest person in the room on a given subject does not mean that we are experts on it or even have a clue what we're talking about. In this scenario, I have no doubt that the SentinelOne blog authors are very qualified in malware analysis. I do, however, severely question whether they have any experience at all with industrial and energy networks. The claim that the malware could be used to "shut down an energy grid" shows a complete lack of understanding of energy infrastructure as well as a major analytical leap based on a very limited dataset that is quite frankly inexcusable. I do not mean to be harsh, but this is hype at its finest. At the end of their blog the authors note that anyone in the energy sector who would like to learn more can contact them directly. If anyone decides to take them up on the offer, please do not assume any expertise in that area, be critical in your questions, and realize that this blog post reads like a marketing pitch.

Closing Thoughts

My goal in this blog post was not to critique SentinelOne's analysis too much, although to be honest I am a bit stunned by the opening statement regarding energy grids. Instead, it was to take an opportunity to identify some common analyst mistakes that we all can make. It is always useful to identify reports like these and, without malice, tear apart the analysis presented to identify knowledge gaps, assumptions, biases, and analyst mistakes. Going through this process can help make you a better analyst. In fairness, though, the only reason I know a lot about common analyst mistakes is because I've made a lot of rookie mistakes at one point or another in my career. We all do. The trick is usually to try not to make a public spectacle out of it.

Hype, Logical Fallacies, and SCADA Cyber Attacks

May 25, 2016

For a few years now I've spent some energy calling out hyped-up stories of threats to critical infrastructure and dispelling false stories about cyber attacks. There have been a lot of reasons for this, including trying to make sure that the investments we make in the community go against real threats and not fake ones. This helps ensure we identify the best solutions for our problems. One of the chief reasons, though, is that as an educator, both as an Adjunct Lecturer in the graduate program at Utica College and as a Certified Instructor at the SANS Institute, I have found the effects of false stories to be far reaching. Ideally, most hype will never make it into serious policy or security debates (unfortunately some does). But it does miseducate many individuals entering this field. It hampers their learning when their goals are often just to grow themselves and help others, and I take offense to that. In this blog I want to focus on a new article on the Huffington Post blog titled "The Growing Threat of Cyber-Attacks on Critical Infrastructure" by Daniel Wagner, CEO of Country Risk Solutions. I don't want to focus on deriding the story, though; instead I'll use it to highlight a number of informal logical fallacies. Being critical of information presented as fact without supporting evidence is an essential skill for anyone in this field. Using case studies such as this one is exceptionally important to help educate on what to be careful to avoid.

Misinformation

Mr. Wagner's article starts off simply enough with the premise that cyber attacks are often unreported or under-reported, leading the public to not fully appreciate the scope of the threat. I believe this to be very true and a keen observation. However, the article then uses a series of case studies, each with factual errors as well as conjecture stated as fact. Before examining the fallacies, let's look at one of the first claims, which pertains to the cyber attack on the Ukrainian power grid in December of 2015:

“It now seems clear, given the degree of sophistication of the intrusion, that the attackers could have rendered the system permanently inoperable.”

It is true that the attackers showed sophistication in their coordination and ability to carry out a well-planned operation. A full report on the attacker methodology can be found here. However, there is no proof that the attackers could have rendered that portion of the power grid permanently inoperable. In the two cases where an intentional cyber attack caused physical damage to the systems involved, the German steel works attack and the Stuxnet attack on Natanz, both systems were recoverable. This type of physical damage is definitely concerning, but the attackers did not display the sophistication needed to achieve that type of attack, and even if they had, there is no evidence to show that the system would be permanently inoperable. It is an improbable scenario and would need serious proof to support that claim.

Informal Logical Fallacies

The next claim, though, is the most egregious and contains a number of informal logical fallacies that we can use as educational material.

“The Ukraine example was hardly the first cyber-attack on a SCADA system. Perhaps the best known previous example occurred in 2003, though at the time it was publicly attributed to a downed power line, rather than a cyber-attack (the U.S. government had decided that the ‘public’ was not yet prepared to learn about such cyber-attacks). The Northeast (U.S.) blackout that year caused 11 deaths and an estimated $6 billion in economic damages, having disrupted power over a wide area for at least two days. Never before (or since) had a ‘downed power line’ apparently resulted in such a devastating impact.”

This claim led EENews reporter Blake Sobczak to call out the article on Twitter, which brought it to my attention. I questioned the author (more on that below), but first let's dissect this claim, as there are multiple fallacies here.

First, the author claims that the 2003 blackout was caused by a cyber attack. This is contrary to what is currently known about the outage and contrary to the official findings of the investigators, which may be read here. What Daniel Wagner has done here is a great example of onus probandi, also known as the "burden of proof" fallacy. The claim being made is most certainly not common knowledge and is contrary to what is known about the event, so the claimer should provide proof. Yet the author does not, which puts the burden of finding proof on the reader, and more specifically on anyone who would disagree with the claim, including the authors of the official investigation report.

Second, Daniel Wagner states that the U.S. government knew the truth of the attack and decided that the public was not ready to learn about such attacks. He states this as fact, again without proof, but there's another type of fallacy that can apply here called the historian's fallacy. In essence, Mr. Wagner obviously believes that a cyber attack was responsible for the 2003 blackouts. Therefore, it is absurd to him that the government would not also know, and the only reasonable conclusion is that they hid it from the public. Even if Mr. Wagner were correct in his assessment, which he is not, he is applying his perspective and understanding today to the decision makers of the past. Or more simply stated, he is using the information he believes he has now to judge the government's decision, when the government likely did not have that information at the time.

Third, the next claim is a type of red herring fallacy known as the straw man fallacy, where an argument is misrepresented to make it easier to argue against. Mr. Wagner puts in quotes that a downed power line was responsible for the outage and notes that a downed line has never been the reason for such an impact before or since. The findings of the investigation into the blackouts did not conclude that the outages occurred simply due to a downed power line, though. The investigators put forth numerous findings which fell into four broad categories: inadequate system understanding, inadequate situational awareness, inadequate tree trimming, and inadequate diagnostic support amongst interconnected partners. Although trees were technically involved in one element, they were a single variable in a complex scenario and the mismanagement of a difficult situation. In addition, the "downed power lines" mentioned were high-energy transmission lines, far more important than the argument implies.

Mr. Wagner went on to use some other fallacies, such as the informal fallacy of false authority, when he cited (incorrectly, by the way) Dell's 2015 Annual Security Report. He cited the report to state that cyber attacks against supervisory control and data acquisition (SCADA) systems doubled to more than 160,000 attacks in 2014. When this statistic came out it was immediately questioned. Although Dell is a good company with many areas of expertise, its expertise and insight into SCADA networks was called into question. Just because an authority is an expert in one field, such as IT security, does not mean they are an expert in a different field, such as SCADA security. There have only been a handful of known cyber attacks against critical infrastructure. The rest of the cases are often mislabeled as cyber attacks and number in the hundreds or thousands, not hundreds of thousands. Examples of realistic metrics are provided by more authoritative sources such as the ICS-CERT here.

Beyond his article, though, there was an interesting exchange on Twitter, which I will paste below.


[Figure: screenshots of the Twitter exchange]

In the exchange we can see that Mr. Wagner makes the argument "what else could it have been? Seriously". This is simultaneously a burden of proof fallacy, requiring Blake or me to provide evidence disproving his theory, and an argument from personal incredulity. An argument from personal incredulity is a type of informal fallacy where a person cannot imagine how a scenario or statement could be true and therefore believes it must be false. Mr. Wagner took my request for proof of his claim as absurd because he believed there was no rational explanation for the blackouts other than a cyber attack.

I would link to the tweets directly but after my last question requesting proof Mr. Wagner blocked Blake and me.

Conclusion

Daniel Wagner is not the only person to write using informal fallacies. We all do it. The trick is to identify it and try to avoid it. I did not feel my request for proof ended up being a fruitful exchange with the author, but that does not make Mr. Wagner a bad person. Everyone has bad days. It's also entirely his right not to continue our discussion. The most important thing here, though, is to understand that there are a lot of baseless claims that make it into mainstream media and misinform the discussion on topics such as SCADA and critical infrastructure security. Unsurprisingly, they often come from individuals without any experience in the field they are writing about. It is important to try to identify these claims and learn from them. One effective method is to look for fallacies and inconsistencies. Of course, always be careful not to become so focused on identifying fallacies that you dismiss a claim too hastily. That would be a great example of an argument from fallacy, also known as the fallacy fallacy, where you analyze an argument and, because it contains a fallacy, conclude it must be false. Mr. Wagner's claims are not false because of how they were presented. The claims were not worth considering because of the lack of evidence; the fallacies just helped draw attention to that.

Article on German Steel Mill Attack “Inside Job” is Just Hype

September 10, 2015

On September 9, 2015, an article posted on Industrial Safety and Security Source ran with the headline "German Steel Mill Attack: An Inside Job". The first statement of the article reads "The attack on a German steel mill last year was most likely the result of an inside job" – this is blatantly false. There simply isn't a lot known about the attack, as Germany's BSI has never publicly released follow-on details about the incident after its 2014 report, nor has the victim. There is no way to assert any theory as being "most likely", and unfortunately this article used what has become a common practice in our industry: anonymous sources and other false reports.

As an example, the article tries to link Havex to the incident without any proof. In talking about the malware it states: "Trojanized software acted as the infection vector for the Havex virus, according to sources interviewed by ISSSource." The problem with this is that there was no reason to use anonymous sources here or act like this is a revelation. It is well known through the analysis of actual samples that Havex used trojanized versions of software. No sources are needed to reveal what a simple Google search turns up. But it makes the story seem more mysterious and tantalizing to use such sources.

The article then compares the steel mill attack to Shamoon, claiming that "former CIA officials" said that attack was also an insider threat. It directly follows this by stating that an anonymous congressional source suspected that an employee at the steel mill on the payroll of a foreign government was responsible. Following this is a quote from an individual stating "I have heard indirectly that the attack was attributed to the Russians." And of course after this there is talk of "cyber war", more anonymous CIA officials, and references to the cyber attack on the BTC Turkey pipeline in 2008, which the author of the article failed to realize has already been proven to be a false story. Reporters need to realize that incident responders with firsthand experience, engineers and personnel on site during the incident, or people directly involved are impressive sources; "senior intelligence officials" or "congressional sources" are far from impressive. They are some of the personnel most removed from what happened, especially in an ICS incident, and thus the information they have is very likely unreliable.

Statements in the article such as an anonymous CIA official saying "In all likelihood, it was the Havex RAT that was the culprit, but we lack proof of this" should have killed this article before it went to print. Anytime all the sources in an article are trading in gossip and rumors and admitting there is no proof, it's time to realize there's nothing to publish.

The reason I am writing this blog post, though, is that there are a large number of people in the industrial control system community who will read the article. In many cases it will be their only insight into this story. And not only will they be misinformed about the steel mill incident, but they'll also now believe the BTC pipeline story was real. There is a real threat in the industrial control system community: the threat of hyped-up and false news stories misleading the community.

Hype has a lasting effect on any community. In the ICS community it pushes folks to not want to share the data we need to understand the threat, for fear that it will be twisted into clickbait. It encourages organizations to invest resources against threats that aren't real instead of the numerous well-documented threats and issues in the community that do need investment. It forces non-security community members to tune out everyone in the security community once they realize they've been misinformed. The ISSSource article is not the first or worst case of hype seen in the community, but it is another addition to a long line of articles that threaten good discussion, measured responses, and the proper resource investments that could otherwise help ensure the safety, reliability, and security of infrastructure.


Barriers to Sharing Cyber Threat Information Within the Critical Infrastructure Community

June 28, 2015

This was first posted on the Council on Foreign Relations' blog Net Politics here.

 

The sharing of cyber threat data has garnered national-level attention, and improved information sharing has been the objective of several pieces of legislation and two executive orders. Threat sharing is an important tool that might help tilt the field away from adversaries, who currently take advantage of the fact that an attack on one organization can be effective against thousands of other organizations over extended periods of time. In the absence of information sharing, critical infrastructure operators find themselves fighting off adversaries individually instead of using the knowledge and experience that already exists in their community. Better threat information sharing is an important goal, but two barriers, one cultural and the other technical, continue to plague well-intentioned policy efforts. Failing to meaningfully address both barriers can lead to unnecessary hype and the misappropriation of resources. Continue Reading

Closing the Case on the Reported 2008 Russian Cyber Attack on the BTC Pipeline

June 27, 2015

This was first posted on the SANS ICS blog here.

 

An article released today in Sueddeutsche (the largest German national daily newspaper) by Hakan Tanriverdi revealed new information that further casts doubt on the report that a 2008 Russian cyber attack caused the Baku-Tbilisi-Ceyhan (BTC) pipeline explosion. The Sueddeutsche article can be found here.

Background:

The original report of the attack was released by Bloomberg on December 14th, 2014, with the title "Mysterious '08 Turkey Pipeline Blast Opened New Cyberwar". The article referenced an explosion that occurred in 2008 along the BTC pipeline that had previously been attributed to a physical attack by Kurdish extremists in the area. The Bloomberg report cited four anonymous individuals familiar with the incident and claimed the explosion was actually due to a cyber attack. The attack was attributed to Russia. Continue Reading