Hype, Logical Fallacies, and SCADA Cyber Attacks

May 25, 2016

For a few years now I’ve spent some energy calling out hyped-up stories of threats to critical infrastructure and dispelling false stories about cyber attacks. There have been a lot of reasons for this, including trying to make sure that the investments we make in the community go against real threats and not fake ones; this helps ensure we identify the best solutions for our problems. One of the chief reasons, though, is that as an educator, both as an Adjunct Lecturer in the graduate program at Utica College and as a Certified Instructor at the SANS Institute, I have found the effects of false stories to be far reaching. Ideally, most hype will never make it into serious policy or security debates (unfortunately some does). But it does miseducate many individuals entering this field. It hampers their learning when their goals are often just to grow themselves and help others – and I take offense to that. In this post I want to focus on a new article on the Huffington Post blog titled “The Growing Threat of Cyber-Attacks on Critical Infrastructure” by Daniel Wagner, CEO of Country Risk Solutions. I do not want to deride the story, though; instead I want to use it to highlight a number of informal logical fallacies. Being skeptical of information presented as fact without supporting evidence is a critical skill for anyone in this field, and case studies such as this one are exceptionally useful for teaching what to be careful to avoid.

Misinformation

Mr. Wagner’s article starts off simply enough with the premise that cyber attacks are often unreported or under-reported, leading the public to not fully appreciate the scope of the threat. I believe this to be very true and a keen observation. However, the article then uses a series of case studies, each containing factual errors as well as conjecture stated as fact. Before examining the fallacies, let’s look at one of the first claims, which pertains to the cyber attack on the Ukrainian power grid in December of 2015:

“It now seems clear, given the degree of sophistication of the intrusion, that the attackers could have rendered the system permanently inoperable.”

It is true that the attackers showed sophistication in their coordination and ability to carry out a well-planned operation. A full report on the attacker methodology can be found here. However, there is no proof that the attackers could have rendered that portion of the power grid permanently inoperable. In the two cases where an intentional cyber attack caused physical damage to the systems involved, the German steel works attack and the Stuxnet attack on Natanz, both systems were recoverable. This type of physical damage is definitely concerning, but the attackers did not display the sophistication needed to achieve that type of attack, and even if they had, there is no evidence to show that the system would have been permanently inoperable. It is an improbable scenario, and serious proof would be needed to support the claim.

Informal Logical Fallacies

The next claim, though, is the most egregious and contains a number of informal logical fallacies that we can use as educational material.

“The Ukraine example was hardly the first cyber-attack on a SCADA system. Perhaps the best known previous example occurred in 2003, though at the time it was publicly attributed to a downed power line, rather than a cyber-attack (the U.S. government had decided that the ‘public’ was not yet prepared to learn about such cyber-attacks). The Northeast (U.S.) blackout that year caused 11 deaths and an estimated $6 billion in economic damages, having disrupted power over a wide area for at least two days. Never before (or since) had a ‘downed power line’ apparently resulted in such a devastating impact.”

This claim led E&E News reporter Blake Sobczak to call out the article on Twitter, which brought it to my attention. I questioned the author (more on that below), but first let’s dissect this claim, as there are multiple fallacies here.

First, the author claims that the 2003 blackout was caused by a cyber attack. This is contrary to what is currently known about the outage and contrary to the official findings of the investigators, which may be read here. What Daniel Wagner has done here is a great example of onus probandi, also known as the “burden of proof” fallacy. The claim being made is most certainly not common knowledge and is contrary to what is known about the event, so the burden is on the claimant to provide proof. Yet the author does not, which shifts the burden of finding proof onto the reader, and more specifically onto anyone who would disagree with the claim, including the authors of the official investigation report.

Second, Daniel Wagner states that the U.S. government knew the truth of the attack and decided that the public was not ready to learn about such attacks. He states this as fact, again without proof, but there is another type of fallacy that applies here: the historian’s fallacy. In essence, Mr. Wagner obviously believes that a cyber attack was responsible for the 2003 blackout. It is therefore absurd to him that the government would not also know, so the only reasonable conclusion is that it hid the truth from the public. Even if Mr. Wagner were correct in his assessment, which he is not, he is applying his perspective and understanding today to the decision makers of the past. More simply stated, he is using the information he believes he has now to judge the government’s decision, which was based on information it likely did not have at the time.

Third, the next claim is a type of red herring fallacy known as the straw man fallacy, where an argument is misrepresented to make it easier to argue against. Mr. Wagner puts in quotes that a downed power line was responsible for the outage and notes that a downed line has never been the reason for such an impact before or since. The findings of the investigation into the blackout did not conclude that the outages occurred simply because of a downed power line, though. The investigators put forth numerous findings which fell into four broad categories: inadequate system understanding, inadequate situational awareness, inadequate tree trimming, and inadequate diagnostic support amongst interconnected partners. Although trees were technically involved in one element, they were a single variable in a complex scenario and in the mismanagement of a difficult situation. In addition, the “downed power lines” mentioned were high-voltage transmission lines, far more important than the argument implies.

Mr. Wagner went on to use some other fallacies, such as the informal fallacy of false authority, when he cited, incorrectly by the way, Dell’s 2015 Annual Security Report. He cited the report to state that cyber attacks against supervisory control and data acquisition (SCADA) systems doubled to more than 160,000 attacks in 2014. When this statistic came out it was immediately questioned. Although Dell is a good company with many areas of expertise, its insight into SCADA networks was called into question. Just because an authority is expert in one field, such as IT security, does not mean it is expert in a different field, such as SCADA security. There have only been a handful of known cyber attacks against critical infrastructure. The rest of the cases are often mislabeled as cyber attacks and number in the hundreds or thousands – not hundreds of thousands. Examples of realistic metrics are provided by more authoritative sources such as the ICS-CERT here.

Beyond his article, though, there was an interesting exchange on Twitter, which I will paste below.


[Tweet screenshots 1–3: the Twitter exchange between Mr. Wagner, Blake Sobczak, and me]

In the exchange we can see that Mr. Wagner makes the argument “what else could it have been? Seriously”. This is simultaneously a burden of proof fallacy, requiring Blake or me to provide evidence disproving his theory, as well as an argument from personal incredulity. An argument from personal incredulity is a type of informal fallacy where a person cannot imagine how a scenario or statement could be true and therefore believes it must be false. Mr. Wagner took my request for proof of his claim as absurd because he believed there was no rational explanation for the blackout other than a cyber attack.

I would link to the tweets directly but after my last question requesting proof Mr. Wagner blocked Blake and me.

Conclusion

Daniel Wagner is not the only person to write using informal fallacies. We all do it. The trick is to identify them and try to avoid them. I did not feel my request for proof turned into a fruitful exchange with the author, but that does not make Mr. Wagner a bad person. Everyone has bad days. It is also entirely his right not to continue our discussion. The most important thing here, though, is to understand that a lot of baseless claims make it into mainstream media and misinform the discussion on topics such as SCADA and critical infrastructure security. Unsurprisingly, they often come from individuals without any experience in the field they are writing about. It is important to try to identify these claims and learn from them. One effective method is to look for fallacies and inconsistencies. Of course, always be careful not to become so focused on identifying fallacies that you dismiss a claim too hastily. That would be a great example of an argument from fallacy, also known as the fallacy fallacy, where you analyze an argument and conclude that because it contains a fallacy it must be false. Mr. Wagner’s claims are not false because of how they were presented. The claims were not worth considering because of the lack of evidence; the fallacies just helped draw attention to that.

Detecting the S7 Worm and Similar Capabilities

May 8, 2016

I first posted the following on the SANS ICS blog here.

An article came out on May 5th titled “Daisy-chained research spells malware worm hell for power plants and other utilities” with the subtitle “World’s first PLC worm spreads like cancer”. Having been on the receiving end of sensationalized headlines before, I empathize with the authors of the research. Regardless of the headlines, the research, performed by Ralf Spenneberg, Maik Brüggemann, and Hendrik Schwartke, is quality work highlighting different methods of propagation and infection inside of control networks. However, before any alarmism sets in, let’s look at this research, what it might mean, and trivial methods to detect it.

The Worm

The researchers put out a great presentation and paper showing exactly what they did to create and test the worm. In short, the worm impacts the Siemens SIMATIC S7-1200v3 (although with some modifications the method would not be limited to Siemens or this version of PLC) and utilizes the S7 protocol to propagate without requiring an industrial PC. The authors put forth that the infection could be introduced through an already manipulated PLC or, if introduced through a PC, could then run wild without further use of that system. After a PLC is infected it starts scanning the network on TCP port 102, used by the S7 protocol, to identify other SIMATIC PLCs. It scans the network by initiating and closing connections to incrementing IP addresses until it finds its targets and infects them. The researchers put the diagram below in their paper to show the process.

PLC Infection Routine
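To make that propagation behavior concrete, below is a minimal Python sketch of the observable part of the routine: sequential TCP connection attempts to port 102 across a subnet. This is not the researchers’ Structured Text code (which runs on the PLC itself); it only illustrates the traffic pattern a defender would see, and the network range is a made-up example.

```python
# Illustrative sketch only: emulates the worm's observable scan pattern
# (TCP connection attempts to port 102 at incrementing IP addresses).
# The real worm is written in Structured Text and runs on the PLC.
import ipaddress
import socket

S7_PORT = 102  # ISO-TSAP, the port the S7 protocol rides on

def scan_for_s7(network: str, timeout: float = 0.5) -> list:
    """Try a TCP connection to port 102 on every host in the range."""
    listening = []
    for ip in ipaddress.ip_network(network).hosts():
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            sock.connect((str(ip), S7_PORT))
            listening.append(str(ip))  # likely an S7 endpoint such as a PLC
        except OSError:
            pass  # host absent or port closed; move to the next address
        finally:
            sock.close()
    return listening

# Every connect attempt above is visible on the wire, which is exactly
# what makes this capability easy to detect (see below).
print(scan_for_s7("192.168.0.0/28"))  # example range only
```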

This research is valuable for a lot of purposes but I want to highlight two reasons I think it’s interesting. First, the worm was written in Structured Text and utilizes the PLC and S7 protocol against itself. The researchers use native commands and functionality which I believe will continually be a theme in ICS intrusions and attacks; there is a lot of value in reducing the reliance on custom code or malware when native functionality serves the purpose. The second reason is that all the communications it puts out should be easily identifiable by anyone watching their ICS network.

Detecting the Worm

The most difficult part of detecting threats in the ICS is getting access to the data. The era of dumb switches and a lackadaisical approach to logging needs to be stamped out, but its remnants pose a significant challenge. However, once defenders get access to the network data there are a lot of opportunities afforded to them. In the case of the S7 worm, it would be trivial to detect for anyone familiar with their ICS network.

In a modification of the Purdue Model for ICS architecture, shown below, we can see that good architecture would require the control network, where the S7 devices would exist, to be segmented off from the Internet and outside connections. Connections to the supervisory level should come through operations support and DMZ networks. All of these levels are potentially very static, but the control devices level should be extremely static. That is, your PLC shouldn’t be tweeting anything or updating its Facebook status; it is easier to spot changes here than in an enterprise business network.

Modified Purdue Model for ICS architecture
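As a toy illustration of that segmentation principle, the sketch below encodes which levels are allowed to talk directly. The zone names and the policy itself are my own simplified assumptions, not part of the Purdue Model specification.

```python
# Toy model of the segmentation described above: only adjacent levels
# may communicate directly. Zone names and policy are simplified assumptions.
ALLOWED_FLOWS = {
    ("enterprise", "dmz"),
    ("dmz", "operations_support"),
    ("operations_support", "supervisory"),
    ("supervisory", "control_devices"),
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """A flow is permitted only between adjacent zones, in either direction."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS or \
           (dst_zone, src_zone) in ALLOWED_FLOWS

# A control device reaching directly toward the enterprise network
# (or beyond it to the Internet) should never be permitted:
assert not flow_permitted("control_devices", "enterprise")
```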

The researchers’ worm requires the S7 protocol, and it also requires the scanning of new IP addresses on TCP port 102. Even if the worm is modified to use very quick or very slow scan cycles, the scans should be easily detected. Many SIMATIC PLC environments use PROFINET for device-to-device communications and rely mostly on S7 for configuration and interaction with the Totally Integrated Automation (TIA) Portal. There should be a relatively predictable pattern to the PROFINET and S7 communications. This usually gives ICS networks a “heartbeat” that can be observed with the open source tool Wireshark. See below for a PROFINET network where the tool’s “IO Graph” feature (found under “Statistics” in the toolbar) has been selected. Changes to this observable heartbeat, such as large spikes or dips in data, should be investigated and would reveal increased scanning or command use in the environment.

Wireshark IO Data
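The same heartbeat check can be scripted. Below is a minimal sketch using the Python library scapy: it bins a capture into one-second buckets and flags any second whose packet rate deviates sharply from the norm. The capture file name and the 3-sigma threshold are assumptions to adapt to your own network.

```python
# Minimal "IO Graph" in code: bucket packets per second and flag outliers.
# Assumes a capture spanning multiple seconds; the threshold is tunable.
from statistics import mean, stdev
from scapy.all import rdpcap

packets = rdpcap("profinet_baseline.pcap")  # illustrative file name

bins = {}
for pkt in packets:
    second = int(pkt.time)
    bins[second] = bins.get(second, 0) + 1

rates = list(bins.values())
mu, sigma = mean(rates), stdev(rates)

# Large spikes (scanning) or dips (device loss) break the heartbeat.
for second, count in sorted(bins.items()):
    if abs(count - mu) > 3 * sigma:
        print(f"t={second}: {count} pkts/s (baseline {mu:.0f} ± {sigma:.0f})")
```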

Likewise, again just using Wireshark, you could use the “Endpoints” feature (also found under “Statistics” in the toolbar) to identify which IP addresses and TCP or UDP ports are in use. This method easily identified the TCP port requests that the ICS scanner in HAVEX performed, and here it would make it trivial to identify connection attempts to numerous IP addresses that do not exist on the PLC network segment. Below is a picture of the same PROFINET capture from above. Your network will look different, but ICS networks, especially at the control device level, are much smaller and more static than traditional IT networks. Take advantage of that.

Wireshark Endpoints
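Here is the same endpoint idea in scapy form: enumerate every (destination IP, destination port) pair seen, then flag TCP SYNs to port 102 aimed at addresses that are not in your PLC inventory. The inventory and capture file name are placeholders for your own data.

```python
# Endpoint-style check: list destinations seen, then flag SYNs to TCP/102
# toward addresses that are not known PLCs (i.e., the worm's scan traffic).
from scapy.all import rdpcap, IP, TCP

KNOWN_PLCS = {"192.168.0.10", "192.168.0.11"}  # illustrative inventory

endpoints = set()
suspects = []
for pkt in rdpcap("control_network.pcap"):  # illustrative file name
    if IP in pkt and TCP in pkt:
        endpoints.add((pkt[IP].dst, pkt[TCP].dport))
        if pkt[TCP].flags == "S" and pkt[TCP].dport == 102 \
                and pkt[IP].dst not in KNOWN_PLCS:
            suspects.append((pkt[IP].src, pkt[IP].dst))

print(f"{len(endpoints)} distinct (dst IP, dst port) endpoints observed")
for src, dst in suspects:
    print(f"possible S7 scan: {src} -> {dst}:102")
```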

Do a Health Check

At every ICS on the planet defenders should, as a bare minimum, do health checks of the network. In networks with unmanaged infrastructure, put in a tap during a maintenance period and gather packet captures. Even if you are only able to do this once a year, you will at least be able to identify issues ranging from misconfigured devices to worms like the one in this research. Use free tools such as Wireshark simply to be aware of what your ICS looks like. Learn what normal looks like and hunt for weird. You’ll be amazed at what you find.
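A first health check does not need to be fancy. The sketch below, again using scapy, summarizes the top conversations and destination ports in a maintenance-window capture; the file name is illustrative, and what counts as “normal” is something only you can establish for your ICS.

```python
# One-off health check: summarize who talks to whom and over which ports
# so you can learn what normal looks like, then hunt for weird.
from collections import Counter
from scapy.all import rdpcap, IP, TCP, UDP

talkers, dports = Counter(), Counter()
for pkt in rdpcap("maintenance_window.pcap"):  # illustrative file name
    if IP in pkt:
        talkers[(pkt[IP].src, pkt[IP].dst)] += 1
        for proto in (TCP, UDP):
            if proto in pkt:
                dports[pkt[proto].dport] += 1

print("Top conversations:", talkers.most_common(5))
print("Top destination ports:", dports.most_common(5))  # e.g. 102 = S7
```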

Sustainable Success

You should make the case for managed infrastructure in your ICS and do port mirroring to gather packet captures continuously. These networks do not carry a large amount of data, and you can gather a lot of packet capture for a small investment in storage space. From there you can learn what the ICS should look like normally and write whitelist-style intrusion detection system rules. There are many awesome professional tools that can help the security of your ICS, but for this example you do not need any “next generation” tools; simply using tools such as Snort, Bro, and FlowBAT inside of the Linux distribution Security Onion can return huge value for continuously monitoring your ICS.
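As a sketch of what whitelist-style rules can look like, the following Python generates Snort pass rules from the flows observed in a known-good capture, plus a catch-all alert for anything else reaching the control segment. The network range, file name, and sid values are assumptions; note also that Snort must be configured to evaluate pass rules before alert rules for this pattern to work.

```python
# Generate whitelist-style Snort rules from a known-good baseline capture.
# Anything outside the baseline that touches the PLC segment raises an alert.
from scapy.all import rdpcap, IP, TCP

PLC_NET = "192.168.0.0/24"  # assumed control-device segment

baseline = set()
for pkt in rdpcap("known_good.pcap"):  # illustrative baseline capture
    if IP in pkt and TCP in pkt:
        baseline.add((pkt[IP].src, pkt[IP].dst, pkt[TCP].dport))

sid = 1000001
for src, dst, dport in sorted(baseline):
    print(f'pass tcp {src} any -> {dst} {dport} (msg:"baseline flow"; sid:{sid};)')
    sid += 1
print(f'alert tcp any any -> {PLC_NET} any '
      f'(msg:"non-baseline flow to control segment"; sid:{sid};)')
```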

Ideally, you will have someone (maybe you!) who can perform the tactics I teach in my ICS515 course, such as network security monitoring, to constantly hunt for threats in your ICS. But if you do not, at least you will have the data to routinely go back and see if anything is wrong in the ICS, and if something does go wrong you will have the data to determine the root cause and apply the appropriate mitigations. And if you ever call in an incident response team, just having this data available will significantly reduce the cost and time associated with the response effort.

One other thing the S7 worm highlights is the communications going on inside the S7 protocol. Many defenders simply do not know what is going on inside their ICS protocols. In fact, many ICS protocols are poorly implemented or understood, and it is an area where much more research is needed. The community must do better at understanding the ICS protocols. Today, detecting changes is trivial through simple methods like Wireshark Endpoints. But the threats are evolving, and defenders need to know what commands are being sent across their ICS networks. Today, do health checks. Whenever possible, implement continuous monitoring and network security monitoring methodologies. Moving forward, look to deeply understand the ICS protocols and communications.
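To give a flavor of what looking inside S7 means, the sketch below pulls the function code out of S7comm job requests on TCP/102 using the commonly documented layout (a TPKT header, then a COTP data header, then the S7 header beginning with protocol id 0x32). The offsets and the small function-code table follow the Wireshark s7comm dissector as I understand it; treat them as assumptions to verify against your own traffic.

```python
# Peek inside S7comm: extract the function code from job requests on TCP/102.
# Offsets assume TPKT (4 bytes) + COTP data header (3 bytes) + S7 header.
from scapy.all import rdpcap, TCP, Raw

# Illustrative subset of documented S7comm job function codes.
FUNCTION_NAMES = {
    0x04: "Read Var",
    0x05: "Write Var",
    0x1A: "Request Download",
    0x1B: "Download Block",
    0xF0: "Setup Communication",
}

for pkt in rdpcap("control_network.pcap"):  # illustrative file name
    if TCP in pkt and Raw in pkt and 102 in (pkt[TCP].sport, pkt[TCP].dport):
        data = bytes(pkt[Raw].load)
        # S7comm starts at offset 7 with protocol id 0x32; for a Job PDU
        # (ROSCTR 1) the 10-byte header is followed by the parameter block,
        # whose first byte is the function code.
        if len(data) > 17 and data[7] == 0x32 and data[8] == 1:
            func = data[17]
            print(f"S7 job request: {FUNCTION_NAMES.get(func, hex(func))}")
```

Seeing a “Write Var” or “Download Block” job at an unexpected time, or from an unexpected source, is exactly the kind of command-level change defenders should want to notice.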

Closing Thoughts

The S7 worm is good research that highlights what can be done in the control network. But it is not a worm from hell, nor is it some unstoppable capability. The S7 worm is trivial to detect, but few are looking. The worm does, however, help raise awareness, and for that the researchers deserve a lot of credit. For mitigations, ICS defenders should have copies of the logic running on their control devices digitally hashed and stored off the network. Remediation with good backups is a much less difficult process than remediation without backups.
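A simple way to start on that mitigation is sketched below: hash every file in a directory of exported PLC logic so the digests, stored off-network with the backups themselves, can later confirm what should be running. The directory name is a placeholder.

```python
# Hash exported PLC logic/project files for off-network safekeeping.
import hashlib
from pathlib import Path

def hash_logic(directory: str) -> dict:
    """Return {file path: SHA-256 hex digest} for every file in the tree."""
    return {
        str(path): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in Path(directory).rglob("*")
        if path.is_file()
    }

# Store this listing off the network alongside the backups themselves.
for name, digest in sorted(hash_logic("plc_logic_exports").items()):
    print(digest, name)
```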

Regardless of whether this specific capability is realistic for your environment, the takeaways are the same: detecting these types of capabilities requires only a minimal amount of effort once data collection is done. There are a lot of mitigations and defenses you should be putting into place, but at the very least monitor your environment. If you are not collecting data from your control level, you should be; when you move to monitoring the data you collect, you will be contributing significantly to the safety and reliability of the ICS.