
Analytical Leaps and Wild Speculation in Recent Reports of Industrial Cyber Attacks

December 31, 2016

“Judgement is what analysts use to fill gaps in their knowledge. It entails going beyond the available information and is the principal means of coping with uncertainty. It always involves an analytical leap, from the known into the uncertain.”

– Chapter 4, Psychology of Intelligence Analysis, Richards J. Heuer.


Analytical leaps, as Richards J. Heuer notes in his must-read book Psychology of Intelligence Analysis, are part of the process for analysts. Sometimes, though, these analytical leaps can be dangerous, especially when they are biased, misinformed, presented in a misleading way, or otherwise not made using sound analytical processes. Analytical leaps should be backed by evidence, or at a minimum should present the evidence leading up to the leap. Unfortunately, when multiple analytical leaps are made in series, they can lead to entirely wrong conclusions and wild speculation. Three interesting stories relating to industrial attacks surfaced this December as we try to close out 2016, and they are worth exploring on this topic. It is my hope that looking at these three cases will help everyone be a bit more critical of information before alarmism sets in.
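One way to see why serial leaps are dangerous is to treat each leap as a probability. Even if every individual judgement is likely to be correct on its own, a conclusion resting on a chain of them quickly becomes unlikely. A rough sketch in Python (the 80% figure is an arbitrary illustration, not anything from Heuer):

```python
# Probability that a conclusion built on a chain of independent
# analytical leaps is correct, if each leap is individually likely.
def chain_confidence(p_per_leap, n_leaps):
    return p_per_leap ** n_leaps

# Four leaps, each 80% likely to be right on its own:
print(round(chain_confidence(0.8, 4), 2))  # 0.41 -- worse than a coin flip
```

The independence assumption is itself a simplification, but the point stands: each unsupported leap compounds the chance of being wrong.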

The three cases that will be explored are:

  • IBM Managed Services’ claim of “Attacks Targeting Industrial Control Systems (ICS) Up 110%”
  • CyberX’s claim that “New Killdisk Malware Brings Ransomware Into Industrial Domain”
  • The Washington Post’s claim that “Russian Operation Hacked a Vermont Utility, Showing Risk to U.S. Electrical Grid Security, officials say”


“Attacks Targeting Industrial Control Systems (ICS) Up 110%”

I’m always skeptical of metrics that come with no quantification. As an example, IBM Managed Security Services posted an article stating that “attacks targeting industrial control systems increased over 110 percent in 2016 over last year’s numbers as of Nov. 30.” But there is no data in the article to quantify what that means. Is a 110% increase a jump from 10 attacks to 21? Or from 100 attacks to 210?

The only way to understand what that percentage means is to leave this report, download IBM’s report from last year, and read through it (never make your reader jump through extra hoops to get the information that is your headline). In their 2015 report IBM states that there were around 1,300 attacks in 2015 (Figure 1). This would mean that in 2016 IBM is reporting they saw around 2,700 ICS attacks.


Figure 1: Figure from IBM’s 2015 Report on ICS Attacks
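The arithmetic behind that derivation is trivial, but it is worth making explicit since the article never does. A quick sketch using the figures above:

```python
# A percentage increase is meaningless without the baseline it is measured from.
def after_increase(baseline, pct_increase):
    return baseline * (1 + pct_increase / 100)

print(round(after_increase(10, 110)))    # 21: trivial in absolute terms
print(round(after_increase(1300, 110)))  # 2730: roughly the ~2,700 implied by IBM's 2015 figure
```

Same headline percentage, wildly different realities, which is exactly why the baseline belongs in the article.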


However, there are a few questions that linger. First, this is a considerable jump from what they were tracking previously and from their 2014 metrics. IBM states that the “spike in ICS traffic was related to SCADA brute-force attacks, which use automation to guess default or weak passwords.” This is an analytical leap that they make based on what they’ve observed. But it would be nice to know if anything else has changed as well. Did they bring up more sensors, gain more customers, increase staffing, etc.? The stated reason alone would not account for an increase of this size.

Second, how is IBM defining an attack? Attacks in industrial contexts have a very specific meaning; an attempt to brute-force a password simply wouldn’t qualify. They also note that a pentesting tool that could be used against the ICS protocol Modbus was released on GitHub in January 2016. IBM states that the increase in metrics was likely related to this tool’s release. That is speculation, though, as they do not give any evidence to support the claim. However, it leads to my next point.

Third, is this customer data or is this honeypot data? If it’s customer data is it from the ICS or simply the business networks of industrial companies? And if it’s honeypot data it would be good to separate that data out as it’s often been abused to misreport “SCADA attack” metrics. From looking at the discussion of brute-force logins and a pentesting tool for a serial protocol released on GitHub, my speculation is that this is referring mostly to honeypot data. Honeypots can be useful but must be used in specific ways when discussing industrial environments and should not be lumped into “attack” data from customer networks.

The article also makes another analytical leap when it states “The U.S. was also the largest target of ICS-based attacks in 2016, primarily because, once again, it has a larger ICS presence than any other country at this time.” The leap does not seem informed by anything other than the hypothesis that the US has more ICS. Also, again there is no quantification. As an example, where is this claim coming from, how much larger is the ICS presence than other countries, and are the quantity of attacks proportional to the US ICS footprint when compared to other nations’ quantity of industrial systems? I would again speculate that what they are observing has far more to do with where they are collecting data (how many sensors do they have in the US compared to China as an example).
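If the underlying data were available, the proportionality question could be answered with simple normalization. A sketch with made-up numbers (both the attack counts and the footprint figures below are hypothetical, purely to show the calculation):

```python
# Raw attack counts mislead without normalizing by footprint (or by sensor
# coverage). All numbers here are hypothetical, for illustration only.
observed_attacks = {"US": 2000, "CountryX": 400}
ics_footprint = {"US": 100000, "CountryX": 10000}  # number of ICS devices

rate = {c: observed_attacks[c] / ics_footprint[c] for c in observed_attacks}
print(rate)  # the smaller country is hit harder per system despite fewer attacks
```

Without this kind of normalization, and without knowing where the sensors are, “the US was the largest target” tells us very little.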

In closing out the article IBM cites three “notable recent ICS attacks.” The three case studies chosen were the SFG malware that targeted an energy company, the New York dam, and the Ukrainian power outage. While the Ukrainian power outage is good to highlight (although they don’t actually highlight the ICS portion of the attack), the other two cases are poor choices. As an example, the claim of SFG malware targeting an energy company is something that was already debunked publicly and would have been easy to find prior to creating this article. The New York dam incident was also largely hyped by media and was publicly downplayed as well. More worrisome is that the way IBM framed the New York dam “attack” is incorrect. They state: “attackers compromised the dam’s command and control system in 2013 using a cellular modem.” Except it wasn’t the dam’s command and control system; it was a single read-only human machine interface (HMI) watching the water level of the dam. The dam had a manual control system (i.e., you had to crank it open).

Or more simply put: the IBM team is likely doing great work and likely has people who understand ICS…you just wouldn’t get that impression from reading this article. The information is largely inaccurate, there is no quantification to their numbers, and their analytical leaps are unsupported with some obvious lingering questions as to the source of the data.


“New Killdisk Malware Brings Ransomware Into Industrial Domain”

CyberX released a blog noting that they have “uncovered new evidence that the KillDisk disk-wiping malware previously used in the cyberattacks against the Ukrainian power grid has now evolved into ransomware.” This is a cool find by the CyberX team, but they don’t release hashes or any technical details that could be used to help validate it. However, the find isn’t actually new. (I’m a bit confused as to why CyberX states they uncovered this new evidence when they cite in their blog an ESET article with the same discovery from weeks earlier; I imagine they found an additional strain, but they don’t clarify that.) ESET had disclosed the new variant of KillDisk being used by a group they are calling the TeleBots gang and noted they found it being used against financial networks in Ukraine. So, where’s the industrial link? Well, there is none.

CyberX’s blog never details how they are making the analytical leap from “KillDisk now has a ransomware functionality” to “and it’s targeting industrial sites.” Instead, it appears the entire basis for their hypothesis is that Sandworm previously used KillDisk in the Ukraine ICS attack in 2015. While this is true, the Sandworm team has never just targeted one industry. iSight and others have long reported that the Sandworm team has targeted telecoms, financial networks, NATO sites, military personnel, and other non-industrial related targets. But it’s also not known for sure that this is still the Sandworm team. The CyberX blog does not state how they are linking Sandworm’s attacks on Ukraine to the TeleBots usage of ransomware. Instead they just cite ESET’s assessment that the teams are linked. But ESET even stated they aren’t sure and it’s just an assessment based off of observed similarities.

Or more simply put: CyberX put out a blog saying they uncovered new evidence that KillDisk had evolved into ransomware, although they cite ESET’s discovery of this evidence from weeks prior with no other evidence presented. They then make the claim that the TeleBots gang, the one using the ransomware, evolved from Sandworm, but they offer no evidence and instead again just cite ESET’s assessment. They offer absolutely no evidence that this ransomware KillDisk variant has targeted any industrial sites. The logic seems to be “Sandworm did Ukraine, KillDisk was in Ukraine, Sandworm is the TeleBots gang, TeleBots modified KillDisk to be ransomware, therefore they are going to target industrial sites.” When doing analysis, always be aware of Occam’s razor and do not make too many assumptions to try to force a hypothesis to be true. There could be evidence of ransomware targeting industrial sites; it does make sense that attackers would go there eventually. But no evidence is offered in this article, and both the title and thesis of the blog are completely unfounded as presented.


“Russian Operation Hacked a Vermont Utility, Showing Risk to U.S. Electrical Grid Security, officials say”

This story is more interesting than the others, but it is too early to really know much. The only thing known at this point is that the media is already overreacting. The Washington Post put out an article on a Vermont utility getting hacked by a Russian operation, with the Vermont Governor condemning Vladimir Putin for attempting to hack the grid. Eric Geller pointed out that the first headline the Post ran with was “Russian hackers penetrated U.S. electricity grid through utility in Vermont, officials say” but they later changed it to “Russian operation hacked a Vermont utility, showing risk to U.S. electrical grid, officials say.” We don’t know exactly why it was changed, but it may have been due to the Post overreacting when they heard the Vermont utility found malware on a laptop and simply assuming it was related to the electric grid. Except, as the Vermont (Burlington) utility pointed out, the laptop was not connected to the organization’s grid systems.

Electric and other industrial facilities have plenty of business and corporate network systems that are often not connected to the ICS network at all. It’s not good for them to get infected, and they aren’t always disconnected, but it’s not worth alarming anyone over without additional evidence.  However, the bigger analytical leap being made is that this is related to Russian operations.

The utility notes that they took the DHS/FBI GRIZZLY STEPPE report indicators and found a piece of malware on the laptop. We do not know yet if this is a false positive, but even if it is not, there is no evidence yet to say that this has anything to do with Russia. As I pointed out in a previous blog, the GRIZZLY STEPPE report is riddled with errors, and the indicators put out were very non-descriptive data points. The one YARA rule they put out, which the utility may have used, was related to a piece of malware that is publicly downloadable, meaning anyone could use it. Unfortunately, after the story ran with its hyped-up headlines, Senator Patrick Leahy released a statement condemning the “attempt to penetrate the electric grid” as a state-sponsored hack by Russia. As Dmitri Alperovitch, CTO of CrowdStrike, the firm that responded to the Russian hack of the DNC, pointed out: “No one should be making attribution conclusions purely from the indicators in the USCERT report. It was all a jumbled mess.”
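For defenders who do run published indicators, a file hash sweep is straightforward, and it is worth being clear about what a hit actually proves. A minimal sketch in Python (the indicator value below is a placeholder, not a real GRIZZLY STEPPE indicator):

```python
import hashlib
from pathlib import Path

# Sweep a directory tree against a set of published SHA-256 hash indicators.
# The value below is a placeholder for illustration, not a real indicator.
INDICATORS = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

def sweep(directory):
    hits = []
    for f in Path(directory).rglob("*"):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            if digest in INDICATORS:
                hits.append(f)  # a match proves the file is present, nothing more
    return hits
```

A match only shows a particular file exists on a machine; when the indicator corresponds to publicly downloadable malware, it says nothing about who put it there, which is exactly why attribution from such a hit is unwarranted.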

Or more simply put: a Vermont utility acted appropriately and ran indicators of compromise from the GRIZZLY STEPPE report as the DHS/FBI instructed the community to do. This led to them finding a match to an indicator on a laptop separated from the grid systems, though it has not yet been confirmed that malware was present. The Vermont Governor Peter Shumlin then publicly chastised Vladimir Putin and Russia for trying to hack the electric grid. U.S. officials then inappropriately gave additional information and commentary to the Washington Post about an ongoing investigation, which led them to run with the headline that this was a Russian operation. After all, the indicators supposedly were related to Russia because the DHS and FBI said so, and supposedly that’s good enough. Unfortunately, this also led a U.S. Senator to come out and condemn Russia for state-sponsored hacking of the utility.

Closing Thoughts

There are absolutely threats to industrial environments including ICS/SCADA networks. It does make sense that ICS breaches and attacks would be on the rise, especially as these systems become more interconnected. It also makes perfect sense that ransomware will be used in industrial environments just like any other environment that has computer systems. And yes, the attribution of the DNC compromise to Russia is very solid, based on private sector data with government validation. But to make claims about attacks and attempt to quantify them, you actually have to present real data along with where that data came from and how it was collected. To make claims of new ransomware targeting industrial networks, you have to actually provide evidence, not simply make a series of analytical leaps. And to start making claims of attribution to a state such as Russia just because some poorly constructed indicators alerted on a single laptop is dangerous.

Or more simply put: be careful of analytical leaps, especially when they are made without presenting any evidence leading into them. Hypotheses and analysis require evidence; otherwise it is simply speculation. We have enough speculation already in the industrial industry, and more will only lead to increasingly dangerous or embarrassing scenarios, such as a US governor and senator condemning Russia for hacking the electric grid, scaring the public in the process, when we simply do not have many facts about the situation yet.

New Suspected Cyber Attack on Ukraine Power Grid – Advice as Information Emerges

December 19, 2016

Reporting in Ukraine has emerged indicating another suspected cyber attack on the electric grid (the first being the confirmed one in 2015). Initial reporting is often inaccurate or a small view of incidents but it’s worth cautiously watching and seeing what information emerges. Here’s what we know so far:

Reports of Suspected Cyber Attack:
Around noon on December 19th, 2016, reports began to surface related to a possible cyber attack on the Ukraine electric grid. The attack is suspected to have taken place near midnight local Ukraine time on the 17th. The Pivnichna transmission-level substation has been called out as possibly being the site attacked. This is of course concerning for numerous reasons, including the cyber attack on the Ukraine grid in December 2015 as well as ongoing traditional military actions in Ukraine. The reporting is from various Ukrainian sources, including a press release from the impacted company Kyivenergo confirming that there was an unintentional outage and that they took actions to restore operations.

The first 24 and often 48 hours of reporting are notoriously bad for OSINT analysts but should still be utilized. Simply leverage caution and do not present information as fact yet. At this point I would assess with low confidence that a cyber attack has occurred. This is not to say there is doubt around the event, only that there are other theories that have equal weighting until more evidence is available. However, based on the sourcing of the information (internal Ukrainian sources) and the Ukrainian grid operators’ experience dealing with a similar situation last year, I have a higher trust level in the sources (thus the low confidence assessment that the attack is real). We will learn more later, and it may be revealed that the outage was not related to a cyber attack; however, I am aware of an ongoing investigation by Ukrainian authorities, and they are treating a cyber attack as the leading theory for the outage. I will caution again, though, that no one with direct knowledge of the incident has confirmed that it is a cyber attack; only that it is the leading theory and that the disconnect was unintentional.

What Should Be Done:
Right now the best action for those not on the ground or working at infrastructure companies is to wait and see if more information is revealed. Journalists should be cautious not to infer or jump to conclusions, and those in the security community should stay tuned for more information. I would recommend journalists contact sources in the area but realize that the information is very preliminary and those not on the ground in Ukraine will have very little to add to knowledge of the situation.

If you are in the infrastructure (ICS/SCADA) security community it would be wise to use established channels to send decision makers a situational awareness report on the news; I would note it’s a low confidence assessment currently due to lack of first-hand evidence but that it is a situation worth watching. This should be paired with security staff taking an active defense posture of monitoring the ICS network looking for abnormal activity. Preliminary information from the investigation underway by the Ukrainian authorities indicates that a remote attack is suspected. I would stay far away from linking this to the Sandworm team currently (attribution right now is not possible) but I would review the methods by which they achieved the remote attack on Ukraine last year and use that information to hunt for threats. As an example, look in logs for abnormal VPN session lengths, increased frequency of use, and unusual connection request times.
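As a sketch of what that hunting could look like, here is a minimal Python example that flags sessions with abnormally long duration or off-hours logins. The record layout, thresholds, and working hours are assumptions for illustration, not fields from any specific VPN product:

```python
from datetime import datetime
from statistics import mean, stdev

# Each record: (user, login datetime, session length in minutes).
# The z-score threshold and "workday" hours are illustrative assumptions.
def flag_sessions(records, z_threshold=3.0, workday=(6, 20)):
    lengths = [r[2] for r in records]
    mu, sigma = mean(lengths), stdev(lengths)
    flagged = []
    for user, login, minutes in records:
        long_session = sigma > 0 and (minutes - mu) / sigma > z_threshold
        off_hours = not (workday[0] <= login.hour < workday[1])
        if long_session or off_hours:
            flagged.append((user, login, minutes))
    return flagged
```

Anything flagged is a lead for a human analyst to run down, not proof of compromise; tune the thresholds to your environment’s baseline.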

If you happen to be a customer of Dragos, Inc. you will have received a notification already with some recommendations for strategic, operational, and tactical level players. Check your portal and be on the lookout for a briefing request coming from us if you would like to attend remotely. For the wider community, ensure that you are wary of phishing attempts taking advantage of this possible attack.

In Closing:
My chief recommendation is for everyone to avoid alarmism and utilize this as an opportunity to review logs and information from the ICS and search for TTPs we’ve seen before, such as remote usage of the ICS through legitimate accounts, VPNs, and remote desktop capabilities. If this attack turns out to be real, it is unlikely to involve anything novel that couldn’t have been detected. It’s important to remember that defense is doable – now go do it.

A Collection of Resources for Getting Started in ICS/SCADA Cybersecurity

August 28, 2016

I commonly get asked by folks what approach they should take to get started in industrial control system (ICS) cybersecurity. Sometimes these individuals have backgrounds in control systems, sometimes they have backgrounds in security, and sometimes they are completely new to both. I have written this blog post to document my thoughts on some good resources to pass along to anyone interested. Do not attempt to do everything at once; rather, treat this as a collection to refer back to in an effort to polish up skills or learn a new industry. There are also many skills that may not immediately be relevant to your job, but I believe these topics all work together (ranging from analysis of threats to understanding the physical process of a gas turbine). Rest assured, no matter how ill-prepared you might feel in getting started, realize that by having the passion to ask the question and start down the path you are already steps ahead of most. We need passionate people in the industry; everything else can be taught.

Optional Pre-Reqs

It’s always good to pick up a few skills regarding the fundamentals of computers, networks, and systems in general. I would recommend trying to pick up a scripting language as well; even if you don’t find yourself scripting a lot understanding how scripting works will add a lot of value to your skill set.

  • Learn Python the Hard Way
    • Learn Python the Hard Way is a great free online resource to teach you, step-by-step, the Python scripting language. There are a lot of different opinions about scripting languages. In truth, most of them have value in different situations, so I’ll leave it to you to pick your own language (and I won’t tell you that you’re wrong for not learning Python, even though you are). Another good programming resource is Code Academy.
  • MIT Introduction to Computer Programming
    • MIT’s open courseware is a treasure for the community. It shocks me how many people do not take advantage of free college classes from top universities. This is the Introduction to Computer Science and Programming course. It should be taken at a slow pace but it’ll give you a lot of fundamental skills.
  • MIT Introduction to Electrical Engineering and Computer Science
    • Another MIT open course but this time focused on electrical engineering. This is a skill that will help you understand numerous types of control systems better as well as have a better grasp on how computers work.
  • Microsoft Virtual Academy
    • Microsoft Virtual Academy can be found at various locations on YouTube. I have linked to the first one; I would recommend browsing through the topic list for everything from fundamentals of networking, to fundamentals of computers, to how the Internet works.
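To give a flavor of the kind of small, useful scripts these resources build toward, here is a beginner-sized Python example (the log lines are made up) that counts which source IP appears most often in a log:

```python
# A first-script-sized example: count how often each source IP appears in a log.
from collections import Counter

log_lines = [
    "10.0.0.5 GET /index.html",
    "10.0.0.9 GET /login",
    "10.0.0.5 POST /login",
]

# The IP is the first whitespace-separated field on each line.
counts = Counter(line.split()[0] for line in log_lines)
print(counts.most_common(1))  # [('10.0.0.5', 2)]
```

Even trivial scripts like this are the bread and butter of security work: parsing logs, counting things, and spotting what stands out.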

Intro to Control Systems

Control systems run the world around us. Escalators, elevators, types of medical equipment, steering in our cars, and building automation systems are types of control systems you interact with daily. Industrial control systems (ICS) are industrial versions of control systems found in locations such as oil drilling, gas pipelines, power grids, water utilities, petrochemical facilities, and more. This section will go over some useful resources and videos to learn more about industrial control systems.

  • The PLC Professor
    • PLC Professor and his website contain a lot of great resources for learning what programmable logic controllers (PLCs) and other types of control systems are, what their logic looks like, and how they work. Some resources are free while others are paid. At some point, getting a physical kit as a trainer to learn on is going to be a requirement.
  • Control System Basics
    • This is a great video explaining control system basics including the type of logic these systems use to sense and create physical changes to take action upon.
  • What is SCADA?
    • You’ve no doubt heard the term SCADA; if you haven’t, you will. It stands for Supervisory Control and Data Acquisition and is a type of ICS. This video is a nice, basic approach to explaining SCADA.
  • Department of Energy – Energy 101
    • The Department of Energy has a series of Energy 101 videos to explain basic concepts of different types of energy generation, sources, etc. It’s a fantastic series that should excite you about the field while explaining key terms and concepts.
  • Wastewater Treatment Explanation Video
    • We all need wastewater treatment facilities and learning about them helps you understand how control systems work and just how complex simple tasks in life can be (if we didn’t have control systems). These types of videos are important for you to watch and learn so that you get exposed to different industries. ICS is not really a community, it’s a collection of communities.
  • Waste Water – Flush to Finish
    • Another good wastewater explanation video.
  • Refinery Crude Oil Process
    • This is a video explaining a refinery crude oil process. If these types of videos don’t excite you to some extent you may be in the wrong career field. The world around us is magnificent and learning different industries will start to help you ask the right questions which will lead to your education on the subject.
  • Natural Gas Processing
    • This is an older video (the industry has definitely become more advanced than represented here) but it is extremely interesting on how natural gas is harvested, processed, and transferred. Think about all the control systems that have to go into this seemingly simple process.
  • How a Compressor Station Works
    • One particularly interesting (and historically difficult to secure) portion of the ICS community is the natural gas pipeline. This video talks about natural gas to some extent but really focuses on compressor stations. Compressor stations, as remote sites, offer numerous opportunities and challenges to defenders. In short – they’re pretty cool.
  • Chemical Engineering YouTube Channel
    • A great series of videos explaining and showing different components of chemical processing.
  • Steel from Start to Finish
    • This is an example of how steel is made. The video, like the others in this section shows an important process that can help you understand all that goes into control system security. It’s important to know the real world impacts and applications of the processes we are trying to defend to fully understand how important safety and reliability are as the main component of industrial automation.
  • How It’s Made: Uranium Part 1 and Part 2
    • Uranium mining is especially important for the nuclear power industry. There’s a lot of misconceptions around uranium and its mining; many aspects of this type of mining are similar to other types of mining but the purification, transportation, manufacturing, and utilization of uranium (highlighted in part 2 of the videos above) are particularly interesting and unique. There’s an amazing amount of industrial control systems involved in these processes.
  • Uranium Mining
    • There are multiple ways to perform uranium mining; here is an alternative approach in a video by the Nuclear Energy Institute.
  • Nuclear Reactor Explained
    • This is a simplistic but extremely easy to digest explanation and animation of a nuclear reactor. Nuclear energy has a bad rap due to pop culture but is a highly clean and safe form of energy. It’s really useful to understand this process and how these systems are designed and, ideally, isolated.
  • Nuclear Power Station
    • Building from the last video, here’s another video diving deeper into nuclear power. What you should focus on here is the design and engineering that go into the safety systems. Safety systems can be bypassed, and there are no ‘unhackable’ things, but this helps you to understand just how these systems are designed to be safe by default even if not built with security in mind. The Fukushima event can be observed as a worst-case and extremely unlikely scenario. Learning from it will be important; here you’ll find a good video on it.
  • Thermal Power Plant
    • There are many ways to generate power; this video explains thermal power and the complexity of the environment.
  • SCADA Utility 101
    • Rusty Williams has just the right type of southern speaking style that makes an audience want to learn more. The guy is awesome, the video explains SCADA from an electric utility perspective, and this is a must watch.
  • Electric Generation and Transmission
    • Didn’t get enough of Rusty? Here’s another video of him explaining the generation and transmission of electricity.
  • Copper Mining
    • There are many differences in mining depending on what you are mining, but much of the fundamentals of exploration, extraction, and processing are similar across numerous industries. This video on copper mining (skip to about 1:30 to get past the specific mine’s financials and marketing) gives a nice, quick, high-level view of some of the processes and equipment you’d find in the mining industry.
  • Gold Mining
    • While the initial mining fundamentals can be the same, as noted there are many differences, including how you perform prospecting and how you process the extracted minerals. Gold mining has a number of interesting aspects worth learning about.
  • Cyanidation for Extraction Processes (Animated Video and a Real Life Example)
    • Cyanide is mostly known for its form as hydrogen cyanide but in other forms (such as sodium, potassium, or calcium cyanide) it is useful in extracting precious minerals from ore and often used in gold processing. The videos above are quick animated and real life examples of the cyanidation process. The Wikipedia article here is also very useful.
  • Fundamentals of Manufacturing Processes
    • Manufacturing makes the world around us. The manufacturing industry is broad, from auto, to food and beverage, to chemical, to pharmaceutical, and more. This is an MIT course that’s hosted online for free. It’s a 10-week course, but it is fantastic and goes through a wide variety of types of manufacturing.
  • Chemical Industry Process Equipment
    • This video is unlike the others in that it does not really show the full engineering process. However, the video talks through a wide variety of equipment that you would find in the chemical industry. I find this video useful to learn about a variety of equipment, much of which you could find in numerous industries. I would recommend taking terms you’re unfamiliar with and looking up Wikipedia articles for each after the video.
  • Beverage Manufacturing (Coca-Cola)
    • Here’s a great example of a manufacturing video focused on beverages, in this case Coca-Cola. The food and beverage industry and its manufacturing processes are wonderful forms of batch processing. This video is obviously a bit of a promotion as well but there’s great explanations throughout the video including how to make bottles (800 bottles a minute!), how to make cans, how to clean cans with sulfuric acid, and of course how to fill them with coke (1,700 cans per minute!).
  • Control Lectures
    • This is a fantastic series by Brian Douglas which covers a wide range of lectures on control systems in a very easy to process way.
  • Safety Systems
    • It’s good to get familiar with safety systems as well. Safety systems can either be active or passive. As an oversimplification, think of these as systems that take control of the process when an unsafe event occurs and help to regulate it or shut it down safely. Safety can also be the product of good engineering instead of a dedicated system. Either way, there is a trend in the community to integrate safety systems into one device, where the control device is also the safety device. This has cost savings but horrendous cyber security consequences and thus horrible safety consequences.
  • Safety Valves
    • Building on your understanding now of safety systems here’s an example of a safety valve in a process and how it can work to keep the operations, and more importantly the people around it, safe.
  • Industrial Disaster Explanation Videos
    • The U.S. Chemical Safety and Hazard Investigation Board has a number of videos explaining industrial disasters. This is an important resource to understand what can go wrong in industrial automation regardless of the cause (these are not cyber related but are important to understand as things that cyber could potentially cause if we are not careful). In IT, if things go wrong people do not generally die – in ICS, death, injury, and environmental harm are very real concerns.
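To tie several of the concepts above together (sensing, control logic, and a safety interlock), here is a toy Python sketch of a PLC-style scan loop for a tank level controller. All setpoints and the “physics” are arbitrary illustration values, not real PLC logic:

```python
# Toy PLC-style scan loop for a tank: sense level, decide, actuate the pump.
# The setpoints and fill/drain rates are arbitrary illustration values.
HIGH_ALARM = 90.0   # safety interlock: force the pump off above this level
SETPOINT = 70.0

def scan(level, pump_on):
    if level >= HIGH_ALARM:      # safety logic overrides control logic
        return False
    if level < SETPOINT - 5:     # simple hysteresis band around the setpoint
        return True
    if level > SETPOINT + 5:
        return False
    return pump_on               # inside the band: hold the last state

# One simulated run: the pump fills the tank until the controller stops it.
level, pump = 50.0, False
for _ in range(40):
    pump = scan(level, pump)
    level += 2.0 if pump else -0.5   # crude physics: fill vs. drain
```

Note the separation between the regulating logic and the high-level interlock; collapsing both into one device, as discussed above, means one compromise defeats both.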

Intro to Computer and Network Security

There are a lot of resources in the form of papers below (especially the SANS Reading Room), which are all great. However, you really need to get hands-on, so many of the resources are focused on tools and data sets. Try to read up as much as possible and then dive deeply into hands-on learning.

  • The Sliding Scale of Cyber Security
    • I wrote this paper specifically to address the nebulous nature of “cyber security.” When people say they specialize in cyber security, what exactly does that mean? I put forth that there are 5 categories of investment that can be made. Prioritization for value toward security should go to the left-hand side of the scale. It is ok to invest in multiple categories at once, but understand the true return on investment you’re getting versus the cost.
  • VMWare
    • You’ll want to be able to set up virtual machines (VMs) to get hands-on with files and various security tools. VMWare is a great choice, as is VirtualBox. VMWare has a free version you’ll want to use (Player). Don’t worry about getting Workstation or Player Pro until later, when you are more experienced and want to save snapshots (copies of your VM to revert back to). Below you’ll find a sample video on VMs; feel free to Google around for better understanding.
  • Security Onion
    • You’re going to want to get hands on with the files presented in this guide; Security Onion is an amazing collection of free tools to do just that with a focus on network security monitoring and traffic analysis.
    • If you’re super cool you’ll want to get into forensics at some point; the SIFT VM from SANS is a collection of tools you’ll need to get started.
  • REMnux
    • Before you try out reverse engineering malware (REM) you’ll want to have a safe working environment to do so. This is not a beginner topic, but at some point you’ll likely want to examine malware, and Lenny’s REMnux VM is the safe place to do that.
  • Malware Traffic Analysis
    • Brad’s blog on malware traffic analysis is one of the best resources in the community. It combines sample files with his walkthroughs of what they are and how to deal with them. You can learn a lot this way very quickly.
  • Open Security Training
    • This website is dedicated to open (free) security training. There are a number of qualified professionals who have dedicated time to teach everything from the basics of security to advanced reverse engineering concepts. You could spend quite a lot of time on this website’s courses and all of them would make you more capable in this field. There are often full virtual machines (VMs), slides, and videos for the courses.
  • Sample PCAPs from NETRESEC
    • These packet capture samples are invaluable to learning how our systems interact on the network. Take a tool like Wireshark and analyze these files to get familiar with them and the practice (Wireshark will continually be your friend in any field you specialize in).
  • DEFCON Capture the Flag Files
    • DEFCON has made available their files (and oftentimes walkthroughs) for their capture the flag contests. These range from beginner to advanced concepts in offensive security practices such as red teaming. Learning how to break into systems and how they fail is great for defense. It’s not required, but it can be helpful.
  • Iron Geek
    • This is an invaluable collection of videos from conferences around the community. If you’re looking for a specific topic, it’s a good idea to search these conference videos. Feel like you missed out on the last decade of security? Don’t worry, most of it is captured here.
  • SANS Reading Room
    • The SANS Institute is the largest and most trusted source of cyber security training. Their Reading Room is a free collection of papers written by students and instructors covering almost every topic in security.
  • Krebs on Security
    • Krebs puts together a great blog doing quality investigative research on breaches, incidents, and cyber security topics that are newsworthy. While doing your self-education keep an eye out for breaking and exciting stories.
  • Honeynet Project
    • Consider this a capstone exercise. Read up on honeypots and learn to deploy one such as Conpot. To run a honeypot correctly you’ll have to learn about safeguarding your own infrastructure, setting up proxies and secure tunnels, managing cloud-based infrastructure such as an EC2 server, performing traffic analysis on activity in the honeypot, doing malware analysis on discovered capabilities, and eventually performing incident response and digital forensics on the data provided to explore the impact to the system. Working up to this point and then running a successful honeypot for any decent length of time really helps develop and test a wide range of skills in the Architecture, Passive Defense, Active Defense, and (potentially in the form of threat intel) Intelligence categories of the Sliding Scale of Cyber Security.
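
Several of the resources above (the NETRESEC samples, the DEFCON CTF files, honeypot traffic) revolve around the classic pcap file format that Wireshark reads. As a toy illustration of what that format looks like under the hood, here is a sketch that builds a one-packet capture in memory and parses its headers using only Python’s standard library; the payload bytes and timestamp are made up.

```python
import struct

# Build a minimal classic-pcap capture in memory: the 24-byte global
# header followed by one packet record with a made-up 4-byte payload.
global_header = struct.pack(
    "<IHHiIII",
    0xA1B2C3D4,  # magic number (identifies byte order)
    2, 4,        # pcap format version 2.4
    0,           # timezone offset (GMT)
    0,           # timestamp accuracy (unused, set to 0)
    65535,       # snaplen: max bytes captured per packet
    1,           # link type 1 = Ethernet
)
# Per-record header: ts_sec, ts_usec, captured length, original length.
packet_header = struct.pack("<IIII", 1466000000, 0, 4, 4)
capture = global_header + packet_header + b"\xde\xad\xbe\xef"

# Parse it back the way a capture tool begins reading a .pcap file.
magic, vmaj, vmin, _, _, snaplen, linktype = struct.unpack("<IHHiIII", capture[:24])
ts_sec, ts_usec, incl_len, orig_len = struct.unpack("<IIII", capture[24:40])
payload = capture[40:40 + incl_len]
print(f"pcap v{vmaj}.{vmin}, linktype={linktype}, first packet {incl_len} bytes")
```

Real captures simply repeat the record header plus payload for every packet; looping that parse over a NETRESEC sample file is a good first exercise before leaning on Wireshark’s dissectors.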

Intro to Control System Cyber Security

Cybersecurity is not a new topic, but in ICS it is mostly unexplored. The hardest part for most folks is learning who to listen to and what resources to read. There are a lot of “experts” out there who will quickly lead you astray; look at people’s resumes to see if they have had the opportunity to do what they are speaking to you about. Just because they don’t have experience doesn’t mean they are necessarily wrong, but it’s an easy check. As an example, if someone calls themselves a “SCADA Security Guru” or a “thought leader” but they’ve only ever been a Chief Marketing Officer at an IT company, that should be a red flag. It is important to be very critical of information in this space but to continually push forward to try to make the community better. Below are some trusted resources to help you on your journey.

  • An Abbreviated History of Automation and ICS Cybersecurity
    • This is a great SANS paper looking at the background on ICS cybersecurity. Well worth the read to make sure you understand many of the events that have occurred over the past twenty years and how they’ve inspired security in ICS today.
  • SANS ICS Library
    • This is the SANS ICS library which contains a number of posters and papers to get you started. Reference the blog as well for good explorations of topics. I write the Defense Use Case series as well which explores real and hyped up ICS attacks and lessons learned from them.
  • SCADAHacker Library
    • Joel has a fantastic collection of papers on ICS security, standards, protocols, systems, etc. Lots of valuable content in this collection.
  • The ICS Cyber Kill Chain
    • The attacks we are concerned most with on ICS take a different approach than traditional IT. This is a paper I wrote with Michael Assante exploring this and detailing the steps an adversary needs to take to accomplish their goals.
  • Analyzing Stuxnet (Windows Portion)
    • This is Bruce Dang’s talk at the 27th CCC in Germany on his experience analyzing Stuxnet. He was at Microsoft and was one of the first researchers to analyze it. This gives a good understanding of the Windows portion of the analysis. I show this video, even though it’s a bit more advanced, to highlight that there are often an IT and an operations technology (OT) side of analysis.
  • Analyzing Stuxnet (ICS Portion)
    • Ralph Langner was responsible for deep diving into Stuxnet’s ICS payload portion. This talk gives a good understanding of the OT side of the analysis.
  • To Kill a Centrifuge – Stuxnet Analysis
    • This is Ralph Langner’s excellent paper exploring the technical details of the Stuxnet malware and, most importantly, the ICS-specific payload and impact. It is a good idea to read through the paper and Google the terms you do not understand.
  • SANS ICS Defense Use Case #5 – Ukraine Power Grid Attack
    • This is a paper I wrote with Michael Assante and Tim Conway, released through the E-ISAC, on our analysis of the Ukraine power grid attack in 2015. There are also recommendations for defense at each level of the ICS kill chain (applying one control is never enough to stop attacks).
  • CRASHOVERRIDE – Analysis of the Threat to Electric Grid Operations
    • Following the attack on Ukraine’s grid in 2015, the adversary made an effort to scale their operations through the added automation of malicious software. The malware leveraged in the Ukraine 2016 cyber attack (the second-ever cyber attack to cause loss of load in an electric system) was called CRASHOVERRIDE.
  • Anatomy of an Attack: Detecting and Defeating CRASHOVERRIDE
    • Much of the information around CRASHOVERRIDE wasn’t made immediately available due to the sensitivity of what happened and the desire to not have the tradecraft proliferate. Once more information became known, though, Joe Slowik, an intelligence analyst at Dragos, published the findings behind the adversary ELECTRUM, which was responsible for CRASHOVERRIDE.
  • TRISIS Malware: Analysis of Safety System Targeted Malware
    • In 2017 there was an attack on a Saudi Arabian petrochemical company. Dragos and FireEye completed analyses of the malware (FireEye called it TRITON and Dragos called it TRISIS; they did not coordinate with each other and did not know the other firm was working on the malware analysis until a week or so before publication). Despite initial reporting in the media by parties not involved in the analysis, Saudi Aramco was not the victim of the attack. Saudi Aramco actually provided the incident response team that went and helped out the facility.
  • The Industrial Cyber Threat Landscape
    • This is the testimony I gave in 2018 to the Committee on Energy and Natural Resources of the United States Senate. It contains recommendations for the community and a discussion of the cyber threat landscape.
  • ICS Threat Intelligence: Moving from the Unknowns to a Defended Landscape
    • This is a talk I did at the SANS ICS Summit that gets into why our threat landscape is largely unknown, what we can do about it, and how we can really move the community forward by incorporating intelligence instead of theoretical best practices.
  • Perfect ICS Storm
    • Glenn wrote a great paper looking at the interconnectivity of ICS and the networks around them with considerations on how it impacts monitoring and viewing the control systems.
  • Network Security Monitoring in ICS 101
    • Here is a great intro talk on network security monitoring in an ICS by Chris Sistrunk at DEFCON 23. Network security monitoring is exceptionally useful in ICS because it can be done passively and with minimal data sets, which works within the confines of the safety and reliability requirements of an ICS network.
  • Dragos Webinars and Blogs
    • The Dragos webinars and blogs are highly informative on performing threat analysis, defense, and response as it pertains to ICS cyber threats. They are very rarely marketing or promotional and far more content-driven.
  • S4 Videos
    • The S4 conference run by Dale Peterson is a great community resource. He has posted a number of the conference presentations which will give you a great look at the ICS security community especially from the researcher perspective.
  • Defense Will Win
    • Dale Peterson’s excellent S4 talk with an upbeat attitude of “defense will win.” This is something I completely agree with, and for a few years now I have been championing the phrase “Defense is Doable” to help folks not get down when it comes to ICS cyber security. It may seem like the hardest challenge out there, but it’s worthwhile, and these are the most defensible environments on the planet; maybe not the most defended – but we will get there.
  • Dragos Year in Review 2017 and 2018
    • Each year Dragos puts out a year in review that covers threats, vulnerabilities, and lessons learned across incident response and assessments. They were made to provide ground-truth metrics and stats about what is going on around the community. They are light on marketing language and focused on sharing insights useful to the community.

Recommended ICS Cybersecurity Books

  • Rise of the Machines: A Cybernetic History
    • It seems a bit odd to put a non-technical book as my first recommendation, but I assure you it is with reason. Dr. Thomas Rid wrote this book to attempt to fully understand the history, implications, and usages of the word “cyber”. Delightfully, control systems have a major role throughout the book. It was control systems that got us started with “cybernetics”, which is eventually where we would get the “cyber” word that fills our daily lives.
  • Handbook of SCADA/Control Systems Security
    • Robert (Bob) Radvanovsky and Jacob (Jake) Brodsky put together this wonderful collection of articles from people throughout the community. It covers a wide variety of topics from a wide variety of personalities and professionals.
  • Protecting Industrial Control Systems from Electronic Threats
    • Joe Weiss is a polarizing individual in the community but only because of how passionately he cares about the industry and how long he’s been in the community. Many of us here today in the community owe much to Joe and this book offered an early look at control system cybersecurity.
  • Industrial Network Security
    • Eric Knapp and Joel Langill wrote this book looking specifically at the network security side of ICS. It’s a fantastic resource exploring different technologies and protocols by two professionals I’m glad to call peers and friends.
  • Hacking Exposed: Industrial Control Systems
    • This book takes a penetration testing focus on ICS and talks about how to test and assess these systems from the cybersecurity angle while doing it safely and within bounds of acceptable use inside of an ICS. It’s written by Clint Bodungen, Bryan Singer, Aaron Shbeeb, Kyle Wilhoit, and Stephen Hilt who all are trusted professionals in the industry.

Recommended Professional Training

You in no way need certifications or professional training to become great in this field. However, sometimes both can help, whether for job opportunities, getting a raise, or polishing up some skills you’ve developed. I highly encourage you to learn as much as you can before getting into a professional class (the more you know going in, the more you’ll take away) and I encourage you to try to find an employer to pay your way (they aren’t cheap). If your employer doesn’t have a training policy it’s a good time to try and find a new employer. Here are some professional classes I like for ICS cyber security training (I’m biased because I teach at SANS, but I teach there because I believe in what they provide).

  • Department of Homeland Security and Department of Energy Training
    • The ICS-CERT and Idaho National Labs provide a variety of online and in-person training. One of the best known is the ICS 301 class, a 5-day introduction to ICS hosted in Idaho Falls, Idaho. It is a free course and highly recommended.
  • SANS ICS 410 – ICS/SCADA Essentials
    • This class is designed to be a bridge course; if you are an ICS person who wants to learn security, or a security person who wants to learn ICS, this course offers the bridge between those two career fields and offers you an introduction into ICS cyber security.
  • SANS ICS 515 – ICS/SCADA Active Defense and Incident Response
    • This is the class I authored at SANS teaching folks about targeted threats (such as nation-state adversaries or well funded crime groups) that impact ICS and how to hunt them in your environment and respond to incidents.
  • Assessing and Exploiting Control Systems
    • Justin Searle is the author of SANS ICS410, and he also created Assessing and Exploiting Control Systems. This course is an introduction to vulnerability and penetration testing of these systems with a focus on everything from PLCs to RF. A lot of the focus tends to be on smart grid and electric, but there are elements for everyone. The same class is also hosted at SANS from time to time, but it is significantly cheaper at BlackHat if you can grab a spot. The class moves around, so the link above is for an old offering; Google the name and where it’s being hosted to find it.
  • CYBATI
    • Matt Luallen runs the CYBATI class. It’s a hands-on class that’s been tried and tested and is popular around the community. He sometimes teaches it at SANS events and also teaches at other events. Matt was one of the first people I met in the ICS security community and has been like a brother to me over the years; he’s a fantastic resource for the community and more importantly he’s just a really good person. Learning from him (and getting to use his CYBATIworks kit, a really useful training kit for sale) is something everyone should get to do at some point in their career.
  • Dragos 5 Day Training
    • Dragos hosts a five-day training that covers an introduction to ICS, assessing ICS, threat hunting, and security monitoring. Uniquely, it provides access to industrial ranges and is hosted in Houston, Texas and Hanover, Maryland. The industrial ranges and physical equipment make for an exciting educational experience. However, the class is only open to those in the asset owner and operator community (e.g. working at an energy, manufacturing, auto, etc. company).

Recommended Conferences

No matter how much time you spend reading or practicing, eventually you need to become part of the community. Contributions in the form of research, writing, and tools are always appreciated. Contributions in the form of conference presentations are especially helpful as they introduce you to other interested folks. The ICS cybersecurity community is an important one on many levels. It’s one of the best communities out there, with hard working and passionate people who care about making the world a safer place. Below are what I consider the big five. These conferences cover general ICS cyber security (not a specific industry such as API for oil and gas or GridSecCon for the electric sector), although the industry-specific ones are valuable as well.

  • SANS ICS Security Summit
    • For over a decade the SANS ICS Security Summit has been a leading conference for bringing together researchers, industry professionals, and government audiences. The page above links to the various SANS ICS events, but look for the one that says “ICS Security Summit” each year. It is usually held in March at Disney World in Orlando, Florida. Its strong suit is the educational and training aspects, not only because of the classes but also because of the strong industry focus.
  • DigitalBond’s S4
    • The S4 conference is a powerhouse of leading ICS security research. Dale puts on a fantastic conference every year (now with a European and Japanese venue as well each year) that brings together some of the most cutting edge research and ideas. S4 in the US is often held in January in Florida.
  • The ICS Cyber Security Conference (WeissCon)
    • Affectionately known as WeissCon after its founder Joe Weiss, the conference is now owned and operated by SecurityWeek and usually runs in October at different locations each year in the US (Georgia is usually a central location for the conference though). The conference brings together a portion of the community not often found at the other locations and has a strong buy-in from the government community as well as the vendor community.
  • The ICS Joint Working Group (ICSJWG)
    • The ICSJWG is a free conference held twice a year by the Department of Homeland Security. I often encourage people to go to the ICSJWG conference first as a type of intro into the community, to then go to the SANS ICS Security Summit for more view into the asset owner community and to get training, then go to S4 for the latest research, to go to WeissCon to see some of the portions of the community and vendor audience not represented elsewhere, and finally to CS3Sthlm to get an international view. It is perfectly ok to go to all five of the big conferences a year (I do) but if you need a general path that is the one I would follow initially.
  • CS3Sthlm
    • CS3Sthlm used to be known as 4SICS and is held every year in Stockholm, Sweden. It is one of the leading ICS security conferences in the world (I consider it one of the “big five”) and it is in my opinion the best ICS security conference in Europe. The founders Erik and Robert are some of the friendliest people in the ICS community and have a wealth of experience to share with folks from decades defending infrastructure.
  • Dragos Industrial Security Conference (DISC)
    • DISC is the Dragos annual conference; however, it is unique in that it is entirely dedicated to research and insights into ICS cyber threats and responding to them. The conference is 100% free and open to those in the industrial asset owner and operator community. It happens every year on November 5th in Maryland, USA.

This is just a small collection of a lot of the fantastic resources out there. Always fight to be part of the community and interact – that is where the real value in learning is. Never wait to have someone show you, though; even the “experts” are usually only expert in a few things. It is up to you to teach yourself and involve yourself. We as a community are waiting with open arms.


Common Analyst Mistakes and Claims of Energy Company Targeting Malware

July 13, 2016

A new blog post by SentinelOne made an interesting claim recently regarding a “sophisticated malware campaign specifically targeting at least one European energy company.” More extraordinary, though, was the company’s claim that this find might indicate something much more serious: “which could either work to extract data or insert the malware to potentially shut down an energy grid.” That is a major analytical leap (we’ll come back to it), and the next thing to occur was fairly predictable: media firms spinning up about a potential nation-state cyber attack on power grids.

I have often critiqued news organizations for their coverage of ICS/SCADA security when there was a lack of understanding of the infrastructure and its threats, but this sample of hype originated from SentinelOne’s bold claims and not the media organizations (although I would have liked to see the journalists validate their stories more). News headlines ranged from “Researchers Found a Hacking Tool that Targets Energy Grids on the Dark Web” to EWeek’s “Furtim’s Parent, Stuxnet-like Malware, Aimed at Energy Firms.” It’s always interesting to see how long it takes for an organization to compare malware to Stuxnet. This one seems to have won the race in terms of “time-to-Stuxnet,” but the worst headline was probably The Register’s: “SCADA malware caught infecting European energy company: Nation-state fingered”. No, this is not SCADA malware, and no, nation-states have not been fingered (phrasing?).

The malware is actually not new and had been detected before the company’s blog post. The specific sample SentinelOne linked to, and claimed to have found, was first submitted to VirusTotal by an organization in Canada on April 21st, 2016. Later, a similar sample was identified and posted on the KernelMode forum on April 25th, 2016 (credit to John Franolich for bringing it to my attention). On May 23rd, 2016 a KernelMode forum user posted some great analysis of the malware on their blog. The KernelMode users and blogger identified that one of the malware author’s command and control servers was misconfigured and revealed a distinct naming convention in the directories that very clearly seemed to correlate to infected targets. In total, over 15,000 infected hosts around the world had communicated with this command and control server. This puts a completely different perspective on the malware that SentinelOne claimed was specifically targeting an energy company; it is obviously not ICS/SCADA or energy company specific. It’s possible energy companies are a target, but so far no proof of that has been provided.

I do not have access to SentinelOne’s dataset, so I cannot and will not critique all of their claims. However, I do find a lot of the details they have presented odd, and I also do not understand their claim that they “validated this malware campaign against SentinelOne [their product] and confirmed the steps outlined below [the malware analysis they showed in their blog] were detected by our Dynamic Behavior Tracking (DBT) engine.” I’m all for vendors showcasing where their products add value, but I’m not sure how their product fits into something that was submitted to VirusTotal and a user forum months before their blog post. Either way, let’s focus on the learning opportunities here to help educate folks on potential mistakes to avoid.

Common Analyst Mistake: Malware Uniqueness

A common analyst mistake is to look at a dataset and believe that malware which is unique in that dataset is actually unique. In this scenario, it is entirely possible that, with no ill intention whatsoever, SentinelOne identified a sample of the malware independently of the VirusTotal and user forum submissions. Looking at this sample and not having seen it before, the analysts at the company may have assumed that the malware was unique and thus warranted their statement that this campaign was specifically targeting an energy company. The problem is that as analysts we always work off of incomplete datasets. All intelligence analysis operates from the assumption that there is some data missing or some unknowns that may change a hypothesis later on. This is one reason you will often find intelligence professionals give assessments (usually high, medium, or low confidence) rather than making definitive statements. It is important to try to realize the limits of our datasets and information by looking to open source datasets (such as searching Google to find the previous KernelMode forum post in this scenario) or establishing trust relationships with peers and organizations to share threat information. In this scenario the malware was not unique, and determining that there were at least 15,000 victims in this campaign would cast doubt on the idea that a specific energy company was the target. Simply put, more data and information were needed.

Common Analyst Mistake: Assuming Adversary Intent

As analysts we often get familiar with adversary campaigns and capabilities at an almost intimate level, knowing details ranging from behavioral TTPs to the way adversaries run their operations. But one thing we as analysts must be careful of is assuming an adversary’s intent. Code, indicators, TTPs, capabilities, etc. can reveal a lot. They can reveal what an adversary may be capable of doing, and they should reveal the potential impact to a targeted organization. It is far more difficult, though, to determine what an adversary wishes to do. If an adversary crashes a server, an analyst may believe the malicious actor wanted to deny service to it, whereas the actor may have just messed up. In this scenario the SentinelOne post stopped short of claiming to know what the actors were trying to do (I’ll get to the power grid claims in a following section), but the claim that the adversary specifically targeted the European energy company is not supported anywhere in their analysis. They do a great job of showing malware analysis but do not offer any details around the target nor how the malware was delivered. Sometimes malware infects networks that are not even the adversary’s target. Assuming the intent of the adversary to be inside specific networks or to take specific actions is a risky move, and it is even worse with little to no evidence.

Common Analyst Mistake: Assuming “Advanced” Means “Nation-State”

It is natural to look at something we have not seen before in terms of tradecraft and tools and assume it is “advanced.” It’s a perspective issue based on what the analyst has seen before. It can lead analysts to assume that something particularly cool must be so advanced that it’s a nation-state espionage operation. In this scenario, the SentinelOne blog authors make that claim. Confusingly, though, they do not seem to have found the malware on the energy company’s network they referenced; instead, they claimed to have found it on the “dark web”. This means there would not have been accompanying incident response or security operations data to support a full understanding of this intrusion against the target, if we assume the company was a target. There are non-nation-states that run operations against organizations; HackingTeam was a perfect example of a hackers-for-hire organization that ran very well-funded operations. SentinelOne presents some interesting data, and along with other data sets this could reveal a larger campaign or even potentially a nation-state operation – but nothing presented so far supports that conclusion. A single intrusion does not make a campaign, and espionage-type activity with “advanced” capabilities does not guarantee the actors work for a nation-state.

Common Analyst Mistake: Extending Expertise

When analysts become the experts on their team in a given area, it is common for folks to look to them as experts in a number of other areas as well. As analysts, it’s useful not only to continually develop our professional skills but also to challenge ourselves to learn the limits of our expertise. This can be very difficult when others look to us for advice on any given subject. But being the smartest person in the room on a given subject does not mean that we are experts on it or even have a clue what we’re talking about. In this scenario, I have no doubt that the SentinelOne blog authors are very qualified in malware analysis. I do, however, severely question whether they have any experience at all with industrial and energy networks. The claim that the malware could be used to “shut down an energy grid” shows a complete lack of understanding of energy infrastructure as well as a major analytical leap based on a very limited data set that is, quite frankly, inexcusable. I do not mean to be harsh, but this is hype at its finest. At the end of their blog the authors note that anyone in the energy sector who would like to learn more can contact them directly. If anyone decides to take them up on the offer, please do not assume they have any expertise in that area, be critical in your questions, and realize that their blog post reads like a marketing pitch.

Closing Thoughts

My goal in this blog post was not to critique SentinelOne’s analysis too much, although to be honest I am a bit stunned by the opening statement regarding energy grids. Instead, it was to take an opportunity to identify some common analyst mistakes that we all can make. It is always useful to identify reports like these and, without malice, tear apart the analysis presented to identify knowledge gaps, assumptions, biases, and analyst mistakes. Going through this process can help make you a better analyst. In fairness though, the only reason I know a lot about common analyst mistakes is that I’ve made plenty of rookie mistakes at one point or another in my career. We all do. The trick is usually to try not to make a public spectacle out of it.

IRONGATE Malware – Lessons Learned for ICS/SCADA Defenders

June 2, 2016

I first posted this blog on the SANS ICS blog here.


FireEye uncovered a new piece of ICS malware and released details today, and their way of approaching it, both to the public and in pre-briefing the media, has been outstanding. The malware is not in the wild and is not a threat to the industry, but it offers lessons learned, and I believe the FireEye/Mandiant team’s handling of it deserves a good nod. This blog post will explain the background context to the malware, the details of the malware with my thoughts, and why it is important.

On April 25th I noticed that the S4 European agenda listed Rob Caldwell of the Mandiant ICS team presenting on new ICS malware. I posted a blog here with some thoughts on what this meant for the community. Personally, I’m excited about DigitalBond’s inaugural ICS conference in Europe (the long-running S4 in the US and Japan has always been a valued contribution to the larger community). I will offer one small critique, though, that is not meant to be negative in tone. The DigitalBond website lists the malware as being in the “wild”; the ICS malware that Mandiant released details about this morning, titled IRONGATE, is not in the wild, in my opinion. “In the wild” does not have a strict definition, so I don’t think DigitalBond did anything wrong here – they were being extremely good stewards of the community to bring this information forward – but when I consider malware “in the wild” I look for something actively infecting organizations. I was fortunate enough to exchange messages with Dale Peterson, who runs DigitalBond, and his understanding of the importance of the malware while avoiding hype matched my own. Shortly stated, he is spot on with his approach and I’m looking forward to the presentation on the malware at his conference. It is important to keep in mind, though, that this malware is more of a proof-of-concept or a security research project – but still important. I will cover these details in the next section.

After posting my blog I was able to get some insight into the malware from the Mandiant team under an embargo. I normally wouldn’t talk openly about the “behind the scenes” discussions but I want to call attention to this for a very heartfelt kudos to the Mandiant ICS team. I’m not a journalist and in no way could help the FireEye/Mandiant public relations (PR) effort with regards to the malware they were releasing. Like it or not this kind of PR has value to companies and so capitalizing on it is a big motivation to any company. But they wanted to run their analysis by me for outside critique. This is a lesson many companies would benefit from: no matter how expert your staff is it is always valuable to get an outside opinion to ensure you are defeating biases and groupthink. Luckily for both the Mandiant team and myself I was excited about their thoughts. They called attention to why the malware was important but were careful to note that it is not technically advanced, it is not an active threat, and that there should be caution in overhyping the malware itself.

Details on IRONGATE
IRONGATE is the name for a family of malware that the FireEye Labs Advanced Reverse Engineering (FLARE) team identified in late 2015. The malware targets a simulated Siemens control system environment and replicates the type of man-in-the-middle attack seen in Stuxnet. The man-in-the-middle attack allows it to record normal traffic to a PLC and play it back to an HMI. It has a number of decent capabilities such as anti-Virtual Machine checks and Dynamic Link Library (DLL) swapping with a malicious DLL. In many ways it looks like a cool project, although not a technically advanced one. The most important aspect of the malware itself is that it shows the authors' interest in, and understanding of, impacting ICS. The full report by FireEye provides good coverage of the malware. Moving past the technical paper though there are a few key points to extract about the malware itself. The following is my analysis and should not be considered definitive:

  • The malware is the payload portion and not the delivery mechanism; the delivery mechanism has not been identified and likely does not exist. The malware looks to me as a research project, penetration testing tool, or a capability developed to test out a security product for ICS networks. This means it is standalone and I do not believe we will ever see it or a derivative in the wild although we may see the tactic it displayed in a different capability by different authors in the future
  • The main portions of the malware are written in Python and Mandiant identified the malware by searching VirusTotal. While the discovery of the malware was in 2015, the malware was submitted through the web interface (not automatically through an API) in Israel in 2014. The combination of a manual upload, the malware not being in the wild (e.g. actively infecting sites), and the tool being written in Python against a simulated environment makes me think that the malware is a penetration testing or security product demo tool and not a proof-of-concept for a capable adversary. APT actors do not typically write tools in Python and submit them to VirusTotal
  • The malware’s attention to ICS and its focus on mimicking capabilities present in Stuxnet reinforced what many of us in the community knew: ICS is a viable target and attackers are getting smarter on how to impact ICS with ICS specific knowledge sets. The unique nature of ICS offers defenders many advantages in countering adversaries but it is not enough. You cannot rest on the fact that “ICS is unique” or “ICS can be hard to figure out” as a defense mechanism. It is a great vantage point for defenders but must be taken advantage of or adversaries will overcome it.
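
To make the record-and-replay tactic described above concrete, here is a minimal conceptual sketch in Python. It is illustrative only; the class and value names are my own invention and are not IRONGATE's actual code.

```python
# Conceptual sketch of a record-and-replay man-in-the-middle, the tactic
# IRONGATE mimics from Stuxnet. NOT the actual malware code; all names
# and values here are hypothetical.
from collections import deque

class ReplayMitm:
    """Sits between a PLC data source and an HMI display.

    During the recording phase it passes real process values through
    while storing them; during the replay phase it feeds the HMI the
    stale recorded values, masking whatever the process is really doing.
    """

    def __init__(self, window=5):
        self.recorded = deque(maxlen=window)
        self.replaying = False

    def observe(self, plc_value):
        """Called for each value read from the PLC."""
        if not self.replaying:
            self.recorded.append(plc_value)
            return plc_value          # HMI sees the real data
        # Replay phase: cycle through the recorded "normal" values
        stale = self.recorded[0]
        self.recorded.rotate(-1)
        return stale                  # HMI sees normal-looking stale data

mitm = ReplayMitm(window=3)
for value in [10.0, 10.1, 9.9]:       # normal process values get recorded
    mitm.observe(value)
mitm.replaying = True
shown = [mitm.observe(v) for v in [55.0, 60.0, 80.0]]  # process misbehaving
# The HMI is shown the stale recorded values, not the real readings.
```

The point of the sketch is how little is involved: the operator's view and the process's reality are decoupled with a handful of lines, which is why monitoring that trusts a single data path is fragile.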

With what may appear to be a bit of downplaying of the malware, the question remains: is this important and why? The answer is: yes, it is important.

Why Is This Important?
I’m personally not a huge fan of naming malware, especially malware not in the wild, so I will admit I’m not crazy about naming this tool “IRONGATE” but this is the state of the industry today and I would expect the same out of any security company. The malware is not in the wild and in my opinion looks more to be a research tool than a bearer of things to come. So why is it important? Simply put, we do not know a lot about the ICS specific capabilities and malware in the community. In other words: we do not have much insight into the ICS threat landscape. We often have overhyped and inaccurate numbers of incidents in the community (Dell’s 2015 review stating there were hundreds of thousands of ICS cyber attacks) or abysmally low numbers (the ICS-CERT’s metrics of ~250 incidents a year, which are primarily reported by the intelligence community and not asset owners themselves). Somewhere in between those two numbers is the truth pertaining to how many cyber incidents there are each year in the ICS community. A much smaller portion of those incidents are targeted espionage, theft, or attacks. And yet we only know of three specifically tailored pieces of ICS malware: Stuxnet, HAVEX, and BlackEnergy2. Over-focusing on the malware instead of the human threats who intend harm is a mistake. But malware is still the tool of choice for many adversaries. So learning about these tools and the tradecraft of adversaries is extremely important. Yet we are lacking insight into those data sets. This makes the IRONGATE malware more interesting in the ICS community than it would be in an IT security discussion.
Here are, in my opinion, the three most important takeaways from this piece of malware.

  1. The malware was submitted in 2014 to VirusTotal and no security vendor alerted on it. Against dozens of security products none flagged this as malicious. It was written in Python, had a module titled scada.exe, and was obviously malicious yet no product flagged it. If ICS related malware is sitting in public data sets undiscovered by the vendors who focus entirely on detecting malware then you can be sure that there is malware in ICS environments today that we have not even begun to identify and understand.
  2. The Mandiant ICS team at FireEye released a number of pieces of technical information such as MD5 hashes and sample names for the pieces of this malware family. Although I disagree with calling them indicators of compromise (IOCs) because nothing was compromised, the technical details are an important exercise. With this information, could a standard ICS organization search through its network traffic and host information for matches against this malware? I would state that the significant majority of the industry could not. That’s a serious issue. This is likely researcher malware or a pentesting tool and we as a community could not search community-wide for it. Although I’m a huge advocate of the amazing work being done in the industry today, we are simply behind where we need to be. If we cannot search our environments for possible pentester malware, we have little hope of finding the top capabilities of foreign intelligence teams.
  3. Nothing displayed in this capability or any capability we have seen to date (Stuxnet, HAVEX, and BlackEnergy2 as well as non-targeted capabilities such as Slammer and Conficker) would be undetectable to a human analyst. We need security products in our environments although we cannot rely solely upon them. Those passive defenses that rule out the Conficker infections and ransomware malware are important to rule out noise that human analysts shouldn’t be focusing on. But at the end of the day it must be human defenders against human adversaries that secure the ICS. That active defense approach of monitoring the ICS and responding to real threats is required. Nothing about the capabilities to date would have bypassed human defenders. The harsh truth is many organizations simply are not looking at their ICS networked environment. The harsh truth is that as a community we are not where we need to be today but the inspiring reality is that we can change the industry quickly by countering threats in our environment by empowering and training our people to do so.
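
The question in point 2 can be made concrete with a few lines of Python: a minimal sweep of hosts for files matching published MD5 hashes. This is a sketch only; the hash in the set is a placeholder, not a real IRONGATE indicator.

```python
# Minimal sketch of sweeping a host for files matching published MD5
# hashes. The hash below is a placeholder, not a real indicator.
import hashlib
from pathlib import Path

KNOWN_BAD_MD5 = {"00000000000000000000000000000000"}  # placeholder value

def md5_of(path, chunk=65536):
    """Compute a file's MD5 without loading the whole file into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def sweep(root):
    """Yield every file under root whose MD5 matches a known-bad hash."""
    for p in Path(root).rglob("*"):
        if p.is_file():
            try:
                if md5_of(p) in KNOWN_BAD_MD5:
                    yield p
            except OSError:
                continue  # unreadable file; skip it
```

Even a trivial sweep like this assumes you have access to the hosts and files in the ICS environment, which is exactly the gap being described: the hard part is not the code, it is the access and the practice of looking.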

In closing, I would caution the community to not overhype IRONGATE. The Mandiant ICS team did not overhype it nor should journalists. It could be a resume generating event to go to your management and claim that the malware itself is a reason for security investments at the company. But this is still an important find. Going to decision makers and ICS organizations and simply asking: “How would you find this if it was in your network?” is an important question and exercise. Hype is not needed to show why this is worth your time to discuss and counter. Ultimately, as we consider this malware and capabilities like it we as a community will move to better understand the threat landscape and counter the capabilities we have not discovered yet. Defense is doable – but you actually have to do something, you cannot just buy a product to place on the network and claim victory.



Hype, Logical Fallacies, and SCADA Cyber Attacks

May 25, 2016

For a few years now I’ve spent some energy focusing on calling out hyped up stories of threats to critical infrastructure and dispelling false stories about cyber attacks. There have been a lot of reasons for this including trying to make sure that the investments we make in the community go against real threats and not fake ones. This helps ensure we identify the best solutions for our problems. One of the chief reasons though is that as an educator, both as an Adjunct Lecturer in the graduate program at Utica College and as a Certified Instructor at the SANS Institute, I have found the effects of false stories to be far reaching. Ideally, most hype will never make it into serious policy or security debates (unfortunately some does though). But it does miseducate many individuals entering this field. It hampers their learning when their goals are often just to grow themselves and help others – and I take offense to that. In this blog I want to focus on a new article that came out on the blog at the Huffington Post titled “The Growing Threat of Cyber-Attacks on Critical Infrastructure” by Daniel Wagner, CEO of Country Risk Solutions. I don’t want to focus on deriding the story though and instead use the story to highlight a number of informal logical fallacies. Being critical of information presented as fact without supporting evidence is an essential skill for anyone in this field. Using case studies such as this one is exceptionally important to help educate on what to be careful to avoid.


Mr. Wagner’s article starts off simply enough with the premise that cyber attacks are often unreported or under-reported, leading the public to not fully appreciate the scope of the threat. I believe this to be very true and a keen observation. However, the article then uses a series of case studies, each with factual errors as well as conjecture stated as fact. Before examining the fallacies let’s look at one of the first claims, which pertains to the cyber attack on the Ukrainian power grid in December of 2015:

“It now seems clear, given the degree of sophistication of the intrusion, that the attackers could have rendered the system permanently inoperable.”

It is true that the attackers showed sophistication in their coordination and ability to carry out a well-planned operation. A full report on the attacker methodology can be found here. However, there is no proof that the attackers could have rendered that portion of the power grid permanently inoperable. In the two cases where an intentional cyber attack caused physical damage to the systems involved, the German steel works attack and the Stuxnet attack on Natanz, both systems were recoverable. This type of physical damage is definitely concerning, but the attackers did not display the sophistication needed to achieve that type of attack, and even if they had, there is no evidence to show that the system would be permanently inoperable. It is an improbable scenario and would need serious proof to support the claim.

Informal Logical Fallacies

The next claim though is the most egregious and contains a number of informal logical fallacies that we can use as educational material.

“The Ukraine example was hardly the first cyber-attack on a SCADA system. Perhaps the best known previous example occurred in 2003, though at the time it was publicly attributed to a downed power line, rather than a cyber-attack (the U.S. government had decided that the ‘public’ was not yet prepared to learn about such cyber-attacks). The Northeast (U.S.) blackout that year caused 11 deaths and an estimated $6 billion in economic damages, having disrupted power over a wide area for at least two days. Never before (or since) had a ‘downed power line’ apparently resulted in such a devastating impact.”

This claim led E&E News reporter Blake Sobczak to call out the article on Twitter, which brought it to my attention. I questioned the author (more on that below) but first let’s dissect this claim as there are multiple fallacies here.

First, the author claims that the 2003 blackout was caused by a cyber attack. This is contrary to what is known currently about the outage and is contrary to the official findings of the investigators which may be read here. What Daniel Wagner has done here is a great example of Onus probandi also known as “burden of proof” fallacy. The type of claim that is made is most certainly not common knowledge and is contrary to what is known about the event. So the claimer should provide proof. Yet, the author does not which puts the burden of finding the proof on the reader and more specifically anyone who would disagree with the claim including the authors of the official investigation report.

Second, Daniel Wagner states that the U.S. government knew the truth of the attack and decided that the public was not ready to learn about such attacks. He states this as a fact, again without proof, but there’s another type of fallacy that can apply here called the historian’s fallacy. In essence, Mr. Wagner obviously believes that a cyber attack was responsible for the 2003 blackouts. Therefore, it is absurd to him that the government would not also know, and so the only reasonable conclusion is that they hid it from the public. Even if Mr. Wagner were correct in his assessment, which he is not, he is applying his perspective and understanding today to the decision makers of the past. Or more simply stated, he is using what information he believes he has now and is judging the government’s decision on information they likely did not have at the time.

Third, the next claim is a type of red herring fallacy known as the straw man fallacy, where an argument is misrepresented to make it easier to argue against. Mr. Wagner puts in quotes that a downed power line was responsible for the outage and notes that a downed line has never been the reason for such an impact before or since. The findings of the investigation into the blackouts did not conclude that the outages occurred simply due to a downed power line though. The investigators put forth numerous findings which fell into four broad categories: inadequate system understanding, inadequate situational awareness, inadequate tree trimming, and inadequate diagnostic support amongst interconnected partners. Although trees were technically involved in one element, they were a single variable in a complex scenario compounded by mismanagement of a difficult situation. In addition, the ‘downed power lines’ mentioned were high energy transmission lines, far more important than the argument implies.

Mr. Wagner went on to use some other fallacies such as the informal fallacy of false authority when he cited, incorrectly by the way, Dell’s 2015 Annual Security Report. He cited the report to state that cyber attacks against supervisory control and data acquisition (SCADA) systems doubled to more than 160,000 attacks in 2014. When this statistic came out it was immediately questioned. Although Dell is a good company with many areas of expertise its expertise and insight into SCADA networks was called into question. Just because an authority is expert in one field such as IT security does not mean they are an expert in a different field such as SCADA security. There have only been a handful of known cyber attacks against critical infrastructure. The rest of the cases are often mislabeled as cyber attacks and are in the hundreds or thousands – not hundreds of thousands. Examples of realistic metrics are provided by more authoritative sources such as the ICS-CERT here.

Beyond his article though, there was an interesting exchange on Twitter.

In the exchange we can see that Mr. Wagner makes the argument “what else could it have been? Seriously”. This is simultaneously a burden of proof fallacy requiring Blake or myself to provide evidence disproving his theory as well as an argument from personal incredulity. An argument from personal incredulity is a type of informal fallacy where a person cannot imagine how a scenario or statement could be true and therefore believes it must be false. Mr. Wagner took my request for proof of his claim as absurd because he believed that there was no other rational explanation for the blackouts other than a cyber attack.

I would link to the tweets directly but after my last question requesting proof Mr. Wagner blocked Blake and me.


Daniel Wagner is not the only person to write using informal fallacies. We all do it. The trick is to identify it and try to avoid it. I did not feel my request for proof ended up being a fruitful exchange with the author but that does not make Mr. Wagner a bad person. Everyone has bad days. It’s also entirely his right not to continue our discussion. The most important thing here though is to understand that there are a lot of baseless claims that make it into mainstream media that misinform the discussion on topics such as SCADA and critical infrastructure security. Unsurprisingly they often come from individuals without any experience in the field they are writing about. It is important to try to identify these claims and learn from them. One effective method is to look for fallacies and inconsistencies. Of course, always be careful not to be so focused on identifying fallacies that you dismiss the claim too hastily. That would be a great example of an argument from fallacy, also known as the fallacy fallacy, where you analyze an argument and, because it contains a fallacy, conclude it must be false. Mr. Wagner’s claims are not false because of how they were presented. The claims were not worth considering because of the lack of evidence; the fallacies just helped draw attention to that.

Fourth Sample of ICS Tailored Malware Uncovered and the Potential Impact

April 25, 2016

I first posted this piece on the SANS ICS blog here.


I looked at the S4 Europe agenda which was sent out this morning by Dale Peterson and saw an interesting bullet: “Rob Caldwell of Mandiant will unveil some ICS malware in the wild that is doing some new and smarter things to attack ICS. We are working with Mandiant to provide a bit more info on this in early May without doing the big reveal until S4xEurope.”
For those of you that don’t know, S4 is a conference run by Dale Peterson and this is its European debut (the other versions are in Florida and Japan and are staples of the ICS security conference scene, always having hard hitting and top notch presentations). Because S4 is a trusted conference and Dale is a friend, I give more credibility to anything that comes out of it than I would to your typical conference. Add to that the fact that the Mandiant ICS team has a number of extremely credible voices (Rob Caldwell, Dan Scali, Chris Sistrunk, Mark Heard, etc.) and this is even more interesting and credible.

Let’s break down what we know and why this is potentially very important.

Background on ICS Tailored Malware
To date there have been exactly three ICS tailored malware families that are publicly known. The first was Stuxnet which contained modules to target the Siemens’ systems at the Natanz facility in Iran and cause physical damage to the P-1 centrifuges. Second, there was the Havex malware used in the Dragonfly campaign (aliases include Energetic Bear and Crouching Yeti) that had a module that specifically searched for ICS specific ports (such as 102, 502, and 44818) and later more importantly an OPC module. Lastly, there was the BlackEnergy 2 malware which contained exploits and versions for GE’s CIMPLICITY and Siemens’ SIMATIC environments.
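
The kind of ICS port scanning attributed to Havex's module can be sketched simply. This is an illustrative example, not Havex's code; it only checks whether the ports Havex's scanner looked for will accept a TCP connection.

```python
# Illustrative sketch of a sweep for the ICS ports Havex's scanner
# module searched for. Not Havex code; run only against networks you
# own and are authorized to test.
import socket

ICS_PORTS = [102, 502, 44818]  # S7comm, Modbus/TCP, EtherNet/IP

def probe(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(hosts):
    """Map each host to the list of open ICS ports found on it."""
    return {host: [p for p in ICS_PORTS if probe(host, p)] for host in hosts}
```

Connection attempts like these are also exactly the kind of activity a defender monitoring ICS network traffic can spot: a host suddenly probing ports 102, 502, and 44818 across the network is a loud anomaly.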

Why Haven’t We Seen More?
Most of us understand that ICS environments make for great targets, especially for nation-state and APT styled actors. The ability for military posturing, political leverage, espionage, and even intellectual property theft makes them enticing targets. Yet, the numbers simply do not seem to align with the fear that many folks have about these environments being targeted. The question always comes up: why don’t we see more ICS intrusions? I do not claim to know for sure but my running hypothesis is that it boils down primarily to three areas:

1. We do not have a lot of visibility into ICS networks. Many of the threats that we are aware of we know about due to vendors releasing reports. These vendors traditionally have end point security solutions and anti-virus in the networks that report back information to them. This allows the vendors to “see” tens of thousands of networks and the threats targeting them. In ICS we do not have these same products in scale and many are disconnected from the vendors (which is ok by the way and sometimes preferable). That combined with a lack of understanding of how to monitor these environments safely and interact with them creates a scenario where we don’t see much. Or in short, we aren’t looking.

2. Most malware families tend to be criminal in nature. APT styled malware is not as common in the larger security field. There simply isn’t as big of a motivation for criminals to make ICS specific malware families when ransomware, botnets, etc. work just as effectively in these environments anyway, and ICS environments represent a smaller portion of the population. This is similar to the old Mac vs. Windows vs. Linux malware debate. One of the reasons we see more Windows malware is pure numbers, not because Windows is less secure; there is usually more motive for criminals to write Windows based malware. For the APT styled actors, targeting ICS can be important for military and intelligence purposes but there isn’t as much motive to actually attack or bring down infrastructure outside of conflict scenarios; just to learn and position. I have my suspicions that there are a great number of ICS networks compromised with a large variety of ICS specific malware out there and we just haven’t seen the impacts to begin looking (see point #1).

3. ICS specific knowledge sets are rarer making it more difficult to create well-crafted and tailored ICS modules. The typical “cyber team” for nation-states are pretty good at Windows based environments but down in the lower ICS networks it requires system specific knowledge and engineering skills to truly craft something meaningful. This knowledge set is expanding though meaning we will definitely see more and more of these types of threats in the future.

Why is the Mandiant Discovery Potentially Important?
The claim that Mandiant has found a new ICS tailored piece of malware is important for a few reasons.

First, I have a good amount of respect for the Mandiant ICS team and if they say they’ve found something ICS specific I’ll still require proof when the time comes, but I’m more inclined to believe them. Knowing the team members though I’m confident they’ll release things like indicators of compromise (IoCs) and technical knowledge so that the community can independently verify the find. This is great because many times there are claims made, even by trusted companies, without any proof offered. My general stance is that no matter how trusted the company is, if there isn’t proof (for example the recent Verizon claim about the water hack) then it simply does not count. The community has been abused a lot with false claims and proof is required for such important topics.

Second, given that there have only been three ICS tailored malware families to have a fourth is incredibly interesting for the research both into the defense of ICS but also into the threat landscape. Understanding how the intrusions took place, what the targets were, and extracting lessons learned will be very valuable to both the technical and culture challenges in this space. It remains to be seen exactly what Mandiant means by “ICS specific” although I have messaged some trusted contacts and have been told that the agenda point isn’t a misprint; Mandiant claims to have found tailored ICS malware and not just an ICS themed phishing email or something less significant. Although I never wish harm on anyone from a threat and defense research perspective this is an amazing find.

Third, it bodes well for the ICS security industry as a whole to start making some more positive changes. There have been many ICS security companies around for years (security and incident response teams like LoftyPerch, independent consultants and contractors, red teams like Red Tiger Security, etc.) and even some dabbling by larger companies like Kaspersky and Trend Micro (who both have contributed amazing information on the ICS threat landscape). But the Mandiant ICS team in a way represents a first in the community. Mandiant, and its parent company FireEye, is a huge player in the security community. For years the Mandiant team itself has been widely respected for their incident response expertise. To have them come out and make a specific ICS team to focus on incident response was actually a big risk. It is common to see ICS products and services but many of the startups struggle much more than the media and venture capitalists would let on. Mandiant’s ICS play was a hope that the market would respond. To have the team come out with a fourth specific ICS tailored malware family bodes very well for the risk they took and with the appropriate coverage while keeping down hype this could be very important for the industry and market writ large. Of course the customers always get a big vote in this area but it could mean more folks waking up to the fact that yes ICS represent a target and yes the security community can calmly and maturely approach the problem and add value (again, please no hype, wallpapers, and fancy logos though for exploits and malware).

But Aren’t Squirrels More Damaging to the Grid?
I gave an interview to a journalist for a larger piece on squirrels and cyber threats with regards to the power grid and I believe it warrants a discussion in this piece’s context. The common joke in the community is that squirrels have done more damage to power grids than the US, China, Iran, Russia, UK, etc. combined. And it’s true. It is often stated by us in the industry to remind folks that the “OMG Cyber!” needs to calm down a bit and realize that infrastructure operators on a daily basis deal more with squirrels and Conficker than APT styled malware. However, we should not equate the probability of attacks with the importance of them. As an example, let’s consider the recent DHS and FBI report on the risk to the U.S. electrical infrastructure.

I have a lot of love and respect for many of the FBI and ICS-CERT personnel I’ve worked with. I can only describe most of them as extremely passionate and hard working. But, the claim that the risk of a cyber attack against U.S. electrical infrastructure was low was upsetting to me because of how it comes across. On the heels of the cyber attack that impacted the Ukrainian power grid the report seemed to downplay the risk to the U.S. community. It stood in direct contrast to Cyber Command’s Admiral Rogers who stated that “It is only a matter of the when, not the if we’re going to see a nation-state, group, or actor engage in destructive behavior against critical infrastructure in the United States.” He was specifically talking in context of what happened in Ukraine and the importance of it. As the head of both the NSA and the U.S. military arm for cyber it is appropriate for Admiral Rogers to have a good understanding of the foreign intelligence and foreign military threat landscape. For the DHS and FBI to contradict him, even if unintentionally, seems very misplaced given what their expertise and mission are; and this leads back to the squirrel comment.

Probability is not the most important consideration with regards to destructive attacks and ICS focused intelligence operations. When a community hears of a “low probability” event they naturally prioritize it below other, higher probability events. As an example, prioritizing squirrels over nation-state operations based on probability. The problem with doing that though is that the impact is so much more severe for this “lower probability” scenario that the nation must prioritize it for national security reasons. Telling the infrastructure operators, who really defend the grid, not the government, to stay calm and carry on directly competes with that need, although the message should admittedly always avoid hype and alarmism. Mandiant coming out with the fourth variety of ICS tailored malware helps highlight this at a critical point in the debate both among infrastructure operators and policy makers.

Conclusion and What to Do
We won’t know exactly what the ICS tailored malware is, what it’s doing, or technical knowledge of it until Mandiant releases it. It could be a dud or it could be extremely important (knowing the Mandiant team my bet is on extremely important but let’s all remain patient for the details before claiming it to be so). However, infrastructure owners and operators do not need to wait for the technical details to be released. It is important to be doing industry best practices now including things such as network security monitoring internal to the ICS. The other three samples of ICS tailored malware were all incredibly easy to identify by folks who were looking. Students in my SANS ICS515 ICS Active Defense and Incident Response class (shameless plug) all get hands-on with these threats and are often surprised at how easy they are to identify in logs and network traffic. The trick is simply to get access to the ICS and start looking. Or in other words: you too can succeed. Defense is doable. So do not feel you need to wait for the Mandiant report. It is potentially very important and technical details will help hunt the threats but you can look now and maybe you’ll spot it, something else, or at the very least you’ll get familiar with the networks you should be defending so that it’s easier to spot something in the future whether it’s APT styled malware or just misconfigured devices. Either way – the most important ICS is your ICS and learning it will return huge value to you.

Minimum Viable Products are Dubious in Critical Infrastructure

December 4, 2015

Minimum Viable Products in the critical infrastructure community are increasingly just mislabeled BETA tests; that needs to be communicated correctly.

The concept of a Minimum Viable Product (MVP) is catching on across the startup industry. The idea of the MVP is tied closely to The Lean Startup model created by Eric Ries in 2011 and has very sound principles focused around maximizing the return on investment and feedback from creating new products. Eric defines the MVP as the “version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” This enforces the entrepreneurial spirit and need for innovation combined with getting customer feedback about a new technology without having to develop a perfect plan or product first. An MVP is also meant to be sold to customers so that revenue is generated. In short, be open to testing things out publicly earlier, pivot based off of technical and market feedback, and earn some money to raise the valuation of the company and entice investors.

Personally, I believe the lean startup model as a whole is smart. I use some aspects of the model as CEO of Dragos Security. However, I chose not to use the concept of an MVP. Minimum Viable Products are dubious in critical infrastructure. I state this understanding that the notion of getting the product out the door and gaining feedback to guide its development is a great idea. And when I say critical infrastructure I’m focusing heavily on the industrial control system (ICS) portion of the community (energy, water, manufacturing, etc.). The problem I have, though, is that I have observed a number of startups in the critical infrastructure space taking advantage of their customers, albeit unintentionally, when they push out MVPs. This is a bold claim, I won’t point fingers, and I don’t want to come across as arrogant. But I want to make it very clear: the critical infrastructure community deals with human lives; mistakes, resource drains, and misguided expectations impact the mission.

My observations of the startups abusing the MVP concept:

  • Bold claims are made about the new technologies, seemingly out of a need to differentiate against larger and more established companies
  • Technologies are increasingly deployed earlier in the development cycle because the startups do not want to invest in the industry-specific hardware or software needed to test the technology
  • The customers that should be taking part in the feedback process are pushed aside in favor of easier-to-acquire customers, because successes are needed as badly as cash; there is pressure to validate the company’s vision to entice or satisfy Angel or Seed investors
  • The fact that the technology is an MVP, is lightly (if at all) tested, and will very likely change in features or even purpose is not communicated to customers, in an apparent attempt to get a jump start on the long acquisition cycles in critical infrastructure and bypass discussions of business risk
  • Customers are relied upon more heavily for feedback, or even development, costing them time and resources, often due to the startups’ lack of ICS expertise; a startup may have some specific or general ICS knowledge, but rarely does it have depth in all the markets it is tackling (electric, natural gas, oil, water, etc.) even though it wants to market and sell to those industries

What is the impact of all this? Customers are taking bigger risks, in terms of time, untested technologies, changing technologies, and overhyped features, than they recognize. If the technology does not succeed, if the startup pivots, or if the customers burn out on the process, all that has been accomplished is significant mistrust between the critical infrastructure stakeholders and any desire to “innovate” with startups again. And all of this is occurring on potentially sensitive networks and infrastructure with the potential to impact safety or the environment.

My recommendations to startups: if you are going to deploy technologies into critical infrastructure early in the development cycle, make sure the risks are accurately conveyed and ensure that the customer knows they are part of a learning process for your technology and company. This invites instant push-back: “If we communicate this as a type of test or a learning process they will likely not trust our company or technology and will choose to go with more established products and companies. We are trying to help. We are innovators.” And to my straw man here, I empathize greatly. Change is needed in this space and innovation is required. We must do better, especially with regard to security. But even setting aside the statistics on failed technologies and startups, which would stress why many should never actually touch an ICS environment, I can comfortably state that the community is not as rigid as folks think. The critical infrastructure community, especially in ICS, gets cast in a weird light by many outside it. My experience shows that the critical infrastructure community is just as innovative as any other industry, and I would argue more so, but it is much more careful to try to understand the potential impact and risks…as it should be.

My experience in a new technology startup: when the Dragos Security team was developing our CyberLens software we needed to test it out. Hardware was expensive and we could not afford to build out networks for every type of vendor’s ICS hardware and network communications. Although we have a lot of ICS knowledge on the team, we were all keenly aware that we are not experts in every aspect of every ICS industry we wanted to sell to. Customer feedback was (and still is) vital. To add to this, we were pressed because we were competing with larger, more established companies and technologies on a very limited budget. So, instead of trying to sell an MVP we simply launched a BETA; the BETA lasted over twelve months. How did we accomplish this? We spent $0 on marketing and sales and focused entirely on staying lean and developing and validating our technology. We made contacts in the community, educated them on what we wanted to do, advised where the technology was tested and safe to deploy, and refused to charge our BETA participants for our time or product since they were greatly helping us and keeping our costs down. In turn we offered them discounts for when our product launched, along with some of our time to educate them in matters where we did have expertise. This created strong relationships with our BETA participants that carried over when we launched our product, and they joined us as customers. We even found new customers at launch through referrals from BETA participants vouching for our company. Or more simply stated: we were overly honest and upfront, avoided hype and buzzwords, and brought value, so we were seen as fellow team members and not snake oil salesmen. I recommend more startups take this approach, even under pressure and when it is difficult to differentiate in the market.

My conclusion: the MVP model in its intended form is not a bad model. In many communities it is an especially smart model. And just because a company is using an MVP route in this space does not mean it is abusing anyone or falling into the pitfalls I listed above. But, as a whole, in the critical infrastructure community it is a process that is more often abused than used correctly, and that is damaging in the long term. Customers are not cash cows or guinea pigs – they are investors in your vision and partners. Startups should still push out technologies early rather than waiting to create a perfect product without the right feedback, but call these early pushes what they are. It is not a Minimum Viable Product; it is a BETA test of core features. Customers should not be asked to spend limited budgets on top of their time and feedback for it, nor should they be misled as to what part of the process they are helping with. You will find the community is more likely to help when they know you are being upfront, even with understandable shortcomings.