Minimum Viable Products are Dubious in Critical Infrastructure

December 4, 2015

Minimum Viable Products in the critical infrastructure community are increasingly just mislabeled BETA tests; that needs to be communicated correctly.

The concept of a Minimum Viable Product (MVP) is catching on across the startup industry. The idea of the MVP is tied closely to The Lean Startup model created by Eric Ries in 2011 and has very sound principles focused on maximizing the return on investment and the feedback gained from creating new products. Eric defines the MVP as the “version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” This reinforces the entrepreneurial spirit and need for innovation combined with getting customer feedback about a new technology without having to develop a perfect plan or product first. An MVP is also meant to be sold to customers so that revenue is generated. In short, be open to testing things out publicly earlier, pivot based on technical and market feedback, and earn some money to raise the valuation of the company and entice investors.

Personally, I believe the lean startup model as a whole is smart. I use some aspects of the model as CEO of Dragos Security. However, I chose not to use the concept of an MVP. Minimum Viable Products are dubious in critical infrastructure. I state this understanding that the notion of getting the product out the door and gaining feedback to guide its development is a great idea. And when I say critical infrastructure I’m focusing heavily on the industrial control system (ICS) portion of the community (energy, water, manufacturing, etc.). The problem I have though is that I have observed a number of startups in the critical infrastructure space taking advantage of their customers, albeit unintentionally, when they push out MVPs. This is a bold claim, I won’t point fingers, and I don’t want to come across as arrogant. But I want to make it very clear: the critical infrastructure community deals with human lives; mistakes, resource drains, and misguided expectations impact the mission.

My observations of the startups abusing the MVP concept:

  • Bold claims are made about the new technologies, seemingly out of a need to differentiate against larger and more established companies
  • Technologies are increasingly deployed earlier in the development cycle because the startups do not want to invest in the industry-specific hardware or software to test the technology
  • The correct customers that should be taking part in the feedback process are pushed aside in favor of easier-to-acquire customers because successes are needed as badly as cash; there is pressure to validate the company’s vision to entice or satisfy Angel or Seed investors
  • The fact that the technology is an MVP, is lightly (if at all) tested, and will very likely change in features or even purpose is not communicated to customers, in an apparent attempt to get a jump start on the long acquisition cycles in critical infrastructure and bypass discussions on business risk
  • Customers are more heavily relied upon for feedback, or even development, costing them time and resources, often due to the startups’ lack of ICS expertise; the startup may have some specific or general ICS knowledge but rarely does it have depth in all the markets it is tackling, such as electric, natural gas, oil, and water, although it wants to market and sell to those industries

What is the impact of all this? Customers are taking bigger risks in terms of time, untested technologies, changing technologies, and overhyped features than they recognize. If the technology does not succeed, if the startup pivots, or if the customers burn out on the process, all that’s been accomplished is significant mistrust among critical infrastructure stakeholders and a diminished desire to “innovate” with startups. And all of this is occurring on potentially sensitive networks and infrastructure which have the potential to impact safety or the environment.

My recommendations to startups: if you are going to deploy technologies into critical infrastructure early in the development cycle, make sure the risks are accurately conveyed and ensure that the customer knows they are part of a learning process for your technology and company. This invites instant push-back: “If we communicate this as a type of test or a learning process they will likely not trust our company or technology and choose to go with other more established products and companies. We are trying to help. We are innovators.” And to my straw man here, I empathize greatly. Change is needed in this space and innovation is required. We must do better, especially with regards to security. But even if we ignore the statistics around the number of failed technologies and startups, which would stress why many should never actually touch an ICS environment, I could comfortably state that the community is not as rigid as folks think. The critical infrastructure community, especially in ICS, gets cast in a weird light by many outside the community. My experience shows that the critical infrastructure community is just as innovative, and I would argue more so, than any other industry but they are much more careful to try to understand the potential impact and risks…as they should be.

My experience in a new technology startup: when the Dragos Security team was developing our CyberLens software we needed to test it out. Hardware was expensive and we could not afford to build out networks for every type of vendor’s ICS hardware and network communications. Although we have a lot of ICS knowledge on the team, we were all keenly aware that we are not experts in every aspect of every ICS industry we wanted to sell to. Customer feedback was (and still is) vital. To add to this, we were pressed because we were competing with larger, more established companies and technologies but on a very limited budget. So, instead of trying to sell an MVP we simply launched a BETA; the BETA lasted over twelve months. How did we accomplish this? We spent $0 on marketing and sales and focused entirely on staying lean and developing and validating our technology. We made contacts in the community, educated them on what we wanted to do, advised where the technology was tested and safe to deploy, and refused to charge our BETA participants for our time or product since they were greatly helping us and keeping our costs down. In turn we offered them discounts for when our product launched and offered some of our time to educate them in matters where we did have expertise. This created strong relationships with our BETA participants that carried over when we launched our product, and they joined us as customers. We even found new customers when we launched based on referrals from BETA participants vouching for our company. Or more simply stated: we were overly honest and upfront, avoided hype and buzzwords, and brought value, so we were seen as fellow team members and not snake oil salesmen. I recommend more startups take this approach even under pressure and when it is difficult to differentiate in the market.

My conclusion: the MVP model in its intended form is not a bad model. In many communities it is an especially smart model. And just because a company is using an MVP route in this space does not mean it is abusing anyone or falling into the pitfalls I listed above. But, as a whole, in the critical infrastructure community it is a process that is more often abused than used correctly and it is damaging in the long term. Customers are not cash cows and guinea pigs – they are investors in your vision and partners. Startups should still push out technologies earlier rather than waiting to create a perfect product without the right feedback, but call these early pushes what they are. It is not a Minimum Viable Product; it is a BETA test of core features. Customers should not be asked to spend limited budgets on top of their time and feedback for it, nor should they be misled as to what part of the process they are helping in. You will find the community is more likely to help when they know you are being upfront, even with understandable shortcomings.

The ICS Cybersecurity Challenge is Live for Registrations

November 2, 2015

Over the past few weeks I’ve been working with Jon Lavender (Dragos Security CTO) to get the website and challenges ready for the ICS Cybersecurity Challenge. The SANS Institute is sponsoring the challenge and the purpose is to promote awareness for ICS cybersecurity while offering an opportunity for folks to test their knowledge and skills. The challenge can be found here and the registrations are now officially open. On November 15th the challenge files will be released to registered members and the contest will run through December 28th.

The challenge should feel familiar to anyone who used to participate in the DC3 challenges. I always thought very highly of those challenges and hated to see them go. I took a page out of their book to build this challenge and structure the questions as 100, 200, and 300 level challenges. These represent brand new, novice, and intermediate levels, making the challenge accessible to a wide audience. There will be knowledge-based questions regarding ICS security and standards as well as technical lab-based questions where participants will download challenge files and search for the answers. As an example, I’ve infected a Windows based Human Machine Interface (HMI) and taken a memory dump of the system. Participants will have to identify the malware on the system for one question and, for another, create a YARA rule that alerts on the malware but not on a baseline of the system. In total there will be around 20 challenges for folks to participate in.
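For readers unfamiliar with YARA, here is a minimal sketch of the kind of rule-and-test loop that question implies, using the yara-python bindings. The rule strings, byte pattern, and file names are hypothetical placeholders, not artifacts from the actual challenge files; a real answer would be built from indicators found in the memory dump.

import yara

# Hypothetical rule: the strings below are stand-ins for whatever
# artifacts a participant actually carves out of the infected dump.
RULE_SOURCE = r'''
rule suspected_hmi_malware
{
    strings:
        $name = "EvilService.dll" nocase   // placeholder artifact
        $code = { 6A 40 68 00 30 00 00 }   // placeholder byte pattern
    condition:
        any of them
}
'''

rules = yara.compile(source=RULE_SOURCE)

# The rule must fire on the infected memory dump...
assert rules.match("hmi_infected.dmp"), "rule failed to alert on the malware"
# ...and must stay quiet on the clean baseline of the same system.
assert not rules.match("hmi_baseline.dmp"), "rule false-positives on the baseline"
print("rule alerts on the malware but not the baseline")

Testing against both the infected image and the clean baseline is the important habit here: a rule that also matches the baseline would be useless as an alert.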

The idea is that this first year will be a free-for-all to gauge interest from the community. Next year there will be two different structures, including a category for students and an opportunity to compete as a team. As long as this year goes well, next year will also see some more advanced challenges (400 level and potentially one or two 500 level challenges). Participants will be able to submit their answers to the challenges through the website where I will review and assign points for each challenge. The overall winner will be announced early Jan 2016 and receive prizes (yay!). I’m finalizing the prize list now – SANS is being very gracious and donating all the prizes this year.

There will also be a second phase of the challenge for those also attending the SANS ICS Summit in Orlando, Florida next Feb. There will be flags spread throughout the conference and in technical challenges such as interacting with the ICS Wall. A second winner will be announced at the Summit and crowned the overall winner – with additional prizes (yay!).

Hopefully this presents a good opportunity for folks to learn more about ICS cybersecurity and test their skills. Please feel free to participate regardless of your skill level and tell others about it (especially high school and college students). The ICS cybersecurity community is just that – a community – and it takes us all getting better to raise the bar for defense.

Security Awareness and ICS Cyber Attacks: Telling the Right Story

October 7, 2015

This was first posted on the SANS ICS blog here.


A lack of security awareness and the culture that surrounds security is a widely understood problem in the cyber security community. In the ICS community this problem impacts operations and our understanding of the scope of the threats we face. A recent report by Chatham House titled “Cyber Security at Civil Nuclear Facilities” shined a light on these issues in the nuclear industry through an 18 month long project.

The report highlights a number of prevailing problems in the nuclear sector that make security more difficult; the findings do not represent all nuclear sector entities but look at the sector as a whole. Friction between IT and OT personnel, the prevailing myth that an air gap is an effective single security solution, and a lack of understanding of the problem are all cited as major findings of the research group.

The group recommends a number of actions which need to be taken, and these can be mapped along the Sliding Scale of Cyber Security. A big focus is placed on better designing the systems to have security built into them, which falls under the Architecture phase of the scale. Another focus was on leveraging whitelisting and intrusion detection systems as well as other Passive Defense mechanisms instead of just an air gap. Lastly, one of the most significant recommendations was towards getting more personnel trained in cybersecurity practices (SANS offers ICS410 and ICS515 to address these types of concerns) and taking a proactive approach versus a reactive approach towards finding threats in the environment — this recommendation maps to the Active Defense component of the scale which focuses on empowering analysts and security personnel to hunt for and respond to threats.

One of the more interesting major recommendations put forth by the report was:
“The infrequency of cyber security incident disclosure at nuclear facilities makes it difficult to assess the true extent of the problem and may lead nuclear industry personnel to believe that there are few incidents. Moreover, limited collaboration with other industries or information-sharing means that the nuclear industry tends not to learn from other industries that are more advanced in this field.”

At SANS we have consistently observed this as an issue in the wider community and try to bring the community together with events such as the ICS Summit to help address the concern and promote community sharing. No single event or effort alone though can fix the problem. A lack of information sharing and incident disclosure has led to a false sense of security while also allowing fake or hyped up stories in news media to become the representation of our industry to people in our community and external to it.

This infrequency of cyber security incident disclosure can be observed in multiple places. As an example, a 2014 article by Inside Energy compiled 15 years of incident reporting to the Department of Energy about electric grid outages and noted that there were 14 incidents related to a cyber event. The earliest cyber attack was identified in 2003, but then there was a lack of events until 2011-2014, which made up the other 13 cases. It should be noted that the reporting threshold for a cyber attack was any type of unauthorized access to the system including the hardware, software, and data.

We in the industry need better data so that we can more fully understand and categorize attacks along models such as the ICS Cyber Kill Chain to extract lessons learned. What is revealing about the Department of Energy data though is the lack of visibility into the ICS networked environment. As an example, in the data set there is a measured understanding of impact for physical attacks, fires, storms, etc., showing great visibility into the ICS as a whole, but for every single cyber event the impact was labeled as either zero or unknown; that, in combination with no data for 2003-2011, is less representative of the number of events and more representative of missing data. It has become clear over the years that a significant number of ICS organizations do not have personnel that are trained and empowered to look into the network to find threats. This must change, and the findings must be shared, anonymously and appropriately, with the community if we are ever to scope the true threat and determine the appropriate resource investments and responses to address the issues.

The ICS community has a unique opportunity to have our story told by our ICS owners, operators, and security personnel so that we understand and address the problem ourselves. Valuable compilations of data such as that by Inside Energy using the Department of Energy reports, as well as the Chatham House report, help reinforce this need. Without involvement from the community, the ICS security story will be told by others who may not have the appropriate experience to make the right conclusions and offer helpful solutions. The need for cyber security will influence change in the ICS community through national level policies, regulations, vendor practices, and culture shifts – it is imperative that the right people with real data are writing the story that will drive those changes.

Article on German Steel Mill Attack “Inside Job” is Just Hype

September 10, 2015

On Sept 9, 2015 an article posted on Industrial Safety and Security Source ran with the headline “German Steel Mill Attack: An Inside Job”. The first statement of the article reads “The attack on a German steel mill last year was most likely the result of an inside job” – this is blatantly false. There simply isn’t a lot known about the attack, as Germany’s BSI has never publicly released follow-on details about the incident after its 2014 report, nor has the victim. There is no way to assert any theory as being “most likely”, and unfortunately this article used what has become a common practice in our industry – anonymous sources and other false reports.

As an example, the article tries to link Havex to the incident without any proof. In talking about the malware it states: “Trojanized software acted as the infection vector for the Havex virus, according to sources interviewed by ISSSource.” The problem with this is that there was no reason to use anonymous sources here or to act like this is a revelation. It is well known through the analysis of actual samples that Havex used trojanized versions of software. No sources are needed to reveal what a simple Google search turns up. But it makes the story seem more mysterious and tantalizing to use such sources.

The article then compares the steel mill attack to Shamoon, claiming that “former CIA officials” said that attack was also an insider threat. It directly follows this by stating that an anonymous congressional source had a suspicion that an employee at the steel mill on the payroll of a foreign government was responsible. Following this is a quote from an individual stating “I have heard indirectly that the attack was attributed to the Russians.” And of course following this there is talk of “cyber war”, more anonymous CIA officials, and references to the cyber attack on the BTC Turkey pipeline in 2008 – which the author of the article failed to realize has already been proven to be a false story. Reporters need to realize that incident responders with first hand experience, engineers and personnel on site during the incident, or people directly involved are impressive sources – “senior intelligence officials” or “congressional sources” are far from impressive – they are some of the most removed personnel from what happened, especially in an ICS incident, and thus the information they have is very likely unreliable.

Statements in the article such as an anonymous CIA official saying “In all likelihood, it was the Havex RAT that was the culprit, but we lack proof of this” should have killed this article before it went to print. Any time all the sources in an article are trading in gossip and rumors and admitting there is no proof, it’s time to realize there’s nothing to publish.

The reason for me writing this blog though is that there are a large number of people in the industrial control system community who will read this article. In many cases it will be their only insight into this story. And not only will they be misinformed about the steel mill incident but they’ll also now believe the BTC pipeline story was real. There is a real threat in the industrial control system community – the threat of hyped-up and false news stories misleading the community.

Hype has a lasting effect on any community. In the ICS community it pushes folks to not want to share data that we need to understand the threat, in fear that it will be twisted into click bait. It encourages organizations to invest resources against threats that aren’t real instead of the numerous well documented threats and issues in the community that do need investment. It forces non-security community members to tune out everyone in the security community when they realize they’ve been misinformed. The ISSSource article is not the first or worst case of hype seen in the community but it is another addition in a long line of articles that threatens good discussion, measured responses, and proper resource investments that could otherwise help ensure the safety, reliability, and security of infrastructure.


Reflecting on the Leadership I Saw in the Air Force

August 1, 2015


Today is the first day in my adult life as a civilian. I loved the people in the Air Force but there were numerous reasons it was time for me to leave. I will continue to work with my peers in the AF to write and try to influence positive change but I will keep my reasons for leaving to myself. It seems to be particularly common nowadays to complain publicly on the way out – maybe as a way to hope for change but also to cope with leaving. It’s common enough that the DuffelBlog (a joke news site) posted a mock opinion piece by the “greatest officer ever” leaving the military. I personally far more enjoyed the mock reply from the “Joint Chiefs” to that article. This is not to say that those in uniform do not deserve to publicly ask for change – those public statements can be a highly debated thing to do but serve as a catalyst for discussion that is, in my experience, vastly positive. It is the right of the service member to do so, as long as it comes from a place of hope rather than cynicism and is not done while exiting. I took that luxury multiple times while in the Air Force.

Instead of talking about the negatives though, I want to use this blog to talk about the positives: the leadership I saw in the Air Force. I often get asked what went right for me. People see my skills or accolades and wonder what the Air Force did for me or what occurred to make me more successful than I was when I entered. I would argue that the Air Force itself didn’t do anything. As one of my commanders once told me, “Big Air Force doesn’t know who you are, but your squadron does, and we always take care of our own.” What helped me was passion and working hard – but it was the leadership within the Air Force that enabled and empowered that far more than I could have achieved on my own. There were many leaders I met in my career: civilians, contractors, other services, allied countries’ members, and chiefly my enlisted troops. But a few leaders stuck out the most and had the most influence on me. I have extracted a few lessons below about the leaders I met, with personal stories from my journey to accompany them. What they did for me through their leadership was develop who I am today. Hopefully, this will serve as a good piece for other leaders looking to inspire and lead in the Air Force cyber community.

Lesson 1: Tailor Rewards to Your Troops

There is a dangerous myth in the community today that more money is what is needed to encourage retention in the Air Force cyber community. The troops did not sign up for the paycheck. What people signed up for is contributing to an exciting and fulfilling mission. In the Air Force cyber community many of the young troops want personal growth and to learn new skills that they can apply. When they are not grown or allowed to flex their talents, though, stagnation occurs and they will eventually leave. Sometimes leaders try to reward their folks with things such as Quarterly or Annual awards. For many, though, those awards, while nice, are not true rewards.

While serving in the intelligence community I had an Air Force commander who understood this – and understood me. While he and his commanders rewarded me with awards and stratifications, he knew that these things, while good for my career, were not what interested me. What interested me was leading troops and personal growth. My mother and father are retired Senior Master Sergeants (the second highest enlisted rank). I admire them. I grew up around the enlisted corps and my dream of being an officer was largely based around being able to serve the enlisted as a leader. But I also really enjoyed intel and cyber and wanted to remain technical. So what my commander did was tailor the rewards he wanted to give me for good work to who I was, and he continually used those to grow me. He made me a Flight Commander ahead of when I normally should have been one, which gave me troops to directly lead. He understood though that I wanted to remain technical, so he allowed me to keep the role of an analyst and to take the position of a technical lead in the agency we supported. He understood me. He didn’t shy away from “over-tasking” me because he understood the duality of my jobs balanced my duties. And he knew he could leverage what I valued to reward me and keep me in line. And I would have served with him anywhere for it.

Lesson 2: Truly Lead by Example

We often hear the phrase “lead by example.” Sometimes folks believe that means running fastest in physical training, or having a really sharp looking uniform. And while those things help, it meant more to me when I saw my commanders lead by doing the mission better than I thought was possible given their responsibilities as commanders. My squadron commander in my intelligence squadron and the two group commanders I had led by example. My commanders were experts in their fields even while doing diverse intelligence work and sticking to the Air Force’s passion for “more with less” (i.e. under-resourced but with more missions). My squadron commander, as an example, did not just let his Flight Commanders do technical missions while also being AF leaders; he did the same himself. The responsibilities and time commitments of a commander are far more than 8 hours a day even without doing the mission. But this AF leader led the squadron and picked up a position to take part in the mission directly. He spoke multiple languages, knew the ins and outs of every operation, and challenged all of his subordinates as we tried to keep up with him.

As a small example, I once had to analyze a threat to critical systems that drew the attention of senior Air Force leaders. The only tangible thing I had was a packet capture that I analyzed in Wireshark. My squadron commander wanted to make sure he knew what was going on so that he could support me as I had to brief the senior leaders on this threat and why it was such a big deal. He took an interest and asked about Wireshark. I jokingly told my squadron commander, who was an intelligence officer and had never done anything ‘cyber’, that if he wanted to learn he could start with Laura Chappell’s Wireshark Network Analysis – an 800 page book. So he did. He went home and over a week read the entire book, downloaded samples and analyzed them, and by the time I had to brief the senior leaders again he had come to the exact conclusion I had about the threat. I was shocked. He led by example.

The stories are too many to recount here. My group commanders were the same. Both of them were experts in their fields and leveraged that to directly support the mission instead of “just” being group commanders. They knew everything that was going on, did all the normal morale work, the normal paperwork drills and exercises, and still found time to stay technical and further themselves. One of my group commanders, as an example, continually published articles challenging Air Force mindsets on weapon systems and tactics. This was right when I was aspiring to become an academic and he kept challenging me in this regard. Any time I published an article or had some accolade that would have inflated my ego – he had already done it or was doing it again. He kept me in check. I could have become a pompous young officer – he made sure I was guided in my growth and forced humility on me. Through example and not words he reminded me that I was better than no one but was simply contributing to the work so many others were also doing. It was leadership by example when I needed it most.

Lesson 3: Top Cover Actually Means Something

Top cover in the military means that if senior leaders get angry or want to punish one of your subordinates – you cover them. You take the heat. I don’t know how many times I have heard AF officers claim to provide this, but I found it rare that many truly meant it. My commanders did though, and my group commander embraced it. At the time I had just written my “Failure of Air Force Cyber” article that quickly caught the attention of junior and senior ranks around the Air Force. Enlisted and junior officers emailed me daily with support, thanking me for the article – but daily I also received threats from senior officers. I don’t know how many times I was threatened with Article 15s or career-lasting impacts from folks I had never heard of before. When my squadron commander found out, he tried to ensure that everyone who wanted to threaten me came through him first. He and my group commander started fielding those emails and phone calls. That is an uncomfortable position when the person upset is a three star general.

I remember vividly my group commander pulling me into his office over the article and some of the other pieces I had written. I thought I was dead. Instead of yelling at me or counseling me he mentored me. He showed his support of what I had written and echoed his own experiences writing unpopular but needed articles. He related to me. He gave me guidance on how to come off “less blunt” and how to more appropriately get my points across. He then also made sure that I knew there were senior leaders out there that supported what I had said. He printed off an email from a senior AF leader and handed it to me to read – it was the general offering support to me and what I had written. The letter encouraged me not to let the “REMFs” (an acronym that I learned then and love to this day) get me down and to keep pushing forward. I know it was my group commander that prompted that letter from the general, and more importantly he didn’t just forward me the email. You see, he not only provided top cover for me and support when I needed it the most – but he protected the senior leader as well. By handing me a printed copy of the email and then taking it back he ensured that I wasn’t going to get overexcited and start forwarding around an email that was definitely meant to be private. It was an extremely classy move by someone who truly understood what top cover meant.

Lesson 4: Leadership Requires Risk

I was truly lucky with the leaders I had while I was in the intelligence community. My commanders up to the highest level were all on the same page and pushed the mission forward together. It showed in the morale of the organization. I did not have favorites amongst my commanders – they were all who I needed them to be at the right time. But the one who changed my life the most and served as my closest mentor was my first group commander. His motto to the group and his squadrons was to take risks. He encouraged people to make mistakes. This was exactly when the AF culture was discouraging that very attitude. Taking risks in the Air Force could quickly lead to officers being forced to leave. There was a zero tolerance type atmosphere, even towards innovation in the mission. He built a culture opposite of that.

When I arrived at my first squadron I did not fit in. I was this young “cyber officer” and “cyber” had just become a thing in the Air Force – previously it was all “comm” and “communications officers.” While I valued the mission, the captain who was my boss let me know that he hated me both because I was a “cyber” officer and because I had gone to the Air Force Academy. I had heard there were some rivalries in the Air Force depending on where you received your commission – but I thought that it was far too petty to be realistic. He attempted to make my life hell and it was only through the leadership of amazing enlisted members that I found it worthwhile to come in to work each morning. I sought outside fulfillment and had begun publishing papers and giving presentations at conferences about “cyber” since I couldn’t mutter the word at work. This caught the attention of my group commander.

My group commander called me into his office and talked with me about my papers and presentations. To my surprise he had not only read everything I had written but had his own experiences and thoughts to contribute. It was quickly evident that while I was calling this thing “cyber” he had been doing it for years as “intel” in its various forms. He confided in me stories and experiences that made me understand more about the history of where we had been as a community and where we were going. But he also believed in taking risks. He began heavily investing in our intelligence community partners’ missions and efforts. He used Air Force troops to bolster the joint fight and worked heavily with his Army and civilian peers to ensure that we were doing our best to serve the American people – not our own efforts. For context, it is unheard of for leaders to give up people. You often hear in the Air Force how people protect their “rice bowls”, meaning funding and personnel. Both are hard to come by. This group commander would give up both in a heartbeat if he thought it would help the mission.

He ended up asking me one day, in a Matrix-styled “red or blue pill” type choice, if I wanted to go deeper into the Intelligence Community. It would require me to move to some place I had never heard of before and do a mission I was unfamiliar with. He assured me I was a perfect fit – so I agreed. It set me on the path for everything I have today. Without that move I wouldn’t have met Dr. Thomas Rid and started my PhD, I wouldn’t have needed to work after hours to develop the skills I have today, and I would never have met my beautiful fiancée. The problem with all of this though – is I’m fairly confident the move and assignment weren’t allowed. I hadn’t served my allocated time at the Air Force base I was at, the Air Force wouldn’t support me moving or give me the funds to do so, there was technically no position there for me to fill, and I’m certain a significant number of rules were broken to get me to that new team. But my group commander understood what I needed for my personal growth, he understood the importance of that mission and the team out there, he believed I had the right skills to help, and he embraced risk instead of shying away from it. It may not seem like much to the average reader but he literally risked his career and his command, not just in my case but in many others, to ensure that the mission succeeded – not just that it looked good on paper for policy and reports.

Conclusion

There are two things that stick out in writing this that I feel are worth noting. The first is a common theme amongst these four lessons. Each leader I admired and that helped me grow did what they said. It wasn’t a slide in a PowerPoint presentation, a quote in a signature block in an email, or just words over beer in a bar. Whether it was in taking risks, providing top cover, leading by example, or rewarding troops – whatever they said they would do, they did. They did it so well that it was at times shocking that they kept to their word when any reasonable person would understand there should be exceptions granted (like not providing top cover to a young officer who decided to publish his thoughts Air Force wide).

The second is that none of my best leaders in the Air Force cyber community were cyber officers. That is not meant as a slight – I know firsthand that there are amazing cyber officers leading day in and day out as well – it’s just that, by the nature of spending most of my career as a “Cyber Warfare Operations Officer” in the Intelligence Community, this is sampling bias at its best. Usually, I was the only “cyber” guy on teams made up entirely of intelligence members. I make this point though because in the Air Force there is a serious problem with “rice bowls” and “tribes.” It can oftentimes hamper the mission. But the people who took care of me the most, who grew me the most, weren’t in my “tribe” as far as career fields are concerned. They just led me because it was the right thing to do. They led me because they believed in me and wanted to empower me. They led me even when it wasn’t beneficial to them, because it was beneficial to the bigger Air Force and its mission. That’s leadership.

As I leave the Air Force, this piece is my therapeutic writing – but not the normal complaint-driven article that I mentioned as common at the start of this piece. Instead, my exit thoughts are only of pride and gratitude. The Air Force cyber community needs help. But it has amazing people and will succeed despite adversity. And I feel I have new adventures in front of me and limitless opportunities because of those experiences. I will miss it dearly. I am forever grateful. I owe everything to the leaders I met while I had the privilege of serving.

Data, Information, and Intelligence: Your Threat Feed is Not Threat Intelligence

July 9, 2015

This was first posted on the SANS Forensics blog here.


Threat feeds in the industry are a valuable way to gather information regarding adversaries and their capabilities and infrastructure. Threat feeds are not intelligence though. Unfortunately, one of the reasons many folks become cynical about threat intelligence is because the industry has pushed terminology that is inaccurate and treated threat intelligence as a solution to all problems. In a talk I gave at the DFIR Summit in Austin, Texas I compared most of today’s threat intelligence to Disney characters — because both are magical and made up.

When security personnel understand what threat intelligence is, when they are ready to use it, and how to incorporate it into their security operations, it becomes very powerful. Doing all of that requires a serious level of security maturity in an organization. The biggest issue in the industry currently is the labeling of data and information as intelligence and the discussion of tools producing intelligence.

Figure 1: Relationship of Data, Information, and Intelligence (from Joint Publication 2-0)

One of the commonly referenced works when discussing intelligence is the U.S. Department of Defense’s Joint Publication 2-0: Joint Intelligence. Intelligence has been around a lot longer than the word ‘cyber’ and it’s important to look to these kinds of sources to gather important context and understanding of the world of intelligence. One of the graphics (Figure 1) presented in the publication shows the relationship of data, information, and intelligence. If the cyber threat intelligence community writ large understood this single concept it would drive a much better discussion than what is sometimes pushed through marketing channels.

Every organization has an operational environment. The physical location of the organization, the networked infrastructure they use, the interconnections they have with other networks, and their accessibility to and from the Internet are all portions of their operational environment. This operational environment contains more data than could ever be fully collected. Many organizations have difficulty collecting and retaining packet capture for their environment for more than a few days (if at all), let alone all of the data. So collection efforts are often driven by tools that can reach into the operational environment and get data. On limited resources it usually takes analysts understanding where the most critical data is located and collecting it using the best tools available. Tools are required to make the most out of data collection efforts. The data in this form is raw.

This raw data is then processed and exploited into a more usable form. As an example, the packet capture that is run against an intrusion detection system generates information in the form of an alert. There should be more data than information. The information may have a sample of the data, such as the portion of the packet capture that matched the alert, and it is made available to the analyst with some context even if only “this packet matched a signature thought to be malicious”. Information can give you a yes or no answer. Another example would be an antivirus match against malware on a system. The raw data, the malware’s code, is matched against a signature in the antivirus system to generate an alert. This alert is information. It answers the question “is malware present on the system”. The answer could be incorrect, maybe the match was a false positive, but it still answered a yes or no question of interest. Tools are not required to make information but it is very inefficient to create information without tools. Most vendor tools that make claims of producing “threat intelligence” are actually producing threat information. It is extremely valuable and necessary for making the most of analysts’ time — but it is not intelligence.
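To make the distinction concrete, here is a toy sketch in Python using scapy (an illustration, not any vendor's implementation) of raw data becoming information: a byte signature run across a packet capture produces an alert that answers a yes/no question. The capture file name and signature bytes are made up for the example.

from scapy.all import rdpcap, Raw

SIGNATURE = b"\x90\x90\x90\x90"  # hypothetical "known malicious" byte pattern

packets = rdpcap("capture.pcap")  # raw data: every packet in the capture
alerts = [
    pkt for pkt in packets
    if pkt.haslayer(Raw) and SIGNATURE in bytes(pkt[Raw].load)
]

# Information: a yes/no answer, with a sample of the underlying data attached.
if alerts:
    print(f"ALERT: signature matched {len(alerts)} packet(s)")
else:
    print("no alert: signature not present in this capture")

Note that nothing in this output is intelligence; it only tells an analyst whether a match did or did not occur.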

Various sources of information that are analyzed together to make an assessment produce intelligence. Intelligence will never answer a yes or no question. The nature of doing intelligence analysis means that there will only be an assessment. As an example, if an intelligence analyst takes a satellite photo and notices tanks on the border of Crimea they can generate information that states that the tanks are on the border. It answers a yes or no question. If the intelligence analyst takes this source of information and combines it with other sources of information such as geopolitical information, statements from political leaders, and more they could then make an assessment that they state with low, medium, or high confidence that an invasion of Crimea is about to take place. It is impossible to know the answer for sure — there cannot be a yes or no — but the analysis created an intelligence product that is useful to decision makers. There should also be far more information than intelligence; intelligence is a refined product and process. In the cyber field we would make intelligence assessments of adversaries, their intent, potential attribution, capabilities they may be seeking, or even factors such as their opportunity and probability of attacking a victim. The intelligence can produce useful knowledge such as the tactics, techniques, and procedures of the adversary. The intelligence can even be used for different audiences which usually gets broken into strategic, operational, or tactical level threat intelligence. But it is important to understand that no tool can produce intelligence. Intelligence is only created by analysts. The analysis of various sources of information requires understanding the intelligence needs, analysis of competing hypotheses, and subject matter expertise.

By understanding the difference between data, information, and intelligence, security personnel can make informed decisions on what they are actually looking for to help with a problem they face. Do you just want to know if the computer is infected? Threat information is needed. Do you just want raw data related to various threats out there? Threat data is needed. Or do you want a refined product that makes assessments about the threat to satisfy an audience’s defined needs? That requires threat intelligence. This understanding helps the community identify what tools they should be acquiring and using for those problems. It helps guide collection processes, the types of training needed for security teams, how the security teams are going to respond, and more.

There is no such thing as threat intelligence data, there are no tools that create intelligence, and there is limited value in investing in threat intelligence for organizations that do not understand their operational environment. But when an organization understands the difference between data, information, and intelligence, and understands its environment well enough to identify what constitutes a threat to it, then threat intelligence is an extremely useful addition to security. I am a big believer in cyber threat intelligence when it is done correctly. It’s why I worked with Mike Cloppert and Chris Sperry to co-author SANS FOR578 — Cyber Threat Intelligence. It is unlikely though that your threat feed is really threat intelligence. But it may be exactly what you’re looking for; know the difference so that you can save your organization time and money while contributing to security.

Three Takeaways from the State of Security in Control Systems Survey

July 7, 2015

This was first posted on the SANS ICS blog here.


The State of Security in Control Systems Today was a SANS survey conducted with 314 ICS community members and released on June 25th. The whitepaper can be found here and the webcast here. A few things stuck out from the survey that I felt appropriate to highlight in this blog.

  1. Energy/Utilities Represent

Energy/Utilities made up the largest share of respondents at 29.3% in total. While the variables impacting this cannot be narrowed down, it is likely that pressure from organizations such as NERC, a heavy focus on energy protection in U.S. national media and politics, and market interest have at least driven security awareness. We also see an energy bias in other reporting metrics such as the ICS-CERT’s quarterly reports. This is both a good thing and an area for improvement. It is great to see the energy sector get heavily involved in events such as this survey, in training conferences, and in major events like the electric sector’s GridEx. Personally, I’ve interacted with groups such as the ES-ISAC and been extremely impressed. Getting data from this segment of the community helps us understand the problem better so that we can all make the appropriate investments in security.

Takeaway: We really need to do more to reach the other communities. Energy tends to be a hot topic but it is far from the only industry that has security issues. Each portion of the ICS community, from water to pharmaceuticals, faces similar issues. In the upcoming years hopefully reports like this SANS survey will be able to capture more of those audiences. I feel this is likely given the increased awareness in other industries I have seen even in the last few years.


  2. IT/OT Convergence Seen as 2nd Most Likely Threat

The respondents identified external threats as the most significant threat vector to their ICS. This makes sense given the increased understanding in the community regarding external actors and the cyber security of operations. Interestingly, however, the second top threat identified was the integration of IT into control system networks. I really liked seeing this metric because I too believe it presents one of the largest threat vectors to operations. ICS-targeted nation state malware tends to get the most media attention. BlackEnergy2, Stuxnet, and Havex were all very concerning. However, it is far more likely on a day to day basis that not architecting and maintaining the network correctly will lead to decreased or stopped operations. The integration of OT and IT also presents a number of challenges with incidental malware that, while non-targeted, presents a significant risk, as has been documented numerous times when important systems halt due to accidental malware infections such as Conficker.

Takeaway: The ICS community needs to be aware of external threats and realize that they pose the most targeted threat to operations. However, it was great to see that issues revolving around the integration of IT and OT are accurately seen as a concern. Architecting and maintaining the OT network correctly, to include safe and segmented integration and structuring such as the Purdue model, and ultimately reducing the risks associated with IT/OT convergence will go a long way for the security of the environment. The type of efforts required to reduce the risk of IT/OT convergence are also the same foundational efforts that help identify, respond to, and learn from external threats and threat vectors.


  3. Lack of Visibility is Far Reaching

A significant portion of the group, 48.8%, stated that they simply did not have visibility into their environment. This could mean a number of things, including IT and OT not having visibility into each other’s processes and environment, a lack of understanding of the networked environment, an inability to collect data such as network traffic or logs, and a lack of a plan to pull together all stakeholders when appropriate. Each of these has been observed and continually documented as problems in the ICS community. What is interesting about this single metric though is that it impacts most of the other metrics. For example, respondents who do not have visibility into their environment will not be able to fully identify threats in their environment; 48.8% stated that they were not aware of any infiltration or infection of their control systems. Additionally, when a breach occurs it is difficult to respond correctly without visibility; 34% of the participants who had identified breaches stated that they had been breached multiple times in the last 12 months.

Takeaway: Nearly half of the respondents to the survey indicated that they did not have visibility into their environments. This makes it incredibly difficult to know if they have been impacted by breaches. It also makes it difficult to scope a threat and respond appropriately. I would bet that a significant portion of those participants who indicated they were breached multiple times had links between the breaches that they were unaware of due to a lack of visibility. Re-infections that occur due to not fully cleaning up after a breach are common in the IT and OT communities. ICS community members need to ensure that they are developing plans to increase their visibility. That means including all stakeholders (in both IT and OT), ensuring that at least sampling from the environment can be taken in the form of logs and network traffic, and talking with vendors to plan better visibility into system upgrades and refreshes. For example, a mirrored port on a network switch is a great resource to gain invaluable network traffic data from the OT environment that can help identify threats and reduce the time and cost of incident response. A minimal sketch of that kind of sampling follows.
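This is a sketch only, assuming a monitoring workstation wired to the mirrored port with Python and scapy available; the interface name and sample size are placeholders. It tallies which hosts are talking to which as a crude first look at the traffic.

from collections import Counter
from scapy.all import sniff, IP

talkers = Counter()

def tally(pkt):
    # Count conversations by source/destination address pair.
    if IP in pkt:
        talkers[(pkt[IP].src, pkt[IP].dst)] += 1

# "eth1" is a placeholder for the interface wired to the mirror/SPAN port;
# sniffing requires capture privileges on the workstation.
sniff(iface="eth1", prn=tally, store=False, count=5000)

for (src, dst), count in talkers.most_common(10):
    print(f"{src} -> {dst}: {count} packets")

Even a crude summary like this gives defenders a starting baseline; an HMI suddenly conversing with an unfamiliar address is exactly the kind of abnormality worth investigating.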

Follow on: To help with the discussion of visibility into the environment I will post two entries to the SANS ICS blog in the upcoming weeks. They will be focused on two of the beginning labs in SANS ICS515 — Active Defense and Incident Response. The first will cover using Mandiant’s free incident response tool, Redline, and how to use it in an ICS to gather critical data. The second will cover using some basic features in Wireshark to sample network traffic and identify abnormalities.

Final Thoughts

I was very impressed with the participants of the SANS survey. Their inputs help give a better understanding of the community and its challenges. While the takeaways above focus on areas for improvement, it is easy to look at the past few years and realize that security is improving overall. Security awareness, trained security professionals, and community openness are all increasing. We have a long way to go as a community but we are getting better. However, there are many actions that can and should be taken today to drastically help security. First, we must be more open with data and willing to participate in spot checks, like surveys, on the community. Secondly, wherever there is a lack of a plan forward, such as IT/OT convergence strategies, the appropriate stakeholders need to meet and discuss with the intent to act. Thirdly, incidents are happening whether or not the community is ready for them. Appropriate visibility into the environments we rely on, incident response plans, and identified personnel to involve are all requirements. We can move the bar forward together.

Barriers to Sharing Cyber Threat Information Within the Critical Infrastructure Community

June 28, 2015

This was first posted on the Council on Foreign Relations’ blog Net Politics here.


The sharing of cyber threat data has garnered national level attention, and improved information sharing has been the objective of several pieces of legislation and two executive orders. Threat sharing is an important tool that might help tilt the field away from adversaries who currently take advantage of the fact that an attack on one organization can be effective against thousands of other organizations over extended periods of time. In the absence of information sharing, critical infrastructure operators find themselves fighting off adversaries individually instead of using the knowledge and experience that already exists in their community. Better threat information sharing is an important goal, but two barriers, one cultural and the other technical, continue to plague well intentioned policy efforts. Failing to meaningfully address both barriers can lead to unnecessary hype and the misappropriation of resources.