
Cyber Security Dispatch

Andy Anderson & CSD Staff

Human-centric Security - An Interview with Richard Ford, Chief Scientist at Forcepoint

JUL 13, 2018 · 1 MIN

Description

<h2>Interview with Richard Ford, Chief Scientist at Forcepoint:</h2><h3>Cyber Security Dispatch:</h3><p><strong>Show Notes:</strong><br />In this episode of the Cyber Security Dispatch, we talk with Dr. Richard Ford, the Chief Scientist of Forcepoint. Dr. Ford has been in the industry for quite a while and has seen it through the lens of many different roles, which gives him a grounded perspective on the entire business. From that perspective he talks about the problems that currently plague the security space and how some of them are the exact same ones we had 25 years ago; Dr. Ford argues that before people get into complex security concepts such as resilience, we ought to nail down the basics that have been put off for 25 years. We continue with how the industry demands too much in expecting the entire population to think in a security-oriented manner; instead, we should be moving toward security systems that accommodate human habits rather than the other way around. We end on what human-centric implementations of security look like and even hear an example from Dr. Ford.</p><p><strong>Key Points From This Episode:</strong></p><ul><li><p>Richard’s path in security that led to his current perspective on cyber-security</p></li><li><p>How we should really be defining “resilience”</p></li><li><p>Why, in order to move on in security, a lot of us still need to master the basics</p></li><li><p>The origins of ransomware, and whether the problems we deal with nowadays are really new</p></li><li><p>Is the security space ready for things like resiliency?</p></li><li><p>The large execution gap between those who do security well and those who don’t</p></li><li><p>What cyber-security can learn from car safety features</p></li><li><p>How to take a human-centric focus in cyber-security</p></li><li><p>Top five ways to look at cyber-security from a “safety” perspective</p></li><li><p>Human-centric security implementations created by Forcepoint</p></li><li><p>Richard’s experiment on how students react to security warnings</p></li></ul><p><strong>Links Mentioned in Today’s Episode:</strong><br />Dr. Richard Ford —&nbsp;<a href="https://www.forcepoint.com/company/biographies/dr-richard-ford-0">https://www.forcepoint.com/company/biographies/dr-richard-ford-0</a><br />Dr. Richard Ford LinkedIn —&nbsp;<a href="https://www.linkedin.com/in/dr-ford/">https://www.linkedin.com/in/dr-ford/</a><br />RSA —&nbsp;<a href="https://www.rsaconference.com/">https://www.rsaconference.com/</a><br />Forcepoint —&nbsp;<a href="https://www.forcepoint.com/">https://www.forcepoint.com/</a><br />Virus Bulletin —&nbsp;<a href="https://www.virusbulletin.com/">https://www.virusbulletin.com/</a><br />Sapir–Whorf Hypothesis —&nbsp;<a href="https://en.wikipedia.org/wiki/Linguistic_relativity">https://en.wikipedia.org/wiki/Linguistic_relativity</a><br />Morris Worm —&nbsp;<a href="https://en.wikipedia.org/wiki/Morris_worm">https://en.wikipedia.org/wiki/Morris_worm</a></p><p><strong>Introduction:</strong><br />Welcome to another edition of Cyber Security Dispatch; this is your host Andy Anderson. In this episode, Human-centric Security, we talk with Richard Ford, Chief Scientist of Forcepoint, about what human-centric security is and why we should move toward its implementation. We also talk about getting back to basics and why that is a necessary movement. Here’s Dr. Richard Ford.</p>
<p><strong>TRANSCRIPT</strong><br /><strong>Andy Anderson:</strong> Just introduce yourself for those in the audience who don’t know you - they should, but maybe not yet.<br /><strong>Richard Ford:</strong> Sure, my name is Dr. Richard Ford. I feel like I’ve been doing security forever; on my RSA badge they even gave me a little tag which says “seasoned” - and I’m not really sure how to take that. But I got into security around, I don’t know, ’89-’90, and that’s been my whole life - it’s been a lot of fun. A little bit of time on the offensive side of the house, a lot of time on the defensive side of the house.&nbsp;<br />This is really my passion and I do this because I love it. So I’m the Chief Scientist at Forcepoint, and in that role I’m steering technology across the whole company. It’s a blast because I get to do the fun part of research and then hand it off to somebody else to implement, and we all know it’s that last 20% that’s the hard bit.<br /><strong>AA:</strong> I was talking with somebody last night and they said: “My job is to create the dream, and then it’s this dude’s” - and he points to his friend - “to make it happen.” You know? I wanted to be the first guy - I don’t wanna be the second.<br /><strong>RF: </strong>Absolutely correct - that last 20%, making it operational, that’s the hard part.<br /><strong>AA:</strong> But - I mean, you’ve been in security for a long time, but not always on the vendor side, right? You were a professor for a while, you’ve been a journalist as well. So talk about some of those roles and the different perspective that gives you.<br /><strong>RF:</strong> Yeah, actually - I really liked the way you phrased the question, because it really is about a different perspective. So my first job in the security industry was pulling viruses apart as a journalist - it was great. I was still a student and they’re like, “We’ll give you X pounds for every virus you disassemble.” So I thought, ‘This is free money, this is great’ - I’d do this for free. So I went from there into being a journalist really - being the second editor of Virus Bulletin, which is a great publication still in business today.&nbsp;<br />That experience of actually working on that side of the table was probably some of the most valuable time that I’ve had in the security industry. It taught me how to write, but it also taught me to always put the user first. Right? You take the user perspective - you don’t take the vendor perspective in that role - that’s really important.&nbsp;<br />And, yeah, it’s been a long and varied career, and one of the high points for me - working at IBM Research - is a time in my life that I’ll never forget; IBM Research is an awesome place to go and hang out. It was like being a kid in a candy store - we’d sit around and you’d just bump into people in the corridor and find out that they were working on the coolest thing that you could imagine - that was a fantastic role for me.&nbsp;<br />And, yeah, you know, we hit it fairly well at one point and I retired into academia. I spent all of these years in commercial - started off as a journalist - went into commercial and the research side of the house. And then I moved into academia, and that was a very rewarding time in my life. I’m still in touch with so many of my students - if any of my students see this: I’m easy to find on LinkedIn, students; you know, I probably still remember you. 
I think that they’ve left more of an impression on me than I have on them.&nbsp;<br />Eventually it was one of my former students who called me and said, “Dr. Ford, how would you like to be Chief Scientist at this company we’re standing up?” I said, “Sure, Brian, but you’re going to have to stop calling me Dr. Ford.” And he was like, “Okay, Dr. Ford, I will.” So that’s sort of how I ended up at Forcepoint, and the reason that was attractive was that Forcepoint was trying to do things a little bit differently. I mean, you’ve been around the show floor a lot, and I mean this with no disrespect to the folks that are down there, but it’s pretty samey, right? In some ways. There’s a lot of buzz in security and there’s clearly a lot of money flowing around, but it’s not very well differentiated - everyone is going to stop you and go, “We can solve your blah-blah-blah problem.” So we’ve got a universe full of point products, and that’s not very exciting to me - I went into academia because I wanted to think deep thoughts. And the reason that Forcepoint was a fit for me was that we were going to shake things up a little bit, so that’s been kind of fun.&nbsp;<br /><strong>AA:</strong> Yeah, I mean, I think there are a number of reasons that you’ve got all these different solutions, but I’d love to talk about what’s interesting out there - even if it hasn’t yet been realized in actual usable products that are well-known and utilized throughout the space. It does sound like there are some interesting things, particularly coming out of academia and some of the defense community. Cyber resiliency in particular is an area that I am excited about, since it does seem to be talking, at least thematically and strategically, about doing things differently. Right?<br /><strong>RF: </strong>Yeah, so I think that resilience is a really important topic and it has been woefully underexplored, right? Especially in the commercial world; there are a few vendors that have been wandering around the show floor talking about resilience.&nbsp;<br />But often the way that we talk about it is not well-formed, because we don’t define the word very well. So often you’ll hear people start talking about resilience, and what they’re actually talking about is robustness. If something is very strong - you can’t bend it, you can’t move it, you can’t break it - that’s robust. But if something is like a blade of grass - you tread on it, you lift your foot up, and it springs back - that process of recovery, of coming back - you can bend me but I don’t break; I spring back into shape - that’s resilience. And I think that one of the challenges in this industry is that we are very sloppy in how we use our words - this is something I was a pain about with my students.&nbsp;<br /><strong>AA: </strong>Well, you’re an Oxford grad. You have the OED, right - the Oxford English Dictionary. You care about words, right?<br /><strong>RF:</strong> I do care about words, because of the Sapir–Whorf hypothesis, which basically says that the words you use shape your thoughts - linguistic relativity. Am I totally sold on that? Not necessarily, but there’s some truth to it. If you can’t express it in words, there’s a chance that you don’t think about it cleanly, because words are the language of thought. And so, yeah, words really matter.&nbsp;<br />And so being really crisp about the words that you use, so you and I can communicate, is really important. 
So, yes, resilience is interesting, but only when we talk about it in the context of: I took a hit, I got hit, and then I came back up; that’s resilience, and that’s interesting.<br /><strong>AA: </strong>Yeah. I just use the phrase hard-to-kill, right? Those three words, yeah.<br /><strong>RF:</strong> Yeah, I like that. “Hard-to-kill” - it’s visceral.&nbsp;<br /><strong>AA:</strong> Because it’s not only about recovery - and that makes sense, and it makes me think of cockroaches and rats, right? And it’s not just one, right? You can step on one, right? But there are also millions of them, right? And so that - dynamism, diversity, all of these other things that fall under resiliency. Where are you seeing that - I know academically it’s been talked about a lot - but where are you starting to see resiliency begin to take form?<br /><strong>RF:</strong> Right, so of course, in the defense world there’s been considerable interest in resilience - the idea that, yeah, you know, you survive. The system survives, possibly in a degraded state, but it will come back up. There’s also been very good academic research. The challenge of moving resiliency, sometimes, into the real world or the commercial world is: we’re still messing up the basics. Right?&nbsp;<br />I mean, we’ve still got people with bad passwords out there. We’ve still got bad password reset policies. We’ve still got companies that will send emails saying, “Your password will expire in seven days. Please click on this link to reset it.” And it’s a real email - it’s not a phishing attack. So resiliency is important, but the problem is that you’ve got this massive skill difference between the agencies that do cyber extremely well and the agencies that just make the most basic mistakes. And some of the problems we’re still fighting today have been around forever. The first piece of ransomware - when was it? Take a guess. Is it a new problem?<br /><strong>AA: </strong>I don’t know; like Morris Worm era?<br /><strong>RF:</strong> Yeah. You’re exactly right. We’re going back to-<br /><strong>AA:</strong> Like early 90’s or-<br /><strong>RF:</strong> The AIDS Trojan - [inaudible] - which was a hand-mailed, snail-mail piece of malware that came on a disk-<br /><strong>AA:</strong> Like with your AOL disks, right? Those CDs?<br /><strong>RF: </strong>No, no. They actually sent you a disk. And it was sent around to - I think, and now I’m going from fallible memory - about 5,000 in the first round. And what it would do was, somewhere in the EULA, it said if you don’t pay money we will make your system unusable, or words to that effect. And we’re still dealing with that problem. Wanna-<br /><strong>AA: </strong>It’s the newest thing that’s come out.<br /><strong>RF: </strong>-WannaCry was exactly that problem, right? It just didn’t use five-and-a-quarter-inch disks. It was a little bit faster.&nbsp;<br />But it’s exactly the same problem set, so if we’re still fighting the battles that we were fighting - what, almost 30 years ago, 25 years ago - my question to you is: is this community really ready to step into the complexity that resiliency brings? Because you pay for resiliency - if you study biological systems you will see that the most diverse system is usually the most resilient - that diversity of species leads you to resiliency. And so there is a real challenge there, because what is another word for diverse, when we are talking about differences? It’s also a word for complexity. 
So when you have systems that can sort of move and adapt, those are potentially more complex systems, and currently we’re dealing with a world where people are still getting nailed by 25-year-old style attacks.&nbsp;<br />So there is a tension that I don’t think we’re honest enough about in the industry, to say, look, there is a huge difference in capability between the top end and the bottom end - and I say bottom end with no disrespect to the people or organizations that I would put at that bottom end of security, because they shouldn’t have to care about security. Right?&nbsp;</p> <figure> <blockquote data-animation-role="quote" data-animation-override> <span>&ldquo;</span>I think this idea that suddenly everybody has to be part of the security solution - to me that doesn’t speak to human nature. I think that we have to build much more human-centric systems - systems that actually accommodate how you and I actually work rather than going: ‘Well, Richard is going to be completely logical at all times. He will step into the security world.’<span>&rdquo;</span> </blockquote> <figcaption class="source">&mdash; Richard Ford</figcaption> </figure> <p>When you drive your car, you don’t sit there and go, “Huh, I know exactly how the timing chain is working, or the ignition.” You don’t think about the mechanics of it; you just drive your car. And so I think this idea that suddenly everybody has to be part of the security solution - to me that doesn’t speak to human nature. I think that we have to build much more human-centric systems - systems that actually accommodate how you and I actually work rather than going: “Well, Richard is going to be completely logical at all times. He will step into the security world.”<br /><strong>AA:</strong> Right. I mean, it’s sort of bananas that we’re still pointing out that humans are fallible - is this news to anyone? I think the car analogy is a very good one. I mean, humans are intricately involved in the driving of cars - although maybe not as much heading forward - but the systems around them got much better at helping that person stay safe, and when they make mistakes there are airbags and seatbelts and all of those sorts of things. And I’m not seeing that tolerance for mistakes in the security space as much-<br /><strong>RF:</strong> Ah, and it’s the word, right? If we change that word from security to a much nicer word, to me, which is “safety”, you start to design things in a different way. You don’t design a car going, “I will make my car secure” - although, having seen some of the talks, maybe we should be doing more of that. You design a car going, “How do I make it safe?”: how do I accommodate the real, fallible nature of people - like how people get distracted when they drive, so maybe I’ll make the steering wheel rumble when they get to the edge of the lane.&nbsp;<br />That’s designing for safety rather than designing for security, and I like the difference in mindset very much, because I think that when you switch to a safety mindset you become more human. You go: “What’s the human really going to do?” rather than saying, “You must be sitting and thinking about security at all times,” because guess what - you don’t. And you shouldn’t have to, right? Security is a means to an end.<br /><strong>AA:</strong> So you’ve been in this a long time, thinking about it deeply on all kinds of levels. When you see people starting to think from that cyber safety perspective, what are they doing? 
What are the top three, top five things they’re doing if they’re thinking from a safety mindset?<br /><strong>RF:</strong> Well, the first thing is that you recognize that people are people, and - bluntly - the next four things are all: recognize that people are people.&nbsp;<br /><strong>AA: </strong>Look back at number one, right?<br /><strong>RF:</strong> Yeah, exactly. The key to a safety-based system is that you should make the default safe, and you should make the default usable for what the person is trying to accomplish, too. If the car won’t start because it’s “so safe,” that’s not very helpful. So it’s really this very human-centered design that looks at how you and I will naturally operate those machines, and how we can use that as a way of accomplishing a task.&nbsp;<br />Remember, again - we’ve talked about this - security is a means to an end; it is not an end in and of itself. I don’t do security because I’m doing security; I’m doing security because I’m trying to keep my people and my data safe. And again, there is a lot of that security culture - it’s like the cult of security, where security becomes “the thing”. And I’d like to remind people that security is a way of getting something done.&nbsp;<br />You don’t sit down at your computer to do security, typically, unless you’re maybe one of ten people in an organization. What you do is sit down at a computer to do business, and security is an enabler of that business, but it’s not your primary focus.<br /><strong>AA: </strong>Yeah. I think I’ve heard security talked about a lot as a measure of quality, just the way we measure other things, particularly in the software development world. Security should sit right there next to performance or the ability to do the task, right? Security should be right there in the mindset, and I think that’s true across the board - sort of pulling security into the business, right? The business should really be the owner of the security.&nbsp;<br /><strong>RF:</strong> Right, because security is a business function, because it is a business enabler, and what it does is not remove those risks, because business is risk. That’s what we do; we go out and sell a product on the market - that’s risk; you get behind the wheel of your car - that’s risk. You accept it, and what you do is mitigate that risk.&nbsp;<br />So we sometimes get into this “We’re going to make it completely secure. We’re going to make it completely safe.” Nah - it’s all risk. It’s about managing risk in an intelligent way to let you do what you want to do.<br /><strong>AA:</strong> So, you know, just to bring it down to a concrete level, on this show we kind of like to talk about things that we like and things that we thought were cool. Where have you encountered somebody really using that mindset, coming up with something that made you think, “That’s a cool way to think about safety” or human-centric kind of -<br /><strong>RF: </strong>Right. So I mean, of course, us. Right, I mean, that’s the reason I’m here. But-<br /><strong>AA:</strong> Well, how do you do it then? What are the ways that you do it in your own product?&nbsp;<br /><strong>RF:</strong> Sure, so a lot of the design work that we do starts from the user interface. It starts with: how is this product going to engage with the user?&nbsp;<br />So one of the pieces of research that I did with a colleague when I was back at university - we bought an eye tracker, and eye trackers are so much fun, right? 
It’s a little dual camera that sits at the bottom of the screen and shows you exactly where the eyes are looking on the screen. We got a bunch of students - I think we paid them in pizza, which is like the universal student currency - and we had a little competition.&nbsp;<br />They had to complete some tasks against a timer - how fast and how accurately can you do the tasks - but that wasn’t the experiment. The experiment was that, half-way through, some security warnings came up from the machine, and we could see exactly how long they looked at the warning and what they were looking for - and what they were looking for was the cancel button.<br />They didn’t generally read what the warning was - it was like, “Let me get rid of this annoying box so I can keep on with my task,” right? “Is there a close button?” No, really - the eye went to the top right; it was a scary piece of research. What you have to do is design your interactions with a security product with the human in mind. So here’s an example. If I’m going to interrupt you in a task with a security warning - say it’s “Your Flash is out of date. It needs updating” - and you’re not on the web, wait until you switch tasks. It’s a much more human way of doing it rather than: you’re working on a script or a Word document or whatever and suddenly this box pops up.&nbsp;<br />It gets in your flow. If I can put that distraction off to later in a way that provides you the same level of protection - make it human-centric; it’s a very simple example. Also, start looking at risk-adaptive rather than risk-static. So one of the announcements we made at the show was dynamic data protection, or risk-adaptive protection.&nbsp;<br />The basic idea there is: in the human world, we don’t just let somebody in through the door of our house and then pay no more attention to them. You keep an eye on what they’re doing and you adapt based on what they’re doing. It’s not: “Here are the rules for entering my house” and, you know, that’s that.&nbsp;<br />So this idea of adapting to user risk, so we can better protect the user, is really important. If you’re not behaving like you, maybe I should be doing protection around that. And it’s not just about maybe you’re bad; it’s also maybe you’re compromised. I think part of the mindset around this is that you have to change the lens; when you start talking about those kinds of systems, it’s all about “the bad user”. No, it’s all about user protection - it’s about maybe your machine is compromised by malware and we should take care of that for you. What else -<br /><strong>AA: </strong>Unfortunately, I think we’ve got to leave it there based on time. But Dr. Ford, this was awesome.<br /><strong>RF:</strong> It’s a pleasure.<br /><strong>AA: </strong>Now, I’m like a student too. I’m still going to call you Dr. Ford.&nbsp;<br /><strong>RF: </strong>Well, thank you very much.&nbsp;<br /><strong>AA:</strong> Awesome. Appreciate it!<br /><strong>RF: </strong>Thanks.</p>
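<p>As a rough illustration of the “put that distraction off to later” idea Dr. Ford describes, here is a minimal sketch that queues non-urgent security warnings and only surfaces them at a natural break such as a task switch. It is a hypothetical toy in Python: the class names, the task-switch hook, and the 30-minute deferral cap are assumptions made for illustration, not a description of Forcepoint’s product.</p>
<pre><code>
# Minimal sketch: defer non-urgent security warnings until a task switch.
# All names and thresholds here are hypothetical, for illustration only.
import time
from collections import deque
from dataclasses import dataclass, field
from typing import Deque, Optional


@dataclass
class SecurityWarning:
    message: str
    urgent: bool = False                      # urgent warnings bypass deferral
    created_at: float = field(default_factory=time.time)


class DeferredNotifier:
    """Holds non-urgent warnings until the user naturally changes context."""

    def __init__(self, max_deferral_seconds: float = 1800.0) -> None:
        self.pending: Deque[SecurityWarning] = deque()
        self.max_deferral = max_deferral_seconds
        self.current_task: Optional[str] = None

    def raise_warning(self, warning: SecurityWarning) -> None:
        # Show urgent warnings immediately; queue everything else.
        if warning.urgent:
            self._show(warning)
        else:
            self.pending.append(warning)

    def on_task_switch(self, new_task: str) -> None:
        # A task switch is the "natural break" where deferred warnings surface.
        if new_task != self.current_task:
            self.current_task = new_task
            while self.pending:
                self._show(self.pending.popleft())

    def tick(self) -> None:
        # Safety valve: never defer a warning past the allowed window.
        now = time.time()
        for warning in list(self.pending):
            if now - warning.created_at > self.max_deferral:
                self.pending.remove(warning)
                self._show(warning)

    def _show(self, warning: SecurityWarning) -> None:
        print(f"[SECURITY] {warning.message}")


# Example: an out-of-date plugin warning waits until the user switches tasks.
notifier = DeferredNotifier()
notifier.raise_warning(SecurityWarning("Your Flash plugin is out of date."))
notifier.on_task_switch("email client")       # the deferred warning appears here
</code></pre>
<p>The same shape extends naturally to the risk-adaptive idea mentioned above: instead of a static urgency flag, what gets shown or blocked immediately could be driven by an observed risk score that rises and falls with the user’s behavior.</p>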