Preparing IR for AI Incidents with Gerard Johansen
E8


Jason Blanchard:

Hello, everybody. Welcome to today's Antisyphon Training Anti-Cast with Gerard Johansen. We're gonna talk about preparing IR for AI incidents. Like, when it first appeared, I was like, okay, I gotta know what the acronyms stand for.

Jason Blanchard:

And I don't know if that's an initialism or an acronym, because you say the letters, IR, and then... right? So thank you so much for joining us today for Preparing IR for AI Incidents. If this is your first time attending an Antisyphon Anti-Cast, well, hello, welcome. We do these pretty much every single week, and Jerry's gonna share his knowledge.

Jason Blanchard:

And one of the reasons that we do it is because you're not quite sure if you ever wanna take a training class with us in the future, and so this is the way for you to do it for absolutely free, see if you like the person, like the content, and then decide for yourself, is this something that I wanna do? Now, anytime during the webcast, you can ask questions in Zoom or on Discord. We'd love for you to join Discord because, you know, Discord's forever. This webcast is for an hour, and so that's a great place for you to meet the community and continue to ask questions. Jerry, are you ready?

Gerard Johansen:

I'm ready. Let's do this thing.

Jason Blanchard:

First of all, Jerry's one of the nicest people I've ever met, knowledgeable, really good at what he does, and also lives in one of the most beautiful places in all of The United States. And I'm not gonna say where. I'm not gonna dox him in front of all of you. But, Jerry, it's so good to see you. It was great seeing you.

Gerard Johansen:

Welcome to the past.

Jason Blanchard:

Alright, Jerry. It's all yours.

Gerard Johansen:

Alright. Well, thanks, everybody. If this is your first time where I've had a chance to talk to you, thank you very much. So, you know, AI is in the news.

Gerard Johansen:

Apparently, it's a thing. Really, the genesis of what we're going to be talking about for the next forty-five, fifty minutes (I left a lot of time for discussion and questions) is that I've had a lot of churn with customers. Not the typical churn, but, hey, we're doing this with AI, or we didn't know we were doing this with AI. What do we need to do if something happens?

Gerard Johansen:

What's happening from an intel perspective? How are the threat actors using this? All of that stuff. What we're really gonna do today is give you some concrete steps, hopefully by the end, to start having the initial conversations about the potentialities, the what-ifs, that are going on right now. During the pre-show banter, we talked about introducing our intro slide.

Gerard Johansen:

This is my intro slide. Feel free to hit me up on LinkedIn. You can look at my profile there and get a sense of who I am and where I'm at, actually. So don't worry, Jason, you didn't dox me.

Gerard Johansen:

I kept a picture of my dog in here because at Wild West Denver this year, multiple people asked me, hey, where's your dog? It wasn't, hey, Jerry, good to see you.

Gerard Johansen:

Glad you made it. It was, where's your dog? But yeah, feel free to hit me up. My background is up there. I think the most important thing is here's what we're gonna talk about, as opposed to just kinda going through it. Let's look at this as a prompt.

Gerard Johansen:

You're an incident manager in your organization. You could be full time. Maybe you're in a large organization and they have a head of incident management. Or maybe you're just unlucky: you're the security operations manager or the senior SOC analyst, and you get the finger pointed at you like, hey, you're in charge if an incident kicks off. You're gonna have to work with legal, you're gonna have to work with all of the different organizations within the company to manage this.

Gerard Johansen:

So with that being said, hey, how are we going to handle AI incidents? Okay. Well, let's talk about that. That's what we're going to be really diving into today. So we're going to talk about what our own internal guardrails for our discussion are.

Gerard Johansen:

What do we have from a challenges perspective? And obviously, there's a mountain of these. And then from an assets perspective: what do we have in the assets column that's gonna help us? Here's one of the other things we're gonna deep dive a little bit: hey, what is an AI incident?

Gerard Johansen:

Is it, you know, we have a chatbot that hallucinates, or are we talking somebody is using WormGPT to attack us? That's gonna color a lot of our discussion. A lot of our pre-planning is really understanding what the threat landscape looks like. We're gonna do a key assumptions check, and then we're gonna get directly into the planning discussion, which is gonna be about ten, fifteen, twenty minutes of really looking at some concrete things you can do to highlight some of the risks out there, the threats, the vulnerabilities. I left a lot of time for questions and discussion, and then we'll get into some resources that I used here.

Gerard Johansen:

Here's our guardrails. This is constantly shifting. I had to rework slides more than once, less than a hundred times, in the last couple of weeks of putting everything into thoughts here. So things are changing. It's constantly shifting.

Gerard Johansen:

We're just not gonna get away from that. I am not an AI expert at all. I understand specific things, but my expertise lies in, hey, how do we get prepped for an eventual or potential crisis that's gonna be AI driven? So we're not gonna get into a lot of the technical stuff about AI, but we're gonna look at some of the threats and risks out there and how we work that. We're gonna be focusing on operational and strategic measures.

Gerard Johansen:

That's really where our meat is. This is going to hammer into place a solid foundation. The converse is where you jump right into the technical: how would we investigate an AI-based incident? How would we pivot into maybe getting some forensic artifacts? Without some sort of operational and strategic foundation,

Gerard Johansen:

you're just gonna fall flat. It's just gonna go nowhere. You're not gonna know how to operationalize, say, this prompt buried in an email footer. So that's why we're focused a lot on the strategic and operational measures. If I do a good job and I get invited back, probably my next talk would be looking at some of these things from an investigator perspective.

Gerard Johansen:

This is often driven, as I said, by customers and clients and friends that are saying, hey, how do we do this? How do we work in this? You know, build this out and really get ready for this. So a lot of this is just, this is what I've done in the past and it seems to work. It seems to at least get us that solid foundation.

Gerard Johansen:

Questions are encouraged, drop them in Discord, and we'll probably have some time at the end for discussion. Calm is contagious, and so is panic. I say this constantly when I do any type of tabletop exercise, incident management work, working with teams: calm is contagious, and so is panic. Even though it's AI and there's still a lot of black box, dark areas that we don't understand: game on, you need to be calm. So we say that all the time.

Gerard Johansen:

Here's where we have challenges in building out readiness, building out that capability to respond to an incident. I talked about what AI incidents could be, and here's kind of where we are: there's no real concrete definition for what an AI incident is. If you look at, say, OWASP, OWASP published some guidance around essentially incidents that involve you're hosting some sort of model, or you're hosting some sort of infrastructure that users are interfacing with, or potentially customers are interfacing with.

Gerard Johansen:

But it really doesn't dive into, hey, what are some of the issues we have with threat actors using Claude Code? Or using MCP or some sort of infrastructure to keep their code constantly shifting. Hype. Here's the other thing. There is a lot of noise versus signal right now.

Gerard Johansen:

Everybody is out there selling a service, out there selling their own models, out there selling whatever the case may be. So we need to understand that there's a lot of hype, a lot of noise versus signal there. The rapid adoption cycle is one of the big things here. We're gonna discuss what I call internally generated AI incidents. Think of, I think it's now called, OpenClaw.

Gerard Johansen:

It went through three iterations. But think about just that within thirty days and how much it's going through the roof. Look at even just some of the repos on GitHub and how fast they're like, oh, it's reached half a million stars in a month. So this is one of the things we're really struggling with. So understand that.

Gerard Johansen:

Threat actor adoption: there's a low barrier to entry. We're gonna talk about that. And then governance, risk, and compliance. Alright? That's: how do you comply with legal regulations that are coming down?

Gerard Johansen:

Back to that hype cycle. This is where we are. Gartner published this just recently, last year. So, yeah, within the last year. As we see, we've got kind of a lot of blue in that peak of inflated expectations.

Gerard Johansen:

So even AI agents, AI engineering, responsible AI. It's kind of sad that responsible AI is at the peak of inflated expectations. But as we see, we're probably gonna see some of these things move into that trough of disillusionment. What does that mean? It means a lot of that fear, uncertainty, doubt, and hype probably is gonna die off. For our purposes, planning this, we're going to make sure that we understand where these things fit, because we have to plan for their use, whether it's internal or threat actor use.

Gerard Johansen:

So just understand, this is kind of where we sit in that hype cycle. Here's where we have some wins, though. It's not all doom and gloom. I hate bad news. There is good news.

Gerard Johansen:

There are serious people that are at work on this. People here, people in other organizations that may not have been able to attend this. There are serious people out there making their thoughts known and available to all. It's one of the unique features of this field: even when we go back in time a little bit, a lot of people are making things available.

Gerard Johansen:

So solutions are coming. I would even say solutions are partially here. You can't swing, pardon the saying, a dead cat without hitting an MDR firm, XDR, EDR, cloud detection. I'm gonna call out, not in a bad way but a good way, Datadog, Wiz, even CrowdStrike. All of the different MDR, XDR vendors are really working hard to automate detection and start moving us faster and faster.

Gerard Johansen:

So, obviously, there's a premium to that, and you've got a budget. But we're not alone. And here's one of the things to put this into historical perspective: we didn't just start here last year, two years ago, three years ago. We've heard this song before, and that's where I wanna give us a history lesson.

Gerard Johansen:

Look at incident response moving faster over the last forty years. If you go back to 1986, Cliff Stoll and The Cuckoo's Egg, a really good book, really good white paper, was that first watershed moment on intrusions. We move into the Morris worm, which was, hey, we need to establish some sort of coordination. Then, nearly twenty-five, twenty-six years ago, the ILOVEYOU virus. Alright?

Gerard Johansen:

2000, the early days of the Internet. But it really focused on email containment, containing those types of malware outbreaks. Stuxnet. Okay? Stuxnet was the game changer: weaponized malware for a specific target.

Gerard Johansen:

Whether you agree with its use or not, it is what it is. Here's one of the things that has really changed in the last twelve, thirteen years: the APT1 report came out. I was working in the private sector at this point. I had transitioned out of government. And what we had is these highly capable nation-state actors we had to be prepared for.

Gerard Johansen:

From a technical perspective, from an operational perspective, our containment focused on understanding the threat actor. We had to understand what they were doing. We'd maybe take some time; we really didn't rush containment. The whole zeitgeist around containment was, hey, we wanna know what they're doing and how they're doing it, because they've set up multiple command and control avenues. We wanted to understand what they were doing, so we might take a week to do a full-blown investigation and then expel them. Let's get into how that's changed.

Gerard Johansen:

The Sony hack: massive data theft, a wiper, and we start to see the rise of ransomware. So the major change we've had, think about this: Cliff Stoll took months to figure out what was going on. Then we very rapidly get to 2025 and the Anthropic report. The main takeaway from here is that threat actors are no longer moving at human speed but at machine speed.

Gerard Johansen:

So that means our activities need to keep pace with that. That is one of the key takeaways here: when we're talking about incident response planning, we need to understand that the maybe eight-hour or four-hour window that we used to operate with for ransomware is very quickly down to minutes, maybe an hour, in terms of how fast threat actors can move. I talked about classification and a classification model. This is something that I had Nano Banana generate. Somebody's gotta drop one of the minions in there.

Gerard Johansen:

There we go. Thank you. This is how we're gonna approach looking at specific incident types. I basically broke it out into three broad categories. I'm sure with a little time we could probably do a better classification on this.

Gerard Johansen:

We're gonna talk about targeted, threat actor use, and internally generated. Alright. Think of internally generated as: you're doing something you believe is legitimate, or a phishing-type situation attacking the internally hosted AI, and it causes some sort of problem. Threat actor use is self-explanatory. And targeted: what about our externally facing AI infrastructure?

Gerard Johansen:

So we're gonna talk a lot about these as we move forward, because it gets into, very importantly, how we're going to classify specific types of incidents and align our response actions to them. We cannot take a one-size-fits-all approach. If we're dealing with a targeted intrusion or a targeted attack against, say, our externally facing chatbot with a prompt injection attack, our response is gonna be completely different than for Dave in development ops or engineering who downloads a malicious LLM from Hugging Face and deploys it into production. All things to consider there. So, threat actor use. As an aside, this is what Gemini thinks a worm looks like. An AI-powered worm. I didn't know worms had teeth, but it is what it is.

Gerard Johansen:

This is supposed to be a representation of WormGPT. It's basically a GPT for the threat actors, the bad side of the house. In threat actor use, this is where we see all of this enablement with Claude Code: creating their scripts, using MCP servers to go out and set up that infrastructure so that their malware and their scripts can change and pivot. Everything we used to see from humans, they're moving that capability into the machine. Here's number two: rapid identification of vulnerabilities. We're now seeing a CVE get published, and then we have maybe 15 minutes before we start to see an exploit out there.

Gerard Johansen:

This is a sea change in what we're looking at. So here's one of the things to think about: are we going to treat a vulnerability now, a critical, as an incident? Just given that we have minutes now to get that addressed. So something to think about.

Gerard Johansen:

We'll talk about that when we get into real operational planning. Most recently, as I said, there's been the attack against the Mexican government. Alright. They used a jailbroken Claude chatbot and identified and exploited vulnerabilities. Again, we're really trying to figure out how we would classify this as an incident.

Gerard Johansen:

How would we go about addressing this? This is the hard part. Next bullet down: what if Fortinet came back and said, hey, we're seeing devices just getting destroyed over and over again, basically on weak passwords and authentication mechanisms?

Gerard Johansen:

Again, think about that. One of the key takeaways, and I highly recommend just reading it through, is the Anthropic analysis of GTG-1002. This is the one I wanna talk about and really hammer: 89% of operational tasks were being handled by autonomous agents. Again, machine speed.

Gerard Johansen:

So this is gonna factor into how we approach shutting down the network, containment operations. They're moving so fast, they're leaving forensic artifacts all over the network, but we are not gonna be able to keep up. So this is some of the stuff to think about. It's still kind of an Achilles' heel for us, but again, machine speed. Here's the other thing.

Gerard Johansen:

Address vulnerabilities as rapidly as possible. I have started to recommend to anybody that will listen: if you see a critical for something that you have that is exploitable, I would actually consider that an escalation. Whether you call it an incident, an event, a crisis, whatever you want, you need to start working in that crisis modality. This whole notion of, oh, we'll have seventy-two hours, maybe sixty hours, maybe a week to get those criticals taken off our risk register? No. Just assume that if you've seen it and you went and got a cup of coffee and came back, there's an exploit for it.
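That "critical plus exploitable equals crisis" rule is easy to encode. Here is a minimal sketch of the escalation check, assuming you already pull a known-exploited list (for example, a KEV-style feed) into a local set; the CVE IDs below are made-up illustrative data, and the threshold is just the CVSS "critical" cutoff.

```python
# Sketch: treat an exploitable critical as an incident, per the talk's
# recommendation. The known-exploited set would normally be loaded from a
# feed; these IDs are hypothetical placeholders.
known_exploited = {"CVE-2024-0001", "CVE-2024-0002"}

def needs_crisis_mode(cve_id: str, cvss: float) -> bool:
    """Escalate if the vuln is critical (CVSS >= 9.0) or already exploited.

    The talk's point: by the time you've seen the advisory and gotten a
    cup of coffee, assume an exploit exists.
    """
    return cvss >= 9.0 or cve_id in known_exploited
```

A medium-severity CVE that is already on the exploited list still escalates, which is the whole point of the check.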

Gerard Johansen:

Also understand we're gonna have to work in an environment where AI-powered malware may go right past detective controls. And here's the worst part: I would say we're now dealing with AI kiddies versus script kiddies, where the barrier to entry is extremely low. Somebody like myself could probably do some damage, and I have no chops in writing malware or exploits. So just think about that.

Gerard Johansen:

If somebody as much of a newbie as I am could probably get into this, that's what we have. And these tools are commonly available and cheap to use. The next category is AI targeting. This is basically indirect and direct prompt injection, getting into the supply chain, using an organization's AI against it. The big takeaway here is how we manage non-human identities.

Gerard Johansen:

This has been a sticky wicket in the IR field for a while because of cloud adoption. If you've got AWS or Azure and you've got identities or principals, all of that, we're just compounding it. It's just gonna make it worse for you. Now you've got all of these non-human identity targets out there. I'm gonna skip to the bottom.

Gerard Johansen:

That's the AI API keys that we're seeing in the Google Cloud LLMjacking reporting. Alright? Here's the other thing: Copilot. This EchoLeak, where we've got hidden instructions. Copilot is a huge target.

Gerard Johansen:

Alright? My favorite is the Chevrolet of Watsonville chatbot, where somebody was able to get it to agree to sell a Chevy Tahoe for a dollar. Eventually that'll catch up to you, but these are some of the things we need to think about here. Hosting this stuff, bringing this stuff in, even if it is in house, has increased your attack surface. And I wouldn't be a good cybersecurity professional, a defender, if I didn't say: you need an asset inventory.
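The EchoLeak-style trick mentioned above relies on instructions the human reader never sees. A very rough sketch of hunting for that in HTML content might look like the following; the styles checked and the flat-tag regex are heuristics I chose for illustration (real detection would need full CSS and DOM evaluation, and nested tags would defeat this simple pattern).

```python
import re

# Heuristic: styles commonly used to hide text from the human reader
# while leaving it visible to an LLM ingesting the raw content.
HIDDEN_STYLE = re.compile(
    r'style="[^"]*(?:display\s*:\s*none|font-size\s*:\s*0)[^"]*"',
    re.IGNORECASE,
)

# Matches simple, non-nested <tag ...>text</tag> spans only.
TAG = re.compile(r"<(\w+)([^>]*)>(.*?)</\1>", re.DOTALL)

def find_hidden_text(html: str) -> list[str]:
    """Return the text of elements styled to be invisible to a reader."""
    hits = []
    for m in TAG.finditer(html):
        if HIDDEN_STYLE.search(m.group(0)) and m.group(3).strip():
            hits.append(m.group(3).strip())
    return hits
```

Running this over inbound email bodies before they reach an AI assistant is one cheap place to surface candidate prompt-injection payloads for a human to review.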

Gerard Johansen:

You need an app inventory. Well, yeah, I understand. But things happen, and oftentimes we don't have that. Just realize that as this stuff is brought in, you're increasing your attack surface. Non-human identities.

Gerard Johansen:

Treat these very similar to a use case we see in incident response all the time: a threat actor moving laterally using any number of techniques, Kerberoasting, Golden Ticket, Silver Ticket, even just compromising an administrator credential and using SMB and RDP. Very similar, but now we've increased, again, that attack surface. So have some sort of discussion about, hey, if we gotta kill these or rotate these, what are we gonna break? Here's one of the other challenges: that implicit trust.

Gerard Johansen:

Right? I know we're all sitting here going, yeah, LLMs will spit out hallucinations. But if I'm tying one automated process to another, I'm essentially building on that trust. What happens if I break that? What happens if the LLM output can't be trusted?

Gerard Johansen:

What are we breaking? All things you're gonna have to prepare for. And here's my favorite one: the internally generated AI incident. This is where your team decides, hey, let's fire up OpenClaw.

Gerard Johansen:

Or, hey, we're gonna get Claude Code and tie it into Notion and allow it to go through all of our SharePoint infrastructure and pull out all the different information we need, or VS Code integrations. This is where shadow AI comes into play. Here's kinda where we sit from a responder perspective: recognize that shadow AI exists. Recognize that all you need is a credit card to get a lot of these tools coming in.

Gerard Johansen:

And devs, engineers, people in payroll, whatever the case may be: we're looking at data loss, potentially resource exhaustion. We've heard horror stories about people that set up infrastructure and get a bill three or four months later for several thousand or a hundred thousand dollars, just on all of the stuff they were doing. So an internally generated AI incident: think of this as losing your laptop, but worse. Or carelessly putting all of your confidential data into a drive out there on the Internet, a data lake in Azure, or something that's publicly available.

Gerard Johansen:

Here's another one: malicious code hidden inside an open-source LLM on Hugging Face. Really important. We have very little visibility into that. You wouldn't see it until it was too late.

Gerard Johansen:

Or you point your Claude Code or some sort of tool at a proprietary database or confidential data source and upload that. Alright. Vibe-coding vulnerabilities. There are a lot of estimates out there.

Gerard Johansen:

They range, but you may be introducing vulnerabilities. We also have the Moltbot exposure of 1,500,000 API authentication tokens. Just understand that this is potentially the vector for an incident. So again, just like email, just like chat, just like everything else, users are part of this vector, and they can expose you very quickly to some of these risks and threats just with a credit card. And our GRC has not kept pace with this, because they would have to make determinations. Shadow AI, that's a blind spot for us.
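Exposed AI API tokens like the ones mentioned above are one shadow-AI artifact you can actually hunt for. Here is a small sketch of scanning text (configs, repos, chat exports) for likely provider keys; the prefix patterns are based on publicly documented key formats but should be treated as illustrative examples, not a current or exhaustive list.

```python
import re

# Heuristic patterns for AI-provider API keys. The negative lookahead keeps
# the generic "sk-" pattern from double-counting "sk-ant-" keys.
KEY_PATTERNS = {
    "openai": re.compile(r"\bsk-(?!ant-)[A-Za-z0-9_-]{20,}\b"),
    "anthropic": re.compile(r"\bsk-ant-[A-Za-z0-9_-]{20,}\b"),
}

def scan_for_keys(text: str) -> dict[str, int]:
    """Count likely API-key matches per provider in a blob of text."""
    return {name: len(pat.findall(text)) for name, pat in KEY_PATTERNS.items()}
```

Wiring something like this into a pre-commit hook or a periodic scan of shared drives gives you at least partial visibility into the credit-card-driven tooling the talk describes.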

Gerard Johansen:

Really important to understand: we can't adequately plan for an incident if we don't have an idea of what the potential vector is gonna be. Meaning, it's one thing to say we're gonna prep for phishing attacks. It's another to say we're gonna look at phishing attacks from email, phishing attacks from malicious QR codes, or whatever the case may be. It's the same issue we have here.

Gerard Johansen:

Alright. Here's the other thing, though: users are a vulnerability, but they are also a key to detection and response. They will probably help us understand some of the issues. Hey, I put all of these documents that were open source up into an LLM and it's starting to spit out weird stuff. Yeah.

Gerard Johansen:

Who do I talk to about that? Alright, let's get into planning for all this. I gave you all this bad news.

Gerard Johansen:

You're probably down. You're like, oh, what do we do? What do we do? Here's what we need to understand.

Gerard Johansen:

As we just showed, we broke this up into three broad categories, and the only connecting factor in some of these is AI. It's very different when the threat actor Shinobi Ghost is using Claude Code to write malware versus Michelle in dev, you know, pointing an untrusted LLM or untrusted tool at our internal code base. So we're gonna get into functional definitions. There's little time to get situated, so a lot of our planning is really gonna focus on recurring reviews of what we're doing.

Gerard Johansen:

This is gonna be a change. Alright. We're gonna talk about what I call a quarterly readiness review. It could be a monthly readiness review. But this notion that GRC comes in and says, hey, we've got an audit.

Gerard Johansen:

Our insurance company, federal regulators, whomever is coming to do an audit: can you just take a look at the incident response plan, make sure it's good, and we'll just update the date and the version number and we're gonna pass. We can't do that anymore. That's gone. Alright? We are going to keep some legacy IR stuff, that incident response readiness, meaning our IR plan, our policies, but we need to get much more into the realities of AI.

Gerard Johansen:

So here's my eight-point plan for building out your incident response capabilities. We'll go into each one of these. I want everybody to maybe take one or two or three of these things, and twenty minutes, thirty minutes, an hour a month, to start working some of these into your already existing foundation. First, a clear and concise definition. This is going to break out those types of AI vectors.

Gerard Johansen:

OWASP has a really good definition. I've kind of expanded on it. The OWASP definition of an incident is: an AI system's behavior or misbehavior leads to unintended, harmful, or risk-elevating outcomes. This is very focused on LLMs and the OWASP Top 10 for LLMs. I still think it's very, very useful.

Gerard Johansen:

I like to expand it: basically, an impact to confidentiality, integrity, or availability where the proximate cause, in part or in whole, is the use of artificial intelligence. The reason I break it out like that is because if I am looking at an attack in real time that is moving faster than a human is able to, that is what I would consider an AI incident. That is going to color how we approach containment. If I'm getting attacked by a polymorphic AI-driven worm that can change based on the temperature in the room, the old notion of sit, wait, and contain, or get some forensics data, goes out the window.

Gerard Johansen:

I'm just gonna shut us down, we're gonna clean it up, and we're gonna get back up and running, because there is no way my team of responders is gonna be able to hunt this down in the time that's necessary. A good, solid definition feeds into your classification, which is step two. We covered three types of incidents: threat actor use, targeted, and internally generated. So have some sort of incident criteria for proper escalation.

Gerard Johansen:

Let's just say an internally generated incident may just be a low. It means we call in a few people, we pull in our dev, let's go back to Michelle. Michelle downloads a malicious LLM and it starts doing weird stuff. We go through it and say, hey, that's not a big issue. We'll just clean the system up and talk to Michelle: you know, it says Hugging Face, but not everything there is happy and gonna do the right thing for you.

Gerard Johansen:

So that might have criteria different from the other example, which is: your SOC analyst comes in and says, we're seeing this move so fast, we think it's an AI-generated incident. We're just going to move that up to severity one and continue. We're not even going to evaluate it. We're going to go off the info we have, contain, and move forward with getting rid of it.
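The escalation criteria described here can be sketched as a small triage function. This is a minimal illustration, assuming the three categories from the talk; the severity labels and the two signals (machine-speed activity, data exposure) are my own illustrative choices, not a standard.

```python
# Sketch: escalation criteria for AI incidents, per the talk's examples
# (Michelle's bad LLM = low, machine-speed activity = sev1, skip evaluation).
AI_INCIDENT_CATEGORIES = {
    "threat_actor_use",     # adversary using AI tooling against us
    "ai_targeted",          # attack on our own AI infrastructure
    "internally_generated", # shadow AI / misuse by our own staff
}

def triage(category: str, machine_speed: bool, data_exposure: bool) -> str:
    """Return an escalation level for an AI incident."""
    if category not in AI_INCIDENT_CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    # Machine-speed activity skips evaluation entirely: contain first.
    if machine_speed:
        return "sev1-contain-now"
    if data_exposure:
        return "sev2-investigate"
    # e.g. a dev pulled a suspicious model but nothing left the network
    return "sev3-clean-and-educate"
```

The value of writing it down like this, even in pseudocode on a slide, is that the whole team escalates the same way at 2 a.m.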

Gerard Johansen:

As I said, we may just go verify guardrails and input filters if it's an attack against our own chatbot or something of that nature. But having these criteria goes a long way toward making sure you're not jumping into crisis mode when you don't need to, and making sure you do jump into crisis mode when you need to. A lot of these criteria are built out from there. We'll get into how to start refining and fine-tuning them, because they're not static. Something I neglected to say before we started: nothing we're talking about here is static.

Gerard Johansen:

Everything is gonna be a little bit more fluid. Flexible and dynamic, I think, is the best way to put it. So start looking at your existing processes. If you're like most, you have a Visio diagram of ransomware: how do we handle ransomware? I'm sorry.

Gerard Johansen:

We're not gonna be able to workflow our way out of an incident, especially an AI incident. Alright? This is that human speed versus machine speed. So look at your existing process and automate where you can: automate isolation, automate containment, look at your visibility, look at how you're managing credentials.

Gerard Johansen:

Again, one of the wins we talked about was software: Software as a Service, EDR, XDR, all of that is starting to do a lot of this for us. So leverage that as quickly as possible. Swimlanes are quickly becoming a hindrance, not an asset. This notion that we're gonna do a Visio diagram: it's good for that initial stage, to document what you're gonna try to automate.
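Converting a swimlane step like "isolate the affected host" into code can be as small as the sketch below. The `EDRClient` here is a hypothetical stand-in for whatever isolation API your EDR/XDR vendor exposes, stubbed in memory so the flow is runnable; the `machine_speed` flag on an alert is likewise an illustrative field.

```python
class EDRClient:
    """Stub EDR API; a real client would call the vendor's REST endpoint."""

    def __init__(self) -> None:
        self.isolated: set[str] = set()

    def isolate(self, host: str) -> None:
        # Real implementation: network-contain the endpoint via the EDR.
        self.isolated.add(host)

def contain(edr: EDRClient, alerts: list[dict]) -> list[str]:
    """Auto-isolate any host flagged with machine-speed activity."""
    acted = []
    for alert in alerts:
        if alert.get("machine_speed"):  # e.g. actions-per-minute over threshold
            edr.isolate(alert["host"])
            acted.append(alert["host"])
    return acted
```

The design point matches the talk: the decision logic that used to live in a human walking a Visio diagram now executes in milliseconds, which is the only way to keep pace with machine-speed adversaries.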

Gerard Johansen:

But again, I am wholeheartedly embracing automation where we can and where it makes sense. So if you have 10 or 15 playbooks built around ransomware, playbooks built around phishing attacks, those kinds of things, those are awesome. Get those converted into some sort of automation, whether you have to use a third party or build your own. That's where you really need to start looking. Flexibility is key. I like to say Semper Gumby: always flexible.

Gerard Johansen:

Things like OpenClaw are gonna come out, or the next iteration, I don't even know what it'll be. It's gonna have to be something we handle and be flexible with. Just say, hey, if it doesn't fit in one of these categories, we gotta make some changes. Flexibility is key.

Gerard Johansen:

Alright. So we're reworking our existing process. We need to pull in some additional stakeholders. If you've ever done any incident management training with me, tabletop exercises, whatever, we pull in external comms, internal comms, marketing, executives, IT, legal. For AI incidents, we're gonna need DevOps.

Gerard Johansen:

If you've got machine learning engineers, data scientists, people that are doing coding, they can help out. As I said earlier, there's gonna be some technical stuff coming out where, hey, we need to verify if this is operating within its guardrails. Is this LLM a legitimate one or a malicious one? So those types of people are awesome to have in here. The best time to have these discussions, and we're gonna get into some of the preloaded readiness, is before an incident.

Gerard Johansen:

It's to go, hey, I understand you're a DevOps or machine learning engineer. If we have an incident, you're gonna have to drop what you're doing and assist us. And management and executives need to know that we may not be able to push code out within the time period that we identified, because we have an incident. It happens. Here's where we get to information sharing.

Gerard Johansen:

AI work groups. This is something that we tossed around with some internal people: the best way to work is, once a month, your key stakeholders on AI are essentially meeting and talking about, hey, what are we using? What are the new things that we wanna bring on? What are people using just from a shadow AI perspective? All good to understand.

Gerard Johansen:

I like this term; I did not come up with it. This was a group that I'm part of talking about an AI fluency index. This is across the board. So one of the things that we need to think about, too, is we have a tendency to believe that AI sits with DevOps, or AI sits with, you know, machine learning or engineering staff.

Gerard Johansen:

When we think about the amount of surface area that AI has impacted, you may have accounts payable using an AI-powered accounts payable management system. If you've got Windows, everybody in the organization has access to Copilot, especially if, I believe, you're on E7 licensing now; with that licensing comes Copilot. So they have the opportunity to turn Copilot on and run it against SharePoint as a RAG to go in and extract key data points, or go through their email, or ask a question. Alright?

Gerard Johansen:

Gemini: if you're using GCP or Google Workspace, you've got Gemini. All of these things. So understand what your organization is using. What are we using as our use cases? It's good to understand when we talk about turning some of this off if we have to.

Gerard Johansen:

What are you using? What data are you feeding it? And how much of it is being used in your normal operations? One of the key data points you wanna understand when we're talking containment, meaning we have to shut things off: if you work in healthcare, for example, you go down to radiology, and radiology is using a proprietary AI imaging software that goes through X-rays or CAT scans or MRIs and identifies things that the doctor should be looking at. And we have to shut that off because it's been wonky or hinky or doing something weird.

Gerard Johansen:

What is the use case, and what is gonna be the downstream impact? If you've done IR in healthcare and you have to shut down Epic, you gotta go brief the delivery staff, especially the nurses; you know, you gotta wear one of those bite suits, because they get really, really upset when you move them to whiteboard and clipboard. It's the same thing here: we may have to shut this down. Obviously that shadow IT is something that we wanna address, but it's not going away.

Gerard Johansen:

Here's the thing: this is gonna give you info as to what you need to do from a containment perspective. What you're gonna need to turn off, what you're gonna need to limit access to, and what are some of the potential vectors that you're going to see here. This feeds into what I refer to as a postmortem. Or, correction, premortem. If you're familiar with a postmortem, that is after somebody dies, the medical examiner goes ahead and finds out how they died.
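The inventory idea above can be sketched as a tiny data structure: what's in use, who uses it, and what breaks downstream if containment forces you to shut it off. The tools, units, and impact notes below are invented examples, not recommendations.

```python
# Minimal sketch of an AI-use inventory feeding containment planning.
AI_INVENTORY = [
    {"tool": "Copilot", "unit": "all staff",
     "shutdown_impact": "productivity dip"},
    {"tool": "radiology imaging assistant", "unit": "radiology",
     "shutdown_impact": "delayed reads; manual review"},
    {"tool": "support chatbot", "unit": "customer support",
     "shutdown_impact": "longer queue times"},
]


def containment_briefing(inventory):
    """What do we have to warn each stakeholder about before pulling the plug?"""
    return {entry["tool"]: entry["shutdown_impact"] for entry in inventory}


for tool, impact in containment_briefing(AI_INVENTORY).items():
    print(f"{tool}: {impact}")
```

Even a spreadsheet works for this; the value is having the downstream-impact answer written down before you have to make the shutdown call.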

Gerard Johansen:

A premortem is we look at a situation where our incident response and readiness program failed to address something. Right? We failed to address malicious use of an LLM. Alright. So this is where we come up with unknown or previously untested incident response scenarios.

Gerard Johansen:

So we go ahead and we review what we have from a planning perspective, and what we wanna do is convene a premortem. It's an hour long, and it's the worst-case scenario that we can come up with. So why did this happen? Let's take a look at, as I said, somebody loads in an untrusted LLM, runs it locally, points a whole bunch of code at it to do some things, and a complete copy of the source code for our proprietary software gets shipped out. Okay.

Gerard Johansen:

So what are the failures in our ability to respond to that? Look at those failures. Maybe we didn't have access to that individual, or that person didn't contact us, or we don't have insight into what was loaded because we can't forensically get it out of the system. Whatever the failures are. Maybe there's a communications failure.

Gerard Johansen:

Rank those. Gauge the severity of those failures. And then look at the potential impact, and that's where you start planning. Hey, if we have somebody doing this, it very well may be you reach out to everybody on that team and say, hey, has anybody else used this LLM? Where are you getting these from?

Gerard Johansen:

Let's talk about verifying these models that we're pulling down and running locally, so we can remediate those gaps and those failure points and then carry on from there. If you find yourself too rigid, adjust for flexibility. If it's not clear, put in some additional details. This is a really good way to actually have a little fun with this, because once you get the really scary ones down, you can start getting into different scenarios.
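The rank-and-prioritize step of the premortem can be sketched as a simple likelihood-times-impact score so the worst gaps float to the top of the remediation list. The gaps and scores below are invented for illustration; any ordinal scale your team agrees on works.

```python
# Sketch of ranking premortem failure points by likelihood x impact.
failures = [
    {"gap": "no insight into locally loaded models", "likelihood": 4, "impact": 5},
    {"gap": "user never contacted the IR team",      "likelihood": 3, "impact": 4},
    {"gap": "comms plan missing AI stakeholders",    "likelihood": 2, "impact": 3},
]

for f in failures:
    f["risk"] = f["likelihood"] * f["impact"]  # simple ordinal score

# Worst gaps first: these drive the remediation plan.
ranked = sorted(failures, key=lambda f: f["risk"], reverse=True)
for f in ranked:
    print(f["risk"], f["gap"])
```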

Gerard Johansen:

But this is a really, really good way to start building out how your incident response plan is gonna function given a set of scenarios. It's not a one-to-one mapping where you have this scenario, we do this. It's, hey, we have some gaps here in our understanding of what people are downloading. So again, we may have to go to that fluency meeting once a month, or build a fluency index, an inventory of what we're using and how we're using it. The quarterly readiness review: again, the modality that I've had to contend with in a lot of organizations I've consulted for and even worked in is that the IR plan gets put together five, six years ago, and we do a cursory review once a year. We add a little bit here.

Gerard Johansen:

Take a little bit there. Change some names in the phone tree, but we don't do anything else. And then we slap a new date on it and hand it to our auditor. The auditor's like, yep, you have an IRP.

Gerard Johansen:

That's out. Alright? A continuous review process is what we need. At a minimum, do this quarterly, but if you can manage monthly, that's what we're going to do. Tie it into how you do security operations.

Gerard Johansen:

This is where you take a group of responders and stakeholders and you go over, hey, have we had to actually fire this up in the last ninety days? Alright. Are there any new threats or risks that we are bringing in that we need to premortem? Right? So let's take a look at something like OpenClaw, where you have this technology, it does a lot of different functions, it's awesome.

Gerard Johansen:

That gets introduced into the environment. That's the question: if somebody were to use this maliciously, what do we need to do from a response perspective? Do we need to destroy it? Do we need to restrict access to any of these technologies?

Gerard Johansen:

Alright? Once we understand that, then we can say, hey, here's the real scenario with somebody pulling this down and spinning it up: this is our risk, this is our exposure, this is how we would have to respond if it was weaponized against us, or if a user or users inadvertently disclosed confidential data. Take these scenarios that you're getting out of your premortem, and this is where you would actually craft out an exercise. Alright? Tabletop exercises are critical.

Gerard Johansen:

I understand that it's hard to get everybody in a room for three hours even once a year. You do not need to have a three-hour tabletop. You don't even have to have an hour tabletop. Take twenty minutes with one of those scenarios and really work through what could actually go wrong and how it could impact our organization. So this is where we would craft out this exercise.

Gerard Johansen:

So again, I've hit this a few times: devs tied a code base to a visible AI bot or MCP server. How would we respond? Alright. What if we're facing a threat actor that is basically able to modify scripts on the fly?

Gerard Johansen:

Alright. Our only option is we shut down access to the network for a few hours to get some sense of where they got in and what's going on. Awesome. Have you communicated that to the executives, saying if we see one of these moving at that speed, our option, and our only option (it actually simplifies your problem), is to actually just shut down connectivity until we can get a handle on it, maybe four hours, eight hours? Alright. What if our chatbot starts offering up confidential information or just swearing at people?

Gerard Johansen:

Okay. Alright. What do we need to do there? Well, we're gonna have to pull in marketing and communications to craft a statement saying our chatbot wasn't supposed to swear at you. It wasn't supposed to get upset.

Gerard Johansen:

We're working on it. Alright? Focus on those specific actions needed to bring this to a resolution. What will we need to do from a resolution perspective? It may be saying to everybody, hey, we're gonna have to basically ban your use of this bot or this tool until we get some situational awareness.

Gerard Johansen:

Give us forty-eight hours; that's what we're going to do. This is all gonna start getting into some written documentation very quickly. Alright? Pull in DevOps, pull in your AI engineers; they're key. They're gonna give you some insight, and then they have some ownership in it.

Gerard Johansen:

They may understand a little bit better: why can't I use this tool I downloaded from a Russian website that, you know, scans all my documents and gives me a really good summary? Something to think about there. Information on tactics and techniques feeds the criteria. There are some resources out there that are really good for understanding this. We're already starting this with the ATLAS project.

Gerard Johansen:

I believe that's MITRE. We're already starting to see kind of a one-to-one between the MITRE ATT&CK framework and what we're seeing from AI, so understand that. Your escalation path and your immediate actions are gonna be different. Alright? So here's kind of one of the keys, not just for AI but for every incident.

Gerard Johansen:

Key actions are preloaded, meaning you have these discussions during your premortems, during your quarterly readiness reviews, about what we're gonna do to isolate, what we're gonna do to contain, how we're gonna escalate, and who gets pulled into those discussions. Then sanity check with our devs, with our leadership, with our IT, and then test. Test this as much as possible. It doesn't need to be three hours once a week. It may be twenty minutes, maybe fifteen minutes, going, hey, what do we do if? Now, that's a great question.
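Preloaded key actions and escalation paths can be as simple as a lookup table agreed on ahead of time, so nobody is inventing them mid-incident. The categories, actions, and teams below are made-up examples of what a team might preload.

```python
# Sketch of preloaded immediate actions and escalation paths per scenario.
PRELOADED_ACTIONS = {
    "chatbot_misbehavior": {
        "immediate": ["capture transcripts", "disable public endpoint"],
        "escalate_to": ["comms", "legal"],
    },
    "untrusted_local_model": {
        "immediate": ["isolate host", "identify model source"],
        "escalate_to": ["ml_engineering", "security_leadership"],
    },
}


def next_steps(category: str) -> dict:
    # Unknown categories fall back to generic triage rather than stalling.
    return PRELOADED_ACTIONS.get(
        category, {"immediate": ["triage"], "escalate_to": ["soc_lead"]}
    )


print(next_steps("chatbot_misbehavior")["escalate_to"])  # ['comms', 'legal']
```

The fallback branch matters: the plan should tell responders what to do even when the incident doesn't match a preloaded category.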

Gerard Johansen:

"What do we do if" is a great way to start it. Here's the thing: plans are worthless, right? We're automating much of it anyway, but the planning is everything. All of this work that we just talked about in the last fifteen, twenty minutes is gonna give you some actionable steps to take, so we actually build in one of the things I said at the very beginning. Calm is contagious.

Gerard Johansen:

What breeds panic is not knowing what to do. So here, we're building in some concrete things to do, to get preplanned, to understand, so that the next fad that drops in April, we're actually ready to respond to. We actually already have some of this built out. We're not gonna be caught off guard. With that said, this is not a single threat that we're just gonna have to ride out. As I said at the beginning, it's just the next iteration of forty years of how we respond to these things. You need to understand what you are using for AI and what the threat actors are using it for. Just like anything else: how we're using Active Directory colors how we respond to a threat actor using compromised credentials or some sort of Kerberos-based attack.

Gerard Johansen:

Same thing here. So nothing's really the same; it rhymes. The big thing is: be adaptable.

Gerard Johansen:

That Semper Gumby, because this is gonna change. Where we are a year from now is gonna be completely different. AI readiness, that is good. The plan is good. But the process is what is really gonna make this.

Gerard Johansen:

This information sharing, this understanding of threats and communicating that cross-team, intra-team, inter-team, that's really gonna be what makes the difference. It's great to write these down. It's great to have a plan, but the process is the key to making it indispensable. With that, we've got a couple of minutes for questions. If you think of something, or you just wanna chew on it a little bit, I am SockPuppet on Discord, and there's also my Gmail address if you wanna hit me up directly.

Gerard Johansen:

I'm happy to have a conversation, love talking. So, with that, any questions?

Jason Blanchard:

Thanks Jerry.

Deb Wigley:

Jerry, you did great.

Gerard Johansen:

Thank you. Well done.

Deb Wigley:

Perfect timing. Time left for questions. It's like you've done this before.

Gerard Johansen:

Once or twice.

Deb Wigley:

Once or twice.

Gerard Johansen:

I didn't shut the Internet down this time, apparently.

Deb Wigley:

No. No. Mm-mm. Still seven minutes. Six minutes.

Deb Wigley:

Yeah.

Jason Blanchard:

So if you have not yet checked in for Hackett in the Black Hills Discord server, you can do that now. If you have no idea what I'm talking about: every time you attend one of our Anti-Cast webcasts or the news on Mondays, you can check in in our Discord server, and we give you credit for it. Once you attend enough, we send you a reward, anywhere in the world. We just sent something to Vietnam recently. So that's

Deb Wigley:

We're like good. Yeah.

Jason Blanchard:

So at ten, twenty, thirty, forty, fifty check-ins, we'll send you rewards and say thank you. And then there's a new one for when you reach 100. We're still working out what that's gonna be.

Deb Wigley:

It's gonna be here before you know it for some of you. We gotta figure that out.

Jason Blanchard:

One last thing. I did put two things into the chat. Jerry's teaching a class in March, if you would like to take that class on incident command.

Jason Blanchard:

And so you would get a chance to learn from Jerry for sixteen straight hours. One of the reasons we do these free webcasts is to see, like, is this something that I want? What do I want from this person? And so it's an opportunity for you to see what you're getting yourself into, and I think you would have a really good time taking Jerry's class. And then second, I did put in the link for the free Infosec Survival Guide, the orange book.

Jason Blanchard:

That's the incident response guide. We just finished it; it's been available for about a month or so. You can get it for free if you live in the United States, and if you live outside the United States, you can go ahead and download or read it from our website.

Jason Blanchard:

You don't have to, like, sign up for anything. You don't have to register for anything. Now, you do if you want it mailed to you, because the only way we can mail it to you is if you tell us where to mail it. So, for those of you that clicked the link already to a website called Spearfish and put in your contact information, thank you for trusting us. Yeah.

Jason Blanchard:

You didn't have to. Alright, Jerry. Let's take a look at the questions that we have.

Gerard Johansen:

We got a few here. Voltaire asks, regarding threats: do we want to consider attacks from AI systems themselves, from trusted environments, versus threat actors using AI? To answer that question: you may want to consider that as one of the criteria when you start developing your planning. That's a really good scenario to actually premortem out.

Gerard Johansen:

What if we set up an environment internally that we host, and it goes rogue? What do we need to do there? It very well may be that our containment is to pull the plug. I'll call it Skynet or something like that, but it is something to consider. So that may actually be an internally generated incident, and your criteria is how you would go about validating whether or not it's gone rogue and then having some sort of containment around it.

Gerard Johansen:

So I am not going to throw out any attacks with, well, it's never happened. You know, if you can conceive it as a possible threat vector, so can the threat actors, and so, potentially, can the actual AI itself. So absolutely.

Gerard Johansen:

So let me kind of

Jason Blanchard:

Here's another one, Jerry.

Gerard Johansen:

Yeah.

Jason Blanchard:

What's the best preparedness exercise for a very small IT team? I'm gonna take this one real quick: Backdoors and Breaches. Do a Backdoors and Breaches demo for you and your team. Just go to backdoorsandbreaches.com and click on the link to say you wanna do a training session.

Jason Blanchard:

Deb and I will teach you how to play Backdoors and Breaches, totally free. It's not a scam. We don't sell you anything. We just teach you how to play Backdoors and Breaches. Alright, Jerry.

Jason Blanchard:

What's your thoughts?

Gerard Johansen:

Totally agree. Especially since it's a really fun way to do it, too. I've run tabletop exercises; I try to game them, I try to get my tight five of improv jokes out.

Gerard Johansen:

But oftentimes it can be a soul-sucking process, so that's just something to think about. Something like Backdoors and Breaches gets you into the habit, and it gets you talking about these things. Part of that test, part of that premortem analysis, is the assumptions that you have made about how we do things. You'll identify where those assumptions are incorrect, and that's one of the best outcomes.

Gerard Johansen:

Something like Backdoors and Breaches, where you're just having fun, and a conversation comes out, and all of a sudden you're like, hey, wait a minute, we haven't had that for three years. So, yeah, totally agree.

Jason Blanchard:

Yeah. Alright, Jerry. Just to make sure that we get it within time.

Gerard Johansen:

Okay.

Jason Blanchard:

I got one question for you. Okay. So tell me about your class coming up. Just the thirty second

Gerard Johansen:

Sure. What incident command is: it's basically walking you through from that bang, and what you need to do from a coordination perspective. So we cover everything we talked about here, readiness all the way to managing all of the different persons, all of the different entities, and how to work as an organization to get to resolution, where you're back up and running.

Jason Blanchard:

Okay. So that class is coming up on the twenty-sixth and twenty-seventh, right after the SOC Summit. So if you haven't heard of our SOC Summit, and I know you're like, there's a lot of stuff, you want me to get a book? SOC Summit. We're doing a six-hour livestream on March 25, all about the SOC.

Jason Blanchard:

So everything that we can talk about, you know, in the SOC: SOC life, getting in the SOC, working in the SOC, all the things SOC related. It is a free six-hour session. We got the founder of Wazuh coming. We have someone from Sublime coming. We found all the people that do cool stuff, and we brought them together for this event.

Jason Blanchard:

Jerry, sorry we didn't reach out to you ahead of time, but you're doing the class right afterwards. So

Gerard Johansen:

No. Don't worry.

Jason Blanchard:

Alright. We...

Deb Wigley:

might have socks, like actual socks.

Jason Blanchard:

We do. Yeah. We have the new SOC shirt coming and SOC socks. Mhmm. Oh, that's right, Deb.

Jason Blanchard:

Alright. So last question, Jerry. If you could sum up everything today and one final thought, what would it be?

Gerard Johansen:

If you don't do anything but just sit down with some of the key people in your org and start having these conversations, you're gonna be ahead of the game anyway. So if you just say, hey, let's just start from zero and start some of these conversations, that's the action step. That's the action item. Go out and do it. Set a meeting for next week and just start talking about these things.

Jason Blanchard:

Thank you, Jerry. And don't forget to take Jerry's class if what you found today was interesting and you could see yourself sitting there with Jerry for sixteen hours and learning from him, asking him questions. It's very interactive. It's very live. It's very virtual.

Jason Blanchard:

It's slightly like this, but it's for sixteen hours. Alright, Jerry, do you wanna stick around for just a couple more questions? Absolutely. Alright, everybody.

Jason Blanchard:

Thank you so much for joining us. We're going into post-show banter, and so at this point, alright, the webcast is over. Good job, Jerry. Thank you.

Deb Wigley:

So yes. There are a couple questions that I think we missed.

Jason Blanchard:

Yeah. This one was, should we make a separate AI IR doc or weave AI into the existing process doc?

Gerard Johansen:

There's a case to be made on both. What I would do personally is keep it within the same doc. And when I talk about classification criteria and escalation criteria, that's where I'm gonna put a lot of this preplanning. A lot of our processes at the technical level are, hopefully, gonna be automated. It's more about establishing those criteria and really kind of showing our work on how we came up with the structure of how we deal with it and why we've automated some of those tasks away.

Gerard Johansen:

So, in short, you could do it both ways. I have found, though, that one central document is really good as a kind of finished deliverable for that process.

Jason Blanchard:

Alright. Deb, do you see another one, or do you want

Deb Wigley:

me to grab it? Yeah. Give me a second. If we did not answer your question, go ahead and ask it in Discord again just so we see it. Okay.

Deb Wigley:

Oh, yeah. Any proven techniques for dealing with AI leadership who is very quick to accept the risk of running random Hugging Face models in prod?

Gerard Johansen:

Yeah. The manager in me comes out on this one, and this gets kind of outside the response and prep side: this is where you need somebody on that team that's gonna accept that risk. I would say the best thing, from a containment standpoint, is if they're gonna do that and they're gonna host it locally, just make sure it doesn't, you know, have access to everything else. How do you actually get them to understand that risk?

Gerard Johansen:

How to put this? It's kind of outside your purview, and it's the risk appetite of the organization. What I have found is the organization has no idea, especially the risk managers have no idea, what they're signing up for. When you say, I want to download this model, run it in Ollama with a web UI, and point it at documentation to get summaries and this, that, and the other thing, their eyes are gonna glaze over. So what I would say is make sure that you communicate that risk to those individuals that have to make that decision. It's just like anything else.

Gerard Johansen:

It's a risk-based decision, but oftentimes it's not your decision to make. All I can do is communicate the risk to them. That's basically the approach that I've taken in the past. And I'm sorry I don't have a good answer for you, because it's tough.

Gerard Johansen:

It's a risk. It's how you communicate the risk in your organization. It's just that we've made something new.

Jason Blanchard:

I'm sorry, there's a question here that I'm gonna answer and then get your thoughts, Jerry, because I think you have an answer. Is it possible to gain pen testing experience by volunteering to help small businesses? My response is going to be: I would encourage you to do Hack The Box, TryHackMe, or other places like that, and then really focus on your write-ups. One of the things at Black Hills, you know, I work at Black Hills too, is that we hire report writers that can pen test. We don't hire pen testers that can write reports.

Jason Blanchard:

The report is the product. And so if you are unable to write the product, if you're unable to produce the product, then all that pen testing just kinda doesn't really work out. So get really comfortable doing CTF challenges and then writing up, here's what I did. And here's what's super important too: also document what did and did not work. Most people are like, well, I did this and it worked, and I did this and it worked, and I did this and it worked.

Jason Blanchard:

But what a team really wants to see is there's a lot of things that we tried and it did not work, and so well done for whatever defenses you have in place because I was not able to do it that way. Jerry, what are your thoughts?

Gerard Johansen:

I would say I agree wholeheartedly. What most small organizations and nonprofits need is defense: help with just basic system hygiene and defense. So the only way I would say to volunteer is if you're good at defensive work, meaning firewall reviews, those kinds of things, and the documentation I'm going to talk about.

Gerard Johansen:

I am not; it's been a long, long time since I've done even very light pen testing. I will tell you, in the IR world, in the DFIR world, that is one of the things we also have a real hard time with: people do some amazing things, and then two days later it's like, okay,

Gerard Johansen:

I need a report for management, and they're like... So yeah, that's what I would say is one of the best things to do there. And the only other caveat is: develop a methodology. I can speak to DFIR.

Gerard Johansen:

We have a lot of, again, a lot of talented, very good people. I came out of law enforcement, so it was drilled into our heads what an investigation methodology looks like. It comes, I wouldn't say naturally, but it comes through training. I've met a lot of DFIR practitioners that can dig through

Gerard Johansen:

countless artifacts and dig out the most obscure thing. But if I ask them, well, why did you do that, and what was the precursor to you actually taking that path, they can't articulate it. So develop your processes, develop how you approach and tackle problems.

Deb Wigley:

Right.

Jason Blanchard:

In the Zoom chat, I did put the link to John Strand's AMA happening on Reddit right now. He's got a ton of questions, and he's responding to all of them. His goal is to respond to every single question, so feel free to ask him whatever you want. It is literally ask me anything.

Jason Blanchard:

He does give some criteria, but I was like, nah, ask him anything. So, Jerry, final question, and then I have one Jason question after that. The final audience question, and I'm gonna take this person's statement and kinda turn it into a question. And so... Sure.

Jason Blanchard:

Have you ever been sitting with, you know, stakeholders, and they start talking about risk, and you've said, here's the risk, here are the things that I believe are the risks that we have, and they go, okay, cool, I accept that risk? Do you ever get them to, like, put it in writing, that that person, by name, accepts the risk that you've just given them?

Gerard Johansen:

Generally, yes. What you see is, and this gets into it, somebody actually asked about governance, risk, and compliance on that side: they hold what's referred to as a risk register, and this is every risk that somebody like myself, or, you know, an auditor or a security assessor, identifies. It's tracked, and it's either accepted, it's mitigated,

Gerard Johansen:

it's transferred, or they put some sort of control around it. For every risk in your organization, there is somebody that owns that risk, whether it's a business unit or an individual. So let's say your finance department or your accounts payable department: it was somebody's cousin that coded the application that manages multimillion-dollar transactions, and they're saying, no, no, we're not going to go out and get SAP, because I like my cousin's application. And you're saying, hey, it's got all of these vulnerabilities that aren't being patched. They're like, nope.

Gerard Johansen:

We're going to keep going with it. In a lot of ways, on the defensive end and even on the pen testing end, you're like a doctor. You can tell a person, hey, don't smoke, don't drink, don't eat bad food, but if they continue to do it, you know, you can't force them to stop. This is the same thing; where it comes in is having that individual or that business unit that owns it. So, short answer: yes, absolutely.
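The risk register described above can be sketched as a tiny data model: every risk gets an owner and a disposition (accepted, mitigated, transferred, or controlled). The field names and example entries are illustrative, not a real GRC tool's schema.

```python
# Minimal risk-register sketch: every risk has an owner and a disposition.
from dataclasses import dataclass

DISPOSITIONS = {"accepted", "mitigated", "transferred", "controlled"}


@dataclass
class RiskEntry:
    risk: str
    owner: str        # the business unit or named individual who owns it
    disposition: str

    def __post_init__(self):
        # Force every entry into one of the recognized dispositions.
        if self.disposition not in DISPOSITIONS:
            raise ValueError(f"unknown disposition: {self.disposition}")


register = [
    RiskEntry("unvetted model running in prod", "AI leadership", "accepted"),
    RiskEntry("unpatched AP application", "finance", "accepted"),
]

# Accepted risks are exactly the ones with a named owner on the hook.
accepted = [e for e in register if e.disposition == "accepted"]
print(len(accepted))  # 2
```

The value of the structure is the same as the paper version: the acceptance is recorded with a name attached, not just verbally waved through.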

Jason Blanchard:

Okay. Alright. So here's my final Jason question. Alright, Jerry. Why does Jerry why do you why do you like to teach?

Gerard Johansen:

That's a really good question. Why do I love it? I enjoy kinda coming up with this; this one in particular is something that I created, and I love sharing it with other people. The major win for me, just as I said at the end: if you go back to your organization and grab one or two or three of these nuggets, which is my goal, you don't have to wholeheartedly accept Jerry's approach to AI response.

Gerard Johansen:

But if you go back to your organization and implement it and it helps, that's why I do this. That's really why I do it. It's one of those things where you go out and actually get something done and actually move the needle forward to a more secure operating environment and make people safer. I don't know where these people are. They could be in healthcare, they could be in public sector utilities, education, those kinds of things that are very important to a functioning society. Just moving the needle a little bit.

Gerard Johansen:

That's a win in my book.

Jason Blanchard:

Well, Jerry, that's a good place to stop today.

Gerard Johansen:

Thank you.

Jason Blanchard:

Your class is coming up on the 26th and 27th. Thanks for teaching for Antisyphon Training, and thanks for doing this Anti-Cast today.

Jason Blanchard:

Alright. Alright, Deb. What are your final thoughts today?

Deb Wigley:

Final thoughts? As always, thank you for showing up and being kind to each other. Thank you for answering questions and not treating people who ask questions you might think are silly with contempt. You guys are always just so kind to each other, so keep doing that. Keep being kind, especially with the craziness that's happening all around the world today.

Deb Wigley:

Yeah. Keep being kind to yourselves and each other. And final thoughts from Jerry? Absolutely. Oh, Jerry.

Deb Wigley:

Yeah.

Jason Blanchard:

Yeah. Jerry. Jerry.

Deb Wigley:

Take care of yourselves and each other. Oh, thank you.

Jason Blanchard:

Alright, everyone. Thank you so much for joining us. We'll see you next time. Jerry. Megan, Megan, do the thing.

Jason Blanchard:

Killing the fire. Drown it in a baby pool.

Creators and Guests

Deb Wigley
Host
Deb Wigley
Deb Wigley is the Director of Kindness and Generosity for Black Hills Information Security (BHIS). She joined the team in 2019 after celebrating 20 years of working in customer engagement and satisfaction in the Automotive Industry. She brings her passion for helping and serving people to the work she does at BHIS. The part of her role she enjoys the most is interacting with the community through our webcasts and educational content, our Discord servers, and conferences. She loves being a mom to her four kiddos and in her spare time, she enjoys reading, hiking, frequently entertaining a beach day, and being whisked away on rewilding adventures with her husband of 20+ years as much as possible.
Jason Blanchard
Host
Jason Blanchard
Jason Blanchard has been happily adopted into the hacker community at Black Hills Information Security (BHIS) since 2019, even though he “works in marketing.” He’s had every dream job imaginable: teaching filmmaking, owning the world’s most famous comic book store, and fostering the infosec community efforts for SANS. While some at BHIS call him the “Director of Excitement,” he is formally known as the Excitement Co-Creator. In his day-to-day work of “sucking at capitalism,” Jason enjoys helping others, sharing his knowledge, and giving away lots of free stuff. When he’s not working, Jason spends time with his wife and daughter, hosts a semiweekly job-hunting Twitch stream, and enjoys writing short stories and performing stand-up comedy.
Gerard Johansen
Guest
Gerard Johansen
A cybersecurity professional with over a decade of experience specializing in digital forensics, incident response, and threat intelligence. After a decade in law enforcement, I transitioned into the private sector, working in large enterprise and consulting. During my tenure in cybersecurity, I have been fortunate enough to work on complex digital investigations as well as develop training and enablement programs for cybersecurity defenders all over the world.