Effective AI for Practical SecOps Workflows w/ Hayden Covington
E4

Effective AI for Practical SecOps Workflows w/ Hayden Covington

Jason Blanchard:

Hello, everybody. Welcome to today's Antisyphon Anticast with Hayden Covington. So Hayden's fantastic. If you never got a chance to meet Hayden, he's one of my favorite people in the whole world. Like, when I see Hayden at an event, I'm like, Hayden.

Jason Blanchard:

Like, seriously. So Hayden is a fantastic educator. He legitimately knows what he's talking about. He's enthusiastic about it, and he's going to teach you today everything that he knows because he doesn't believe in gatekeeping. He doesn't believe in, like, locking this information away.

Jason Blanchard:

And so he's gonna teach you this so that way, if you need to know it, you can. He also has a workshop coming up. He also has a class coming up. Very affordable if you wanna take those things. But Hayden is awesome.

Jason Blanchard:

And hopefully, you get a chance to take those. If this is your first time here for a webcast or an anticast, thank you so much for joining us. I'm heading into the backstage now. Hayden, it's all yours, and I'll be here in case anything is needed. I'll see you for the Q&A.

Hayden Covington:

Thank you, sir. Alright. So if you are not aware of who I am, thank you for that introduction,

Hayden Covington:

Jason. He mentioned that I love sharing knowledge. This is true, and it's probably a little further beyond that: you're gonna have trouble getting me to shut up about the things that I find interesting, which today is gonna be all about, like, AI inside of a SOC.

Hayden Covington:

So that's the title of today's webcast. If you don't know what the title is, I don't know how you found your way here, but it's effective AI for practical security operations. So we're gonna talk a lot about AI, how you use it in a SOC, how you use it in security operations in general, and kind of, like, how you can utilize it correctly. Right? There's a whole lot of, like, white noise about AI and, like, how do we, you know, like, hook AI into our toaster?

Hayden Covington:

Like, you try and hook it into everything. Anything that you pick up from the store is connected to AI already, and it's getting a little bit silly. That said, the folks that claim that it is useless in, you know, a lot of different circumstances are also incorrect. So there's two sides, two polar opposite sides. One says AI should be in everything.

Hayden Covington:

They are wrong. The other side that says AI is not useful is also incorrect. Just like everything, there's a use, and there can be far too much of a good thing. Right? So anyway, let's start real quick.

Hayden Covington:

I have an intro slide about why you should listen to me talk about a lot of this stuff. I'm the security operations lead for Black Hills Information Security, inside of our SOC. We have a managed detection and response service that we offer, and I do a lot of different things inside of there. So I do some detection engineering, some threat hunting, I've done some incident response, I do a lot of metrics. But lately, my focus has largely been around operations from a little bit higher of a level.

Hayden Covington:

So how do we, you know, how do we level up our SOC operations, and how do we measure that we are leveling up correctly? And a lot of that is going to be enabled, or at least helped along, by AI. Right? You can use AI for a lot of things, even if just as a sounding board. I spend a lot of time playing around with AI.

Hayden Covington:

I think that's something I'll talk about near the end: why it's actually important to have some time to play with this stuff. But yeah. So let's jump into it a little bit. What this session is not going to be is a session of vendor demos and ChatGPT wrappers, where you pay $30 a month and you get ChatGPT with a web UI over top of it. I'm not gonna talk to you about how you can replace your entire security team with, you know, Claude Opus.

Hayden Covington:

I'm not gonna talk about all those different things, and I'm also not gonna talk to you about things that are theoretical. Right? Like, on the preshow, we talked some and joked some about, like, AI sentience, and we're not gonna get into a lot of those theoretical things. This webcast is all scoped into things that work, that I do, that people that I work with or that I talk to do. These are all things that actually work, that people actually use.

Hayden Covington:

So, just to set that straight from the onset here. Let's get something straight, though. I mentioned this just a second ago. We're not gonna talk about replacing analysts. I really don't see a future anytime soon where analysts are fully replaced.

Hayden Covington:

I think we're at the point now where you can get by with significantly fewer analysts on your team, but I don't think that we're anywhere near replacing all of the analysts on our team. The analysts should instead be augmented by the AI that you enable across your team. So, again, we're not replacing people. We are augmenting them. That is going to allow your analysts to operate at a higher level and to get more done in less time, and in ways that are more thorough, better tested, quicker.

Hayden Covington:

You are effectively giving them better tools, and these tools allow them to go from an analyst that can do x amount of work to multiplying that by five, simply based on the tool set that you're giving them. Because some people hear "giving analysts access to AI," and they get a little cagey about it. They're not too sure. But it's just another tool. That's all it is.

Hayden Covington:

So you're giving them a tool, one that's very, very powerful if used correctly. That said, the reason humans still need to be in the loop in SecOps, maybe a little bit of a hot take, is that force multipliers like AI will only work if there's a force in order to multiply. So if you multiply anything by zero, you come out at the end of the day with zero. So you need to be able to actually have a team and a team that's skilled and proficient in order to augment and multiply their capabilities. So train your humans first and then give them AI tools.

Hayden Covington:

What does some of that actually look like? We're gonna dive into each of these use cases a little bit here. But augmentation versus replacement is going to look like a couple of these things, for example. So: AI-generated investigation summaries. I personally, I'll be honest, was a little iffy about these at the start.

Hayden Covington:

I didn't quite understand the appeal. I didn't understand the value. But once we in the SOC started generating summaries of our cases, I quickly understood the value of a summary of an alert case. So I could show up to a case, and I can see 15 alerts have fired for this host. That is going to take some time for me to mentally parse and triage what is actually going on in this case.

Hayden Covington:

However, if I show up and the AI has taken the context from the number of alerts that have fired, what the alert titles are, whether they're all the same host or the same user, and it comes up with a summary saying, hey, you had these five alerts fire; these are a classic chain of attacks that might be evidencing some sort of credential theft; you might want to look at x, y, and z next.

Hayden Covington:

Here's a sample query. I can choose to ignore that if it's wrong. However, it is an immediate upfront piece of context for me as an operator that needs to make split-second decisions when I open a case. So that's just one example of how I'm augmenting myself. Right?
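The flow described here, collapsing a pile of fired alerts into one upfront summary request, might be sketched like this. All field names and wording are a hypothetical illustration, not the actual BHIS tooling, and the model call itself (to whatever LLM API you use) is omitted; this only assembles the prompt the model would see.

```python
# Hypothetical sketch of assembling the context for an AI case summary.
# Field names are invented; the actual LLM call is omitted.

def build_case_summary_prompt(alerts):
    """Collapse a list of fired alerts into one summarization request."""
    hosts = sorted({a["host"] for a in alerts})
    users = sorted({a["user"] for a in alerts})
    lines = [f"- {a['title']} (host={a['host']}, user={a['user']})" for a in alerts]
    return (
        f"{len(alerts)} alerts fired for host(s) {', '.join(hosts)} "
        f"and user(s) {', '.join(users)}.\n"
        "Alerts:\n" + "\n".join(lines) + "\n"
        "Summarize the likely attack chain, suggest next triage steps, "
        "and include a sample query."
    )

alerts = [
    {"title": "LSASS memory access", "host": "WS01", "user": "jdoe"},
    {"title": "Suspicious PowerShell", "host": "WS01", "user": "jdoe"},
]
prompt = build_case_summary_prompt(alerts)
print(prompt)
```

The analyst sees the model's answer as optional context, exactly as described: take it or leave it.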

Hayden Covington:

As an analyst, I'm not having the AI go and do all this work for me, but it is doing a lot of the background knowledge gathering or summarization that AI really, really excels at. Another one is threat intel. If you've ever done any threat intelligence, it is just full of articles and a lot of clickbait. And especially now, so many of the articles that I read are just very clear ChatGPT generations. You need something to kinda filter out the noise.

Hayden Covington:

So, again, I'll come back to what I do on the daily. That's sort of the whole point of this. We have a tool that, basically, when we run our threat report out to our customers each week, goes and looks through tens or twenties or thirties of different threat feeds, parses through hundreds of articles, and highlights some of the ones that might be the most interesting. It also flags if any of these articles potentially have sources that are of higher value. So let's say we get six different articles all for the same thing.

Hayden Covington:

It tries to tell me which one is actually, you know, the best one. Because, again, if you've ever been in CTI and done a lot of this stuff, you'll find an article from, I don't wanna hate on them because I do like them, but I'm gonna use them as an example, from, like, BleepingComputer. And it'll be a write-up of a specific attack that happened. But somewhere pretty early on in that article, there will be a key phrase.

Hayden Covington:

And that phrase will be something along the lines of "as reported by company name." And that company name is not always gonna be BleepingComputer. What that means is they're just reporting on someone else's reporting. And so what you could do with this AI correlation is dig down into who actually wrote this article and get the first-party intel for your reporting. And then quite possibly the most powerful use case that I've seen for analyst augmentation here is drafting threat detections.
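That "reporting on reporting" check can even be roughed out without an LLM. A sketch of the idea follows; the regex and the names are illustrative only, and real articles will need much looser matching than this:

```python
import re

# Rough sketch: look for an "as reported by <name>" phrase near the top
# of an article and flag when the cited source differs from the publisher,
# so you can go chase the first-party intel instead.

REPORTED_BY = re.compile(r"as (?:first )?reported by ([A-Z][\w.&\- ]+?)[,.]",
                         re.IGNORECASE)

def first_party_source(article_text, publisher):
    """Return the cited source if one is named, else the publisher."""
    m = REPORTED_BY.search(article_text[:500])  # the key phrase shows up early
    if m and m.group(1).strip().lower() != publisher.lower():
        return m.group(1).strip()
    return publisher

src = first_party_source(
    "A new infostealer campaign, as reported by Example Labs, targets banks.",
    "BleepingComputer",
)
print(src)  # Example Labs
```

An AI correlation step does the same thing more robustly, since the citing language varies wildly between outlets.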

Hayden Covington:

If you've been to a number of Black Hills webcasts or any of mine before, you've heard me rant at length about detection engineering. I enjoy it quite a bit, and I've found that I can write more and better-tested detections with the assistance of AI. Detections that I couldn't find the time to write before, I can now have written, and I am now operating as an approver versus the full author. Again, I'm gonna dig into these use cases. The AI is not always right, and we're gonna talk about that too.

Hayden Covington:

But the pattern between all three of these example use cases here that I really want you to latch onto is AI in each of these use cases is doing the grunt work. I, the analyst, the operator, am the one that is making the final decisions. In these investigation summaries, I show up to my investigation, and I can take the summary or I can leave it. It is only a gain as long as it doesn't hallucinate horribly and say something that's incorrect. But as long as you do it correctly, 99% of the time, it should be factually correct, and it's only going to serve to provide me with more context.

Hayden Covington:

For the threat intel piece, I make the final choice about what articles matter. It means I don't have to look through 800 articles from the last week. Instead, I have to look through 30, the ones that are scored based on parameters that I've set. And then drafting detections: like, I previously gave an AI a very difficult task. I gave it basically a whole folder of open source rules, and I said, go draft some detections for this.

Hayden Covington:

And I honestly didn't expect it to work, and it came back with 24 detections that are now in the process of being deployed. For things that are more advanced, it might not have done as well. But for stuff that's cut and dry, it came back to me with something that I could look over without having to go through all of these different rules and check whether we had them and make sure that they're actually still relevant. It did a lot of that background grunt work for me, and then I went and validated that. So, again, the pattern through all of this that I really want you to latch on to is give the AI the grunt work, allow the humans to make the actual judgment to make the finalized decision here.
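The grunt-work split in that story might look something like the sketch below: batch the open-source rules into draft requests so the model does the first pass and the human stays in the approval seat. The rule names and logic here are invented examples, and the actual model call is again left out.

```python
# Hypothetical sketch: batch open-source rules into prompts asking a model
# to draft candidate detections for later human review. Rule contents are
# invented for illustration.

def draft_requests(rules, batch_size=2):
    """Group rules into prompts asking a model to draft detections."""
    prompts = []
    for i in range(0, len(rules), batch_size):
        batch = rules[i:i + batch_size]
        body = "\n---\n".join(f"{r['name']}:\n{r['logic']}" for r in batch)
        prompts.append(
            "Draft SIEM detections for these open-source rules. Flag anything "
            "you are unsure about for human review.\n" + body
        )
    return prompts

rules = [
    {"name": "susp_rundll32", "logic": "rundll32.exe launched with no arguments"},
    {"name": "lsass_dump", "logic": "handle opened to lsass.exe by a non-system process"},
    {"name": "psexec_svc", "logic": "service installed with name PSEXESVC"},
]
prompts = draft_requests(rules)
print(len(prompts))  # 2 batches for 3 rules
```

Every drafted detection still lands in front of an approver before deployment; the batching just bounds how much context each request carries.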

Hayden Covington:

I know we're getting all hyped up for AI stuff. AI is not free. And I'm not just talking about, like, you know, ChatGPT. ChatGPT is free. Sure.

Hayden Covington:

You'll probably get ads now, apparently. But these free use cases, even ones that are truly free, you've all probably heard the saying: if you're not paying for a product, you are the product. AI is not free. There are costs to AI, and I'd be remiss if we didn't talk about them at least a little bit. Because this is part of the equation for bringing AI onto your team and allowing your analysts to kind of level up using it.

Hayden Covington:

So there's a couple main considerations that you need to think about. The first is cost. The second is governance. And then the third is buy-in. Buy-in is an interesting one that I don't really hear a lot of people talk about.

Hayden Covington:

And I listened to a podcast recently from, I think, one of the guys that put together Airtable as an application. He was in an interview on there, and he talked a lot about buy-in and how important that is. So I'm gonna cover these things, and we'll move on to the exciting use cases. The first being cost. API usage, especially, is going to add up a lot quicker than you imagine.

Hayden Covington:

If you've ever, like, turned on an AWS EC2 instance and then forgot about it, you know what I'm talking about. AI is potentially on par with that depending on how you use it. It's not gonna give you the same bill as if you spin up, like, a big Redshift cluster and then leave it for a month. But it can still be pretty bad, because I've seen pictures on Twitter of people's Cursor spending where they're spending, like, $30,000 a month. First of all, I don't know how you spend $30,000 a month on Cursor.

Hayden Covington:

I've used usage-based billing on Cursor before when I was building some things on my own, and I never even got close to that number. But sure. Maybe. Maybe. Okay?

Hayden Covington:

But the crux of this issue is: API billing is going to be more expensive than subscription-based billing in most cases, and high-volume usage of either is going to potentially cost you. The token-heavier workflows are also gonna cost you more. The detection engineering workflow that I'm gonna talk about in a little while, that we use in our SOC, is expensive. It uses a lot of context. Whenever we give it information, we give it as much information as we can.

Hayden Covington:

Garbage in, garbage out is another saying that you'll hear thrown around. That is true for AI. More context, up to a point, is going to mean a better overall output. The best way I've heard it described is: it's like if you have a very promising intern. If you take this intern and you throw them at some work and you give them no instructions, they might do an okay job.

Hayden Covington:

Occasionally, they'll probably do a really good job. But, also, there's a risk that they're not gonna know quite what they're doing. But if you have a really promising intern and you give them SOPs and you give them examples and you give them instructions and you're very clear about what you want, chances are what you get back will be pretty close to what you're asking for. So that plays into the consideration here: those token-heavy workflows, in order to get a better output, are going to be expensive. And then this one is fun.

Hayden Covington:

This one's fun to talk about with security folks: cloud versus self-hosted. I have, like, an M4 Max MacBook, so I could probably run most local models if I want to. I don't. I pay for Claude Max, like, from my own pocket at this point.

Hayden Covington:

Like, I pay for it because I use it and because it's going to get me, ultimately, a better result. For those of you that aren't willing to do that, you need to balance the use of cloud AI versus self-hosted, locally run LLMs. There are a lot that can do very powerful things, but the comparison is very similar to: would you rather put that server in your closet, or would you rather just go on AWS and stand it up there? Almost all of those comparisons back and forth apply. It's the governance.

Hayden Covington:

It's the cost. It's the maintenance. It's all those things. They're almost unilaterally the same pro and con list, I've found, at least in my opinion. How do you get around these concerns?

Hayden Covington:

You can estimate your volume. You can view it inline if you're using Claude Code or something, or you can use plenty of other utilities to view your token usage. Tokens are how input is usually measured in the AI space. It's not always, you know, one character to one token. It kinda is a little weird.

Hayden Covington:

But you can estimate your token usage, and that allows you to also estimate your expected costs. That way, if you get hit with a thousand-dollar bill at the end of the month, you're not shocked. Another really powerful tool, and I call it powerful even though it seems a little low-hanging fruit, is billing notifications. Black Hills, our SOC, does a lot of detection engineering work through GitHub pipelines, and we have agents that work within those pipelines and within those repos. I have billing notifications on.
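That back-of-the-envelope cost estimate is just arithmetic once you have a token count. A sketch, with made-up per-million-token prices (check your provider's current pricing; these numbers are for illustration only):

```python
# Back-of-the-envelope monthly spend estimate. The per-million-token
# prices below are made up for illustration, not real pricing.

def estimate_monthly_cost(prompts_per_day, avg_input_tokens, avg_output_tokens,
                          price_in_per_mtok=3.00, price_out_per_mtok=15.00,
                          days=30):
    """Return estimated USD per month for a given usage pattern."""
    input_tokens = prompts_per_day * avg_input_tokens * days
    output_tokens = prompts_per_day * avg_output_tokens * days
    return (input_tokens / 1_000_000) * price_in_per_mtok \
         + (output_tokens / 1_000_000) * price_out_per_mtok

# e.g. 200 case summaries a day, ~4,000 tokens of context in, ~500 out
cost = estimate_monthly_cost(200, 4000, 500)
print(f"${cost:.2f}/month")  # $117.00/month at these example prices
```

Note how the estimate is dominated by the input side here: token-heavy context, exactly as described, is where the money goes.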

Hayden Covington:

So if we hit a certain threshold of spend each month, it sends me an email. Because on one hand, I wanna have an idea if we hit a certain amount, just so I'm aware. And on the other hand, if something starts to run away, maybe there's a runaway condition or a runaway agent that's just wasting money, it'll become pretty quickly apparent. For that same reason, I'd be very careful around usage-based billing, similar to that whole EC2 thing: if you set up an instance, you might very well regret that later on if you find that it's just kinda running and doing its own thing and costing you money. So usage-based billing is important to watch.

Hayden Covington:

You can also usually set caps on that. You can set, you know, maximums, so you're not fully out of control with it. So make sure that you consider those things. I'm gonna kinda breeze through these next two because I wanna talk about the cool stuff.

Hayden Covington:

All my GRC friends will love this one: policy and legal are also considerations with AI. You can send AI sensitive data in most of these situations from, like, a legal perspective. You can get legal agreements with these providers. Whether or not they abide by them is, you know, who knows? And you can get their SOC 2 reports, but, again, a lot of that is going to come down to: do you trust what they are saying?

Hayden Covington:

And if you don't, there could be legal issues there. We should all be a little bit scared of lawyers. Just a little bit. It's healthy, I think. And no one wants to explain to a lawyer that you gave ChatGPT a bunch of sensitive information because you wanted to know how to write an Excel formula.

Hayden Covington:

I would not wanna explain that to a lawyer. So around the whole GRC space, make sure that you're considering what you can legally send to these models. You need to make sure your privacy controls are set properly. You need to consider the provider's retention as well. This is one I'll spend a second talking on: AI model retention.

Hayden Covington:

Even if they say they don't retain your data for training, they may still be retaining your data for abuse monitoring. So be aware of that. That means that they're keeping something, maybe just the prompts, but they're keeping something to monitor for people that are abusing their models, and that data is stored somewhere. There was a talk at the last Wild West Hackin' Fest Denver where the speaker said that you don't need to be as concerned with AI providers training on your data. You need to be more concerned with where they're storing the data that you send them.

Hayden Covington:

And I think that is a great way to put it. As for the idea that your data is so important that an AI provider is going to resurface it as part of training in some way, I think we're all giving ourselves a little too much credit about how important or how unique our data really is. However, when you send it to them, they're processing it somewhere, and they're storing it somewhere. And then in some cases, you might just be so locked down that you have to have a local model, which is not the end of the world. Data sensitivity: you can get around some of those controls by sanitizing and preprocessing.

Hayden Covington:

You could even have a local model do this. I've seen that, where some folks just have, like, a local model that looks for hostnames, client names, IPs, whatever, and it rips those out. Warp, a terminal application, actually has, like, regex patterns to match IPs or common credential patterns, like tokens, and it will kinda strip those from any queries that you send. I think it's automatic. I've already talked about most of this on the last slide, but you could just insert mock data into a lot of these queries and get by.
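A minimal pre-send scrubber in that spirit might look like the sketch below. The patterns are illustrative and nowhere near exhaustive, and `corp.example.com` stands in for whatever your real internal domain is:

```python
import re

# Illustrative scrubber: replace IPs, internal hostnames, and
# token-shaped strings with placeholders before a prompt leaves
# your environment. Patterns are examples, not a complete set.

PATTERNS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"\b[\w-]+\.corp\.example\.com\b", re.IGNORECASE), "<HOST>"),
    (re.compile(r"\b(?:ghp|xoxb|sk)-[A-Za-z0-9_-]{10,}\b"), "<TOKEN>"),
]

def sanitize(text):
    """Replace sensitive-looking substrings with placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

clean = sanitize("Beacon from 10.1.2.3 to ws01.corp.example.com using sk-abcdef123456")
print(clean)  # Beacon from <IP> to <HOST> using <TOKEN>
```

Running this locally (or via a small local model, as described) means the cloud model only ever sees the mock placeholders.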

Hayden Covington:

This one is important: team buy-in. Right? People are going to be skeptical about this. It's a new technology. People don't fully understand it, and some people are a little bit scared of it.

Hayden Covington:

It's something that they might not fully understand, and the whole world outside is telling them this thing's gonna take your job. It's gonna take your job, and you're gonna be fired forever. You're gone. Like, you're done. Like, if anybody told me that, I would be a little skeptical of whatever they're talking about.

Hayden Covington:

That said, skepticism does not exempt them from the wave that is coming, that is here, that has long been here. AI is quickly becoming a critical part of security operations, whether we like it or not. You need to address those fears, but then you also need to work with folks to help them get through those fears and help them understand how these models can enable them to do a better job more quickly. Because, take one example: I can write a jq query.

Hayden Covington:

Right? Like, I've done it plenty of times. An AI model can probably do it quicker. Right? And that's sort of the thing I'm talking about here: you can address these fears, and you need to help these folks understand that it's all about augmentation.

Hayden Covington:

You're not trying to replace them with these models. You're not gonna bring on, you know, Claude Opus 4.5 and Claude Code and say, go handle the SOC queue, make no mistakes, and then fire your whole team. If you do that, let me know how that goes so I can laugh at you. Basically, you're trying to augment your team, give them better tools.

Hayden Covington:

And then starting small is really important. I'm about to get into some of these use cases, a lot of which start pretty small. You're gonna be able to give folks easy access to tools that will allow them to fact-check things, to format things better. Those can be pretty powerful if deployed correctly. And then you have to let folks opt in.

Hayden Covington:

You can't force this on anybody. Well, you can, but it's not ideal. They're not gonna use it well. They're not gonna use it properly if you tell them, use Claude Code or you're fired. Like, that's not gonna work.

Hayden Covington:

But anybody that's really paying close attention will understand that if you're not utilizing these things, you are falling behind. Yeah. So this one kinda ties in as the last thing I'll touch on before we get into the actual use cases. "The problem exists between keyboard and computer" is what that acronym stands for. You need to teach your analysts the fundamentals first.

Hayden Covington:

That has not changed. If your team does not understand how security works, they cannot use these models correctly. I'm sorry. I can code pretty okay in Python. With an AI, I can code in languages that I've never tried, because I understand fundamentally how programming works, and I can validate that it's doing what I've asked.

Hayden Covington:

I will never be as good at that as somebody who truly knows how to do it, but I will be better than I would have been otherwise. So if your team understands how information security works, they will be able to recognize if an AI is incorrect. They will be able to properly guide these AIs to the correct outcomes. That is a really important one. And they'll be able to correctly understand the limitations around these models and what they can and can't do.

Hayden Covington:

I'll use an example. As I I'm a big fitness nut, so I I love, you know, triathlon and running and all these things. I took all of my health data from the last year, and I dumped it into Claude code. And I was gonna go through and basically parse a huge amount of data. And I asked it to go through this last year's worth of data and just start doing a couple specific things for it month by month.

Hayden Covington:

And so it did that, and it came back to me ten, fifteen minutes later and said, hey, here's your data. Here's the output format you asked for. You know? I did a great job, boss.

Hayden Covington:

And I looked at it, and I was like, these numbers aren't right. Like, I did these runs. These sucked. I remember them. And I questioned it, and I challenged this AI, and I said, is this really the data?

Hayden Covington:

And it said, well, no. We did the first couple months and then averaged the rest of it out. What? That is not it. That is not what I asked you for.

Hayden Covington:

So being able to understand the fundamentals of information security, the fundamentals of AI, will allow you to recognize the times that you should push back on those AIs. So: the problem exists between the keyboard and computer. Make sure your people are trained so that they can properly teach this intern, the AI, how to do the job. Alright. Hey, Hayden?

CJ Cox:

Yes. There's quite a bit of discussion out there about displacement of jobs and things like that and, yeah, firing whole staffs. It is likely that when you're talking about, you know, a five-times increase in productivity, there will be job displacements from this. I've never seen a new technology introduced that doesn't have that effect, including the internet itself.

CJ Cox:

Right. There are all sorts of experts that lost out when Google became a thing. But I think it's very similar to search engines. This is multiplying knowledge. Yes, there will be fewer jobs in some things. But as always, the opportunities, I believe, and a lot of economists whose papers I've read, and I can get those into the chat eventually, say that the opportunity always magnifies.

Hayden Covington:

Oh, for sure. I mean, a lot of that's gonna come down to the company too. Like, we know that companies already will just fire swathes of people because it looks really good and makes their stock go up. Like, that is proven: when you have a layoff, your stock will generally go up. How you can look at it, or how Black Hills has looked at it specifically, is we've not fired our whole analyst team because now we have Claude.

Hayden Covington:

Instead, we've said, hey, we can take x percent more customers right now with our current staffing because of the capabilities that we are now augmented with. But you are totally correct: there will be some displacement. And I think the unfortunate reality is the people that are skeptical of this technology and don't understand it are probably the ones most at risk, because they're not enabling it, and someone who has that additional capability is going to surpass them.

CJ Cox:

Absolutely. We could do a whole webcast on... Oh, we could. Good answer. Thanks.

Hayden Covington:

Thanks. Yeah. I actually cut, I think, like, 10 slides out of this section because I wanted to get to actual use cases. That's why we're here. I wanted to get a lot of that out of the way because that is important.

Hayden Covington:

If you don't understand how to use AI, you're not gonna be able to use it properly. So let's talk about some use cases. I promised workflows. I'll also have the prompts on, like, a Notion web page after this. They'll also be in the slides, which, you know, are in the Discord channel as well.

Hayden Covington:

You can get a lot of these prompts. They are starter prompts. I wanna really, really emphasize that. These are built around, or off of, similar prompts that I've used or that Black Hills has used, but they are simplified. They have identifying information removed, obviously.

Hayden Covington:

And with a lot of these prompts, you're gonna quickly realize this: once your prompting becomes specialized to you and your use cases, it becomes significantly better. You'll see an example. But, basically, these use cases, I'm gonna separate them into things that you could start this week, things that you could have up and running this week.

Hayden Covington:

And then things this month that will take you a bit longer, are gonna require more setup, are gonna potentially have a higher payoff, but are actually gonna require, you know, maybe some cash, maybe some time. You know, let's break it down. So: things you could start this week. This is the ultimate low-hanging fruit. If you have not touched AI yet and your team needs to start utilizing it, shared projects is what you need to set up as soon as this webcast is over.

Hayden Covington:

A shared project, and I'm gonna talk a lot around Claude because that's my flavor of choice. It's like Linux; everybody has their favorite kind. Claude is my favorite one, my little AI. Basically, a project is a chat window within your AI tool that has instructions in it that are persistent across all the chats in that window.

Hayden Covington:

It also has files that you can attach. So basically, you can build out these different specialized projects, with specialized instructions, with files attached to them that give even more context. If you start to imagine the kinds of things that you could do with that, it gets to be pretty powerful. For example, we have a project on our team that basically is a security analyst. It has in it a lot of documentation on how we do our playbooks, how we operate.

Hayden Covington:

It has documentation around the syntaxes that we use in our SIEM, where it talks about how to do things correctly, how to do detection engineering. Like, some folks even drop in, like, whole information security books into these things. And then that gives it specialized information and context to allow it to answer your questions correctly. And then it can go even further. You can have integrations with these things, especially on the Team plan.

Hayden Covington:

So Claude Team can also connect via, you know, an integration or whatever into our Jira. And so I could ask it, hey, when is the last time that we've seen, you know, this sort of executable? And that might not be the smartest way to search that, it's a pretty poor example, but that is some of the things that you can do.

Hayden Covington:

And these projects, you can share them across your team. That security analyst project that I'm talking about, anyone on our team can use. And what that means is anyone on our team has access to those exact same instructions, meaning that we are all operating from the same baseline. If everybody has their own prompts and their own way to do things, it's gonna be a mess. But shared projects are going to be immensely powerful.

Hayden Covington:

I have a couple ideas up there, like threat intel summarization, a project manager. I have a project manager shared project on my own Claude plan that, if I'm vibe coding something or I'm working on a project that I'm not quite sure how to parse, I go back and forth with. It gives suggestions. Sometimes I'll even have it generate a PRD, which is a project requirements document. And that's what I give to Claude Code, because it has totally fleshed out this whole use case in excruciating detail.

Hayden Covington:

So these shared projects are number one. If you get anything out of this, any use cases, this is the one that you gotta get to. Building a good agent, at least in my experience, and a lot of these things are gonna be from my experience, usually starts with giving it context around its role. You are a senior SOC analyst at company name.

Hayden Covington:

You give it that context so it can understand this is your job. This is who and what you are. You feed it documentation. You feed it details. You give it all this context, and then you define what your output should be.

Hayden Covington:

That is also pretty important. How you want it to output matters. Otherwise, you know, you'll get varying results. You may want it in copyable markdown. You may want it just in a code block for you to click copy one time.
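As a rough sketch of that role-plus-context-plus-output-format structure, here's what assembling such an instruction block might look like. The function and file names are hypothetical, not Claude's actual project API:

```python
# Sketch of the instruction structure described above: a role, the
# context documents attached, and an explicit output format.

def build_agent_instructions(role, context_docs, output_format):
    """Assemble a persistent project-style instruction block."""
    sections = [
        f"ROLE: {role}",
        "CONTEXT DOCS:",
        *[f"- {doc}" for doc in context_docs],
        f"OUTPUT FORMAT: {output_format}",
    ]
    return "\n".join(sections)

instructions = build_agent_instructions(
    role="You are a senior SOC analyst at ExampleCorp.",
    context_docs=["playbooks.md", "siem-query-syntax.md"],
    output_format="Copyable markdown in a single code block.",
)
print(instructions)
```

The point is less the code than the shape: role first, context second, output contract last, so everyone on the team shares the same baseline.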

Hayden Covington:

Like, you can define these things inside of these prompts. And then once you've built this agent, you are not multiplying the value across your team unless you share it. You can do all this data gathering and all this fancy building, but until you actually share it with your team, you are not getting nearly enough value out of it. Let me talk about an example of this.

Hayden Covington:

So it's a pretty common use case, maybe not as much in SOC as it should be, but across software development, AI code reviews have been around for a bit. But this one is very powerful in the SecOps space. We do all of our infrastructure as code. Right? And so when I push a new rule, it pushes to GitHub.

Hayden Covington:

What that means then is it passes through an AI agent that validates based on our instructions whether or not this detection is ready to go. And even then, it still goes through a pipeline that does more validation. But the agent is checking for common issues that are very, very easy to spot. So it catches syntax problems. It can suggest optimizations.

Hayden Covington:

It can flag things that are too broad. Anything that's black and white, right or wrong, not negotiable, like, you can't have, you know, double quotes here because it breaks everything, those things AI is excellent at catching. The stuff that we humans miss with our fleshy brains, the AI is not usually going to miss. So AI code review can review detection syntax, and it does an excellent job. I will say our detection code review prompt, combined, is actually multiple different files that reference each other.
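To show how cut-and-dry those black-and-white checks are, here's a tiny plain-Python linter sketch. The field syntax is hypothetical, not any particular SIEM's, and a real reviewer agent would do far more:

```python
def lint_detection(query: str) -> list[str]:
    """Flag cut-and-dry problems before a human ever reads the rule."""
    issues = []
    if not query.strip():
        issues.append("empty query")
        return issues
    if query.count('"') % 2 != 0:
        issues.append("unbalanced double quotes")
    if query.strip().startswith("*"):
        issues.append("leading wildcard, likely too broad")
    return issues

# Missing closing quote: exactly the kind of thing fleshy brains skim past.
print(lint_detection('process_name="net.exe'))
```

Rules like these are deterministic, which is why an AI reviewer (or even a pre-commit hook) catches them so reliably.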

Hayden Covington:

It's something like several hundred lines long. It is complex, but it does an excellent job. If you don't use infrastructure as code, you could just have a shared project across your team where you kinda talk through these different topics, or have the prompt in there where you can talk through your detection code and have the AI validate it. That said, my suggested implementation would be just through GitHub Copilot. We have had better results in the Black Hills SOC with Claude doing these code reviews.

Hayden Covington:

It's relatively close in most cases. Copilot does a good job and is very easy to set up. You just have to pay for it. Claude does a better job, but it's probably gonna cost you more. So it's kind of up to you whether or not you want to pay that extra price.

Hayden Covington:

I would just say start with GitHub Copilot. Very easy to set up. So start there. If you're getting value, consider comparing the two. Here's an example of what one of those prompts might look like.

Hayden Covington:

So we give it its context right at the top. Hey, you're a senior detection engineer in charge of reviewing detection logic. And then we tell it what to look for. We give it some environmental variables, like what SIEM we use, what log sources we have access to.

Hayden Covington:

We give it links to documentation, which, spoiler alert, if that's in your GitHub repo as well, you can just tell it where to look in your repo for these things. We give it examples of known good. We give it how to respond. And then something that we found that works extremely well is we actually give it a checklist that it has to fill out as it does this review. So that way, it is actually forced to validate each of these steps, and then it outputs this checklist at the end.

Hayden Covington:

So you can see that it checked all these different things. So I will caveat all of this: giving it examples is good, but you have to be very careful about how it uses those, because sometimes I've found that it'll latch on to those examples and use bits and pieces from them. That said, if you give it examples of what does and doesn't work from a very broad, high-level view, it gives it pretty good context. But this is just an example of, like, our first use case.

Hayden Covington:

So I gotta speed through a little bit. And I guess that's what happens when you do enough webcasts: you go from trying to find content to trying to not have too much. This is another thing that people overlook. Your prompts should be version controlled. The last thing you want is to spend hours, or tens or hundreds of hours, using and validating prompts, make a change, and then have something break, and you don't know how to revert.

Hayden Covington:

Your prompts are starting to become very, very valuable. I think that's called, like, alpha. Like, context unique to you, business-differentiating data that is in some way becoming very powerful for you to utilize. So your prompts have to be version controlled. That's becoming a nonnegotiable.

Hayden Covington:

You need to be able to share these across your team. They need to be consistent. So if they're version controlled and your team is keeping up to date on these versions, everybody's working off of the same prompts. Again, that's very important. And then prompts can actually evolve partially into documentation.
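One minimal way to make prompt versions traceable, assuming prompts live as files in a git repo (the paths and names here are hypothetical):

```python
# Sketch: prompts are files under version control; at runtime we record
# a short content hash so any AI output can be traced back to the exact
# prompt version that produced it.
import hashlib
import tempfile
from pathlib import Path

def load_prompt(path: Path) -> tuple[str, str]:
    """Return the prompt text plus a short content hash for traceability."""
    text = path.read_text()
    digest = hashlib.sha256(text.encode()).hexdigest()[:12]
    return text, digest

# Demo with a throwaway file standing in for something like
# prompts/detection-review.md in your repo.
with tempfile.TemporaryDirectory() as d:
    prompt_file = Path(d) / "detection-review.md"
    prompt_file.write_text("You are a senior detection engineer...")
    text, version = load_prompt(prompt_file)
    print(version)
```

Logging that hash next to each AI-generated artifact means "something broke after a prompt change" becomes a diff between two commits instead of a mystery.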

Hayden Covington:

So your prompts can actually become documentation of what you should expect your end result to look like. So, for example, in our detection review, if we say, hey. Whenever you name a detection, you need to start it with the first part of, you know, the kill chain. Start it with, you know, initial access or whatever. Right?

Hayden Covington:

We could say that, and the AI will do that throughout its, you know, iterations, but that can become part of our documentation. You know? We can pull that out and say, okay. Well, this needs to go into our docs. We could have the AI do that.

Hayden Covington:

Like, these prompts should be very carefully controlled. I teased a little bit about the sort of SOC analyst agent thing. Having a SOC analyst agent that you can have parse through different information for you is gonna be pretty good for a SOC analyst, I will say. There's nothing worse than trying to parse through just a ton of, like, garbage data when you're in the midst of a security investigation. And if there's anything that AI does really, really well, it's parsing through ridiculous slews of garbage data.

Hayden Covington:

So if you can have an agent that is curated and ready for you to drop in logs, for you to describe activity and get suggestions on where to look or what to look for, for it to maybe guide you, in some cases, with other suggestions. Kinda like you're creating a sounding board for yourself, essentially. So I could say, hey, I'm seeing, you know, execution of this binary. I don't have the logs that I normally would expect.

Hayden Covington:

Where should I look next? And it can give some suggestions, and then you, as the expert, get to decide whether to use them or not. But you'd be no worse off than before in most situations, as long as you're validating. Another really good thing is translating your investigation notes into something customer facing. I'm very guilty of this.

Hayden Covington:

Sometimes my notes from my investigations are like somebody took spaghetti and then threw it really, really hard at a wall. It can help turn some of that mess into something that resembles English. So it can be pretty good at that as well. You have to be very clear, again, in these prompts what you expect the outputs to be. Here's another example prompt here.

Hayden Covington:

Again, we're outlining what its role is. We're explaining here's what you are. Here's what you should be doing. And then in this case, I've also given it a couple situations. Like, right at the start, I say, hey.

Hayden Covington:

If I give you alert data or logs, and I don't really say anything else, here's what I want you to do. Like, so that way it knows, if I just drop in some logs, this is the standardized response I'm gonna get. I don't have to type in a sentence, say, hello, how was your day, I would like you to give me two to three pivot points to work from here. No.

Hayden Covington:

That's just baked in. Kinda convenient. Maybe that's not your jam. That's why it's the starter prompt. The more customized it is to you, the better.

Hayden Covington:

But I can then also say, hey. If I start mentioning threat hunting, here's how I want you to help. And, again, those environmental variables, very important. Here's the platforms we're using. Here's the EDR we use.

Hayden Covington:

Here's the syntax language of our SIEM. If you give it the documentation of your SIEM, it can probably come back with pretty dang good queries, in my experience. I really wanna underline this: if you use these AIs correctly, they will do an excellent job. We have an AI sort of built into the search on our customer-facing portal, where our customers can ask it how to query the data in the portal, and it can help them by building a query for them.

Hayden Covington:

So if they're struggling with a query language, it can help give them a query. And we've gotten some pretty glowing reviews from some of our customers on that. You just have to use the AI correctly, in the specific use cases that make the most sense. Some other examples of relatively quick wins that you could do: a CTI analyst agent doesn't have to be anything insane, as long as it saves you some time.

Hayden Covington:

Report editing, that's where it gets a little more specific. These are gonna be really customized to you. Detection engineering, you can have this as, like, a shared project or a very quick win agent. If you go the full spectrum and basically take the code reviewer and give it a bunch of steroids, you can have it do full fledged detections for you. I'm gonna get to that.

Hayden Covington:

I have that as one of the build-this-month use cases, so I'll talk to that here shortly. But one of our folks has really taken the lead on that, and the things that they've built have been incredible. I could draft a threat detection from my phone. It would be that easy. And if I have speech to text, I can do it with my voice.

Hayden Covington:

It'll take me thirty seconds. And then a project manager, which I already talked a little bit about. Here's another quick win, though, before we jump to the build-this-month section. I've been using Raycast for, like, a year. If you don't know what Raycast is: on Windows, it's a different key.

Hayden Covington:

But if I'm on Mac, I press command space. Normally, that would open up the app launcher. If you set up Raycast, it opens up Raycast instead. So think of it like opening up a utility. So I can open this up.

Hayden Covington:

I can type in commands and execute different applications or presets that I have, or code. I put together some of the stuff that I've used in a GitHub repo a couple months ago. That repo is linked there. But a couple quick examples: I can type CTI and then drop in some context, and it'll have a specific AI go gather me context based on a specific prompt. I can type in APT and then a number, and I'll get a profile of that advanced persistent threat group.

Hayden Covington:

Same with CVE lookups. I can type CVE and then enter the number, and it will come back to me with all the CVE details and links and resources. My favorite, stupid simple as it is: I can highlight any text on my screen, press the action key, whichever it is that you set up, and then a letter key depending on which website I wanna go to. And I can then open that thing up inside of VirusTotal, inside of Hybrid Analysis, inside of Spur, inside of whatever. So I could highlight an IP address, press command v, and then it opens that thing up immediately in VirusTotal.
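That highlight-and-open trick boils down to mapping a key to a search URL for the selected text. A rough sketch of the idea; the URL patterns mirror the public web UIs of those services, so treat them as illustrative rather than stable APIs:

```python
from urllib.parse import quote

# Hypothetical key-to-site map; the actual Raycast version is one
# script command per site.
SITES = {
    "v": "https://www.virustotal.com/gui/search/{}",
    "h": "https://hybrid-analysis.com/search?query={}",
}

def lookup_url(key: str, selection: str) -> str:
    """Build the enrichment URL for whatever text is highlighted."""
    return SITES[key].format(quote(selection.strip(), safe=""))

print(lookup_url("v", "8.8.8.8"))
```

In the real workflow the launcher hands the URL to your browser; the only logic worth writing is the URL-encoding, so a stray space or slash in the selection doesn't break the lookup.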

Hayden Covington:

Saves, I don't know, two or three seconds maybe, but it's a convenience thing, and it's a quick win. I'm a big fan of Raycast. I use it for a lot of things besides that. I can kick off coding agents from Raycast, but that is just a quick win. This is an example of the APT launcher here.

Hayden Covington:

As you can see, I press command space, type APT, and then I type in one. I hit enter, and then it goes through and gives me a profile of that group using whichever agent I've configured to operate on that one. In this case, Sonar from Perplexity with reasoning built on top of it, so that way it gives me a thorough profile. And then you can see there I have citations, so I can actually go to the source if I really want to. Okay.

Hayden Covington:

I might run a little long. Hopefully, Jason will forgive me. But here's a couple things that we can build this month. These things are going to be powerful. You're gonna get big returns on these, but some of them are going to be trickier.

Hayden Covington:

Some of them are going to cost you money. Sometimes a good bit of it if you wanna do them correctly. This one's pretty sick. I talked about this at the start, but case summaries and case titles don't have to be cut and dry. You can use AI to make them actually useful and actually functional.

Hayden Covington:

Sounds like a weird thing. Like, why does my case title need to be functional? I'll show you. So generic titles, they don't help anybody. But if you utilize AI correctly, you control it, you give it the right context, it can do a pretty good job.

Hayden Covington:

This one's gonna take you a couple hours to set up, depending on what platform you use. It's gonna require some tuning and some testing to make sure it's right. Our SOAR, the SOAR that we use, makes this pretty easy. So it isn't hugely complicated in terms of basic setup. Depending on your platform availability, it might be a little harder.

Hayden Covington:

But here's an example. High severity CrowdStrike alert. Great. Awesome. Okay.

Hayden Covington:

That's cool. That could mean a million different things. It could mean that, you know, ransomware is firing right this instant. It could also mean that somebody ran, you know, net user depending on your settings. Right?

Hayden Covington:

So that's not super helpful. If we have this context fed into our model and use it to generate a better title, Cobalt Strike beacon detected on hostname, and maybe that's even a DC, that's gonna catch your attention, I think. It definitely gets your attention better. It definitely helps you, the human analyst being augmented, start to build context around what you need to be looking for, what you're going to see when you open this up, what your next steps need to be. Here's an example of a very generalized alert title prompt.

Hayden Covington:

This one is a system instruction. It's not an agent instruction. They're slightly different, so I'm not giving it a role. I'm not telling it you're a case title author. This is just a system instruction, basically telling it, hey.

Hayden Covington:

You're generating a concise alert title from this detail. Here are some structure examples of how to utilize it. Be very careful with your examples. We found that one time we gave it an example with, like, a real case name, and it started using some pieces of that in actual cases, which got a little confusing. We had to very, very specifically refine what data we give it.
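To make the contract that instruction enforces concrete, here's a deterministic sketch of the transformation it's aiming for: specific behavior plus host, kept short. The alert field names are hypothetical; in the real setup the model supplies the behavior description from the full alert context:

```python
def draft_alert_title(alert: dict) -> str:
    """Turn raw alert fields into a specific, scannable case title."""
    behavior = alert.get("behavior", "Suspicious activity")
    host = alert.get("hostname", "unknown host")
    # Cap the length so the title stays readable in a case queue.
    return f"{behavior} detected on {host}"[:80]

alert = {"behavior": "Cobalt Strike beacon", "hostname": "DC01"}
print(draft_alert_title(alert))
```

The system instruction is doing this same job in prose: state the structure, give guardrails like length and specificity, and let the model fill in the behavior.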

Hayden Covington:

And then be very clear. Like, hey, the title needs to do these things. Relatively straightforward, but it is a do-this-month project just because of the technical requirements of being able to set this up. This one, to be totally honest, is one that I'm still trying to get to work how I want it to, but quality assurance of SOC tickets is an important thing.

Hayden Covington:

You need to ensure that analysts are including all the necessary data. You need to ensure that we're using clear communication with our customers, that we're giving clear recommendations. If we say, you know, hey. Here's a hash. We wanna make sure that we link them to VirusTotal or something.

Hayden Covington:

This one, I've struggled a lot with a couple things, and I'll be very honest with you about that. But this is a pretty powerful use case if you get it right. So you have two kinda options. I've gone for both in my implementation of this, which is probably 75% of the way there. We can flag things preemptively, which is feedback as analysts are actually working.

Hayden Covington:

Let's say if they post a comment to a ticket, it could flag and tell them, hey. You included a bunch of screenshots of logs, but you never actually gave the query that you used to find them. That's part of our documentation right here at this link. Make sure that you do this. That's a preemptive flag.

Hayden Covington:

And then we could have another flag that when we click escalate on a ticket, it takes a second to just make sure everything's correct before it finalizes that. You probably don't want to hold it for QA in most cases as long as that's a noncritical issue. And my QA workflow flags things as critical versus trivial with some stuff in between. But you could have it escalate to the customer and say, hey. Should you have included x, y, or z?

Hayden Covington:

Those are kind of your options here. What it does catch, though, is a lot of those very common things that you try to document and try to get people to do consistently. But if we're in a stock investigation, it's not likely that we're also reading the documentation line by line as we go. So missing evidence, unclear explanations, inconsistent formatting, a lot of common things that you might have happening on your team. You can catch these with QA because, again, they're cut and dry.

Hayden Covington:

Is this in the ticket somewhere? If not, tell them to put it in. If it is, then great, we're good. That's a very AI-able task.
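Those cut-and-dry checks could look something like this sketch, with the critical-versus-trivial split mentioned above. The ticket fields and check wording are hypothetical, not the actual BHIS workflow:

```python
def qa_ticket(ticket: dict) -> dict:
    """Run deterministic QA checks and split findings by severity."""
    findings = {"critical": [], "trivial": []}
    # Screenshots of logs without the query that produced them: block escalation.
    if ticket.get("screenshots") and not ticket.get("query"):
        findings["critical"].append("log screenshots present but no query documented")
    # Hashes mentioned without an enrichment link: nag, but don't block.
    for h in ticket.get("hashes", []):
        if h not in ticket.get("links", ""):
            findings["trivial"].append(f"hash {h} has no enrichment link")
    return findings

ticket = {"screenshots": 3, "query": "", "hashes": ["abc123"], "links": ""}
report = qa_ticket(ticket)
print(report)
```

The preemptive flag runs this on every comment; the escalation flag runs it once more when the analyst clicks escalate, holding the ticket only for critical findings.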

Hayden Covington:

This one, I do have a role for. This is a reviewer. This, again, is a very simplified prompt. This one's a little bit wordier. A lot of this is stuff that I have in there for specific reasons.

Hayden Covington:

And if you were to use this, you'd want to tear it up and make it your own. But I found that you have to be very specific. You have to give it examples. I actually gave it a rubric of how you would grade a ticket. It makes it very simple for it to understand a threshold of good enough versus not good enough.

Hayden Covington:

And I then had to be very clear that you should not give this grade to the analyst, because I found that sometimes it would do that. So this is a sample prompt for this sort of QA process. This one, again, is very tricky. At least, I found it to be very tricky. I'm sure if I zoomed out a little bit and used, say, Claude, it would be a little bit easier.

Hayden Covington:

I'm dealing with some very specific guardrails, I guess. Not necessarily guardrails. Limitations. That's the word. I have some very specific limitations with how I'm trying to do this.

Hayden Covington:

I'm trying to do it a very specific way. This is the exciting one. Detection engineering. Very exciting. You can draft detections, the first draft of them, using an AI from start to finish.

Hayden Covington:

So I can send my AI agent for detection engineering a threat report. It can then go through the whole detection engineering process for us. It can draft the initial query. It can test it in our pipeline. It can make sure it conforms to the actual syntax that it needs to be in.

Hayden Covington:

It can even validate it against sample data and make sure the sample data is in fact matching the syntax that it just wrote. So you can go through this whole thing, and you can basically cut your time to hitting the PR queue down to almost nothing. What would have taken a couple hours before, maybe a little bit less, but usually on average an hour or two, can now take minutes. And you could do a number of them at a time.

Hayden Covington:

You can have this AI go and generate these drafts. You have effectively given yourself an intern or a tier one analyst or whatever, however good your model is. You've given yourself somebody to go do this work. But just like all things, if I had an intern doing detection engineering for me, I would need to make sure that they're doing it correctly. This is, in a lot of ways, not all that different from that.

Hayden Covington:

It still requires me to validate their work, still requires me to make sure that they documented things correctly, but this is a very, very powerful workflow. I cannot emphasize enough that this is probably the most powerful one that I'm talking about today. Again, it starts from the input. Here's a threat report. Here are some IOCs.

Hayden Covington:

Here's a MITRE technique. We need to define what the output is. We need documentation for our SIEM. How should you be writing these queries? How should you be naming them?

Hayden Covington:

What's our naming format? Where should they live? You need to have all that detail in there. And then oftentimes, the analyst is still gonna own that validation step, the tuning step, documenting and testing it, depending on your ability to deploy this. I'll brag a little bit, even though I've only played a part in it.

Hayden Covington:

But the detection engineering agent that someone on our team has put together, and that we've all kind of started utilizing, does pretty much all of this. It just gets to the point where we have to look at it and sign off on it. It is pretty dang good. Again, I won't harp on giving it the proper context. You give it the SIEM that you use, just like we have before.

Hayden Covington:

If you can give it very specific context on what it needs to do, very specific instructions, the chances of it doing a better job go up. I actually have a Claude skill. If you don't know what that is, I have documentation and links at the end that will kinda explain it. But I have a Claude skill around specifically how Claude should create a GitHub issue for a new detection, what exactly it needs to include and how. So I have a very specific instruction for Claude whenever it does this, so that way it gives exactly the context that our detection engineering agent needs.

Hayden Covington:

And here's that example prompt. Right? I promised it. This one is going to be very hit or miss depending on your setup. You need to tune this to you.

Hayden Covington:

You need to test it. You need to validate it. You need to spend a lot of time making sure that this works. This one can also benefit a lot from having, like, a validation checklist to make sure that it tested against mock data, that it tested it in your deployment pipeline, that it validated the syntax, ran a linter, like, whatever your steps are. Like, you need to make sure that you test this prompt thoroughly.
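A validation checklist like the one described can be as simple as a set of named pass/fail checks that all have to pass before a draft hits the PR queue. The specific checks here are illustrative stand-ins, not a real pipeline:

```python
def run_checklist(draft_query: str, mock_data_hits: int) -> dict:
    """Every named check must pass before the draft is ready for review."""
    checks = {
        "non_empty_query": bool(draft_query.strip()),
        "balanced_quotes": draft_query.count('"') % 2 == 0,
        "matched_mock_data": mock_data_hits > 0,
    }
    checks["ready_for_pr"] = all(checks.values())
    return checks

result = run_checklist('process_name="mimikatz.exe"', mock_data_hits=2)
print(result)
```

Having the agent output the filled-in checklist, the same trick used in the code review prompt earlier, forces it to actually walk each step instead of skipping to a verdict.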

Hayden Covington:

But if you've already done the Claude projects and you've deployed them to your team, that's great. If you really want the most bang for your buck from a SOC, setting up a detection engineering agent properly will make you so much better. Let's just dream for a second. I promise no theoreticals. This isn't a theoretical.

Hayden Covington:

You could see a threat report pop up. You could have an agent that monitors this threat feed. It can go and use that Claude skill to draft up an issue and send it to this detection agent, which could then go start drafting a threat detection for this new attack vector that just dropped. Once it puts it into the GitHub pipeline, it tags your code reviewer agent, which goes and code reviews it. By the time you show up, a human being that's very busy with all the human things we have to do, like drinking water, you can see all of the work that has already been done and make sure that it's right.

Hayden Covington:

You have not had to spend what would have been hours doing all of this grunt work and Google searching and all these other bits and bobs. You show up to something that has already been researched, and you just need to sign off on it or ask for changes. This is high value. I wanna emphasize that. And then last couple slides here before Jason gives me the hook. Isn't that a thing, where you stick out, like, a hook and pull someone off stage?

Hayden Covington:

Isn't that a thing? I don't know. I'm relative. It's a thing. Okay.

Hayden Covington:

Thank you. That's a thing. Jason's gonna give that to me if I don't finish up. Last couple slides here. AI is gonna fail in a couple key areas, maybe not always, but occasionally.

Hayden Covington:

Real-time accuracy: sometimes AI is going to believe its training data over what might have changed since then. Careful with that. It can believe something is still how it was, and maybe, you know, things have changed drastically. I've had plenty of agents that have said, oh, this is how you best secure or utilize this tool, and that's, like, a version ago. Right?

Hayden Covington:

So they all have a point in time where they have to be released. You need to make sure that if what you're doing is time sensitive, they are actually doing, you know, validation of what is most recent. And then a lot of the rest of these are around human judgment. So final escalation decision, you as the human should be the one that makes the final decisions. That is very important because if there is no one watching these machines, they could mess up, and we would never know.

Hayden Covington:

Things they've never seen before: if they don't have context around it, they're not going to know how to respond. At the end of the day, they are a very smart autocorrect, is one way to put it. And if something is new and novel and has never been seen before, they may not know how to respond. Another one, again: replacing analyst intuition. If you've been in a SOC long enough, you know what it is I'm talking about, and it sounds very mystical.

Hayden Covington:

I've been in a SOC long enough that sometimes I just get a feeling that something's wrong. Like, I'm looking at logs. I can't find anything that's off, but I just have this feeling. Right? Something doesn't seem right.

Hayden Covington:

Everything seems on the surface okay, but I'm gonna talk to the customer about this. And that has paid off a few times. So analyst intuition, the human spirit, indomitable. Right? The good thing is most of these problems can be mitigated even if by, you know, very creative measures.

Hayden Covington:

Each of these has an inverse. For the real-time accuracy one, I put them in the description there. For the final escalation decisions, you need human judgment. A novel thing that I'll drop on you right before this last slide is that I've seen people utilize agents that then converse with other agents in a council. So you have Claude Code, and it goes and talks with another Claude Code or OpenAI Codex, and they compare notes and then come back to you with the final decision.
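The council pattern reduces to: get independent answers and only auto-accept on agreement. A stub sketch; the lambdas stand in for real model API calls, which you'd wire in yourself:

```python
def council_decision(ask_a, ask_b, question: str) -> str:
    """Two independent reviewers must agree, otherwise a human decides."""
    a = ask_a(question)
    b = ask_b(question)
    return a if a == b else "DISAGREE: escalate to a human"

# Demo: the two "interns" disagree, so the verdict goes to a human.
verdict = council_decision(
    lambda q: "benign",
    lambda q: "malicious",
    "Is this alert benign?",
)
print(verdict)
```

The value is the disagreement signal itself: two models agreeing can still both be wrong, so the human-in-the-loop rule from the takeaways still applies to unanimous verdicts.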

Hayden Covington:

Pretty sick. It's like you then have two interns, and they're chatting to make sure they're checking each other's work. Key takeaways: AI is a force multiplier. You need to train your team first.

Hayden Covington:

Otherwise, they will not know how to guide these agents, how to correctly steer these AIs to the right decisions and make sure that the decisions the AI makes are correct. You need to address these blockers, your cost, your policy, your buy-in, but you do need to start small. If you leave today and all you do is make custom agents that are shared across your team, that is a win. You have won in that circumstance.

Hayden Covington:

You've gotten something out of this, and you have some sort of positive output. That said, you still need to keep your humans in the loop. That is not optional. These are not omnipotent machines or whatever you wanna call it. They make mistakes.

Hayden Covington:

You need a human. And, again, I'll challenge you. Take something that I talked about today, go implement it, play with it, update your own little prompts, and just go through and get some value out of this. I think a lot of you will be surprised at how powerful these tools can be if you use them correctly. I have some links.

Hayden Covington:

They are in the slides, which are in the Discord. That's the Raycast GitHub repo, and then that's a link to a Notion page that just kind of covers a lot of what we talked about today. I have things that are copyable. I have links to, you know, Claude skills and Claude hooks and different AI things that I might have mentioned that, you know, if you're more advanced, you can go still learn some stuff. And I have a quote.

Hayden Covington:

This one upsets some people sometimes, so prepare yourselves. The cost of getting to know AI is at least three sleepless nights. What does that mean? That means if you really wanna understand how to utilize AI, you're gonna have to put in time like everything else. You're gonna need to spend some time playing with it, spend some time using it, find out what does and doesn't work.

Hayden Covington:

It's like all things. You need to practice it, and it's a tool. It is not exempt from that. And then that is it. I ended two minutes late, technically, but I'm sure Jason's here to berate me for that.

Hayden Covington:

Technically? Technically. I mean, you wanna get technical.

Jason Blanchard:

I just wanna say, I think everyone can now see why I like Hayden so much.

Hayden Covington:

Is it because I

CJ Cox:

No. I'm not sneaking it. So

Hayden Covington:

CJ famously is here to heckle me. I'm pretty sure. Yeah.

Jason Blanchard:

Also, Patterson's here. Patterson joined in the back end just to watch. He wanted to attend the webcast, and so I asked him to come in. So, Patterson, if you're here, hello. Hey, Patterson.

Jason Blanchard:

Are you here? Can you hear me, Patterson?

Patterson Cake:

I am here. Can you hear or see me?

Hayden Covington:

Yes. Patterson also loves to heckle me. I'm sure. So

Jason Blanchard:

Patterson's got an eight-hour workshop on incident response simplified on April 3, part of the SOC Summit. I did put the link to the SOC Summit into the Zoom chat. It is a six-hour event that we're doing on March 25. Hayden will be there. It's during RSA.

Jason Blanchard:

So if you're not at RSA, this is where you can be. So we're doing our not-at-RSA event, and you can join us at the SOC Summit. It's gonna be six hours. We got so many great speakers.

Jason Blanchard:

It's a free event to register for the summit. There is some paid training that comes after if you'd like to take. Hayden's gonna have a class there. This would be great. So Hayden, if you could sum up everything that you talked about today in one final thought, and then we're gonna do some q and a, what would that final thought be, Hayden?

Hayden Covington:

My final thought would be that AI, like everything that we deal with, is a tool in some way. And so if you harness that tool correctly and you have the knowledge to use it correctly, you will get a lot of benefit from it. But that said, this tool is a little bit different than a lot of the ones that we're all familiar with. So it takes a little bit more to understand. It can be a little bit unpredictable, but that also means that you can potentially get significantly more gains from it.

Hayden Covington:

So all that to say, you need to spend some time learning it, spend some time understanding it. But if you can really figure out how to use it correctly, you will be surprised.

Jason Blanchard:

Thank you, Hayden. Thank you everybody for joining us. Hopefully, you join us for a workshop or a training class in the future. I mean, that's why we're here. We want you to see who we are.

Jason Blanchard:

We want you to understand the kind of knowledge that you would learn in our training classes. We want you to do it for free so you know what you're getting yourself into. Because we don't wanna just, like, you show up and you're like, what is this? Who is this? I don't understand what this is.

Jason Blanchard:

And so we do these as a way to introduce ourselves in a way for you to feel comfortable with the decision you're going to make about where you get training from. Because we know that whenever someone's like, hey, Jason, you wanna take training? I'm like, from where? Right? Like, I don't know.

Jason Blanchard:

Like, from who? Like, what am I supposed to learn? So this is helping you, like, well, what am I interested in? What brings me joy? What do I wanna learn from?

Jason Blanchard:

Who do I wanna learn from? And so that is what this is for. Alright. So thank you so much for joining us today. We're gonna stick around for some q and a, just a little bit of q and a.

Jason Blanchard:

CJ, if you could queue up some questions that you think would be good for Hayden. And then Patterson, before we do that, Patterson, what is the thirty-second pitch for your upcoming eight-hour workshop?

Patterson Cake:

I'm gonna correct everything that Hayden told you that was incorrect today.

CJ Cox:

Both of them. Oh, god. No.

Patterson Cake:

Just kidding. Thirty-second pitch. Honestly, I get an opportunity to take my rapid triage workflow and expand it to eight hours, which I'm super excited about. We've talked about pieces and parts, investigative components, and we're gonna spend eight hours studying a real-world business compromise case and applying the rapid triage investigation process start to finish, where, again, simplified incident response is the goal and the outcome.

Hayden Covington:

You gotta listen to him. He has really cool background lights.

Jason Blanchard:

Also, like, Patterson, I first met Patterson a long, long time ago when I was at the SANS Institute. And, like, as soon as he spoke for the first time, I was like, that guy. I wanna learn from that guy. So I've known Patterson for a long time. He's fantastic.

Jason Blanchard:

He's a great speaker, and he loves to, like, give his knowledge away. So please join us for that. If you have not yet checked in for Hackett, and you're like, man, there's so much to do on a Black Hills or an Antisyphon webcast. There's just so much. There is.

Jason Blanchard:

We wanna make sure that you get checked in for Hackett today so that we can send you your reward when you hit ten, twenty, thirty, forty, and fifty. And 100. We just added 100. So alright. CJ, what's your first I

CJ Cox:

got questions, but first, I got a little thing. A bunch of people put their hands up in the Zoom, you know, webcast thing.

Hayden Covington:

Mhmm.

CJ Cox:

I messaged them to see if they had a question or something, and they didn't respond. So if they've got a question, get it in the Discord below. So here's a question from Chad Cheddar. They're doing AMA. He says, do you see a problem with analyst level of knowledge going down as reliance on automation such as AI becomes more widely deployed?

CJ Cox:

Seems to me that analyst and responder hands on time will necessarily decrease.

Hayden Covington:

Yeah. I have that question pulled up on my iPad. Yeah. That is a concern of mine, is that the more automation and AI that we have, the less hands-on-keyboard time that analysts will have, meaning the less they'll understand these concepts, meaning the less that they'll be able to validate that the AI is correct. I don't know the correct solution to that yet.

Hayden Covington:

If somebody knows it, please please share it with me. But that that is a concern that I've had.

CJ Cox:

I've I've got it.

Hayden Covington:

I okay. Well, real quick. I will say, this isn't a new problem. It's like SOAR has come out, and SOAR automates a lot of crap anyway. And so this isn't anything new.

Hayden Covington:

SOAR is automating a lot of the garbage. What I've generally tried to push in the SOC as much as I can is: we will give you access to these tools once you prove that you know how to validate that what they say is correct. What have you got, CJ?

CJ Cox:

Yeah. Cloudstrife also kinda had this in there, and that's the thing that people will do things wrong. They won't use tools properly. I've been around for a long time. I I go back to total quality management and a million things I've read, process reengineering.

CJ Cox:

Businesses will do everything wrong. They will take great new ideas and they will misapply them. This comes with the territory. The reason we get together and I see people in Discord is we're sharing that knowledge and trying to come up with plans for how to get your organization to do right. And you guys just listening to this, listening to pitfalls and points of how to do it right and sharing the how to's, that's how we overcome that.

CJ Cox:

Because businesses, because they're human beings and AIs, will do things wrong.

Jason Blanchard:

Yeah. Oh, also, to the 100 people that trusted the link that I put into the Zoom chat to get the free InfoSec Survival Guide for the incident response edition: thank you. To trust a link called Spearfish General Store, and then put your actual information into it so we can mail it to you for free. If you aren't in the United States, you can get it for free at promazine.com.

Jason Blanchard:

I did put it in the Zoom chat. Feel free to order those. We love mailing them out. It's not a catch. There's no, like, gotcha.

Jason Blanchard:

It's just that's it. We wanna send you something for free, something educational. And Patterson here helped pretty much develop that entire guide. And so Patterson is our incident response subject matter expert, and it is like his brain and the community's brain all in this one thing that you get for free.

CJ Cox:

I have a way to reassure people that our QR codes are not hacks. It may be a little cynical, and I floated it out at the Denver OWASP meeting the other night. And that's that you know it's not a hack because we'll only hack you if you're paying us. Is that too simple? Yeah.

Jason Blanchard:

Yeah. I like that.

Hayden Covington:

Yeah. But if you did sign up for that, enjoy your phish training.

Jason Blanchard:

Any other questions, CJ?

CJ Cox:

I you know, I didn't see a whole bunch. I usually flag them. Chad's was a good question. It was a really

Hayden Covington:

good question.

CJ Cox:

Had some really good points going through.

Jason Blanchard:

This was a question. What is the minimum hardware required to locally host an AI solution that is sufficient for SOC operations?

Hayden Covington:

Oh, boy. For SOC operations locally? I mean, it depends on your tolerance. It could be, like CJ said, your MacBook Air. It could be, like, eight GPUs.

Hayden Covington:

It depends on how long you're willing to wait. For, like, a SOC, if you wanna utilize AI in a SOC, I'd put that thing on the cloud and just buy, like, a Claude Teams license for everybody, just so that you're not having to deal with the actual infrastructure requirements. But, you know, that's a really hard question. Like, how should I set up my server? I don't know.

Hayden Covington:

It depends on how much I'm

CJ Cox:

gonna use it. Trying to find another question. I don't see one.

Hayden Covington:

Yeah. Somebody somebody says, looking forward to Denver. Yeah. We're doing Wild West Hackenfest Denver in a couple weeks, which is super exciting.

Jason Blanchard:

I was just thinking about how no one's asking questions because they're just asking AI.

Hayden Covington:

Yeah. It means that either I did such a good job that everybody understands it, or I did such a bad job that everyone's, like, done listening or tuning out. Yeah. Great job. And it's like how I listened to, like, lectures in college.

Hayden Covington:

It's like, yeah. I'll do the test later. I'm not gonna be here because I have to be.

CJ Cox:

Yeah. Why ask us when you can ask AI? Oh, gosh.

Jason Blanchard:

So, Patterson, you know, someone asked this question earlier. They would love to see a webcast just like this about AI use but in incident response. Are you currently using AI in your incident response work?

Patterson Cake:

I don't believe in AI. I think it's a passing trend. And no. I think we are actively testing it and actively using it to validate investigations. So, yes.

Patterson Cake:

Yes and yes. And ongoing testing, and testing locally, and testing with Claude. And one of the outcomes, and a teaser trailer for my eight-hour class, we'll talk about it a little bit. We'll talk about how rapid endpoint investigation, the way we extract and evaluate triage data, honestly sets us up really nicely for some interactions with AI to parse, process, and validate findings.

Patterson Cake:

So so we're getting there. Stay tuned. Ted, as

CJ Cox:

a follow-up, would you recommend Claude for Teams for a small-scale LLC consultancy?

Hayden Covington:

Yeah. I mean, Claude for Teams isn't fundamentally different than, like, Claude Pro. Claude for Teams just allows you to share across different users and stuff. But, yeah, I would definitely recommend Claude just in general, I think. Like, the amount of things that you can hook it into now. Like, they have a native plug-in for Excel, which is very weird, but also kinda crazy how good it works.

Hayden Covington:

But, yeah, I recommend it even for, like, just small companies. Like, you can get a lot of really good benefit out of it. Oh, someone else asked another one. Are you training your own? I think everybody is.

Hayden Covington:

Everybody has their own data that is unique to them. And so if you're in the SOC space and you have very unique data, I think you would be wasting some potential if you are not considering training a custom model around what is malicious activity and what isn't.

CJ Cox:

What one was oh.

Patterson Cake:

I'm sorry. Real quickly. Part of the output of my eight-hour class will be some case data, some test case data, which is a fantastic thing to pull and play with, feed to AI, because it's not private. And we will share that with you as part of that course so you can take that and practice with it, play with it, test, train without fear of exposing sensitive data, which is huge.

Hayden Covington:

Yes. If you have, like, a dataset where, I imagine, Patterson walks you through the investigations and how to, you know, actually know the end result. But if you can validate what the end result actually is and give it to an AI and walk through that, if you know what the end outcome is, you can use that to test to make sure that the AI is correct, practice how to, you know, query in different ways. Like, yeah, it's like you're solving a puzzle, but you already know how it's completed, so you can solve it however you want.
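The validation loop Hayden describes, checking an AI's conclusions against practice cases whose outcomes are already known, can be sketched in a few lines of Python. All case IDs, labels, and the function name here are hypothetical illustrations, not part of any specific course dataset:

```python
# Toy sketch: score an AI assistant's verdicts against practice cases
# whose true outcomes are already known (all names are hypothetical).

ground_truth = {
    "case-001": "malicious",
    "case-002": "benign",
    "case-003": "malicious",
}

ai_verdicts = {
    "case-001": "malicious",
    "case-002": "malicious",  # the AI got this one wrong
    "case-003": "malicious",
}

def score_verdicts(truth, verdicts):
    """Compare AI verdicts to known outcomes; return accuracy and misses."""
    misses = [case for case, label in truth.items()
              if verdicts.get(case) != label]
    accuracy = (len(truth) - len(misses)) / len(truth)
    return accuracy, misses

accuracy, misses = score_verdicts(ground_truth, ai_verdicts)
print(f"accuracy={accuracy:.0%}, re-check by hand: {misses}")
```

Run against a known-outcome dataset like this, the disagreement list tells you exactly which investigations to re-walk by hand, which is the "human validating the AI" habit the whole webcast keeps coming back to.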

Jason Blanchard:

Alright. I think that's it. Any any last ones before we wrap up for today?

CJ Cox:

What EDU resources do you recommend for learning prompt injection techniques for internal IR teams?

Hayden Covington:

That, I think, is a webcast. I would check out, I don't know a specific reference, I would check out our blogs or some of our other Antisyphon instructors. I know a lot of folks are focusing around how to actually abuse AI because, I mean, everybody's connected into everything, so it's a huge attack surface. We just talked on the Black Hills newscast on Monday about Claude Bot and how if you can inject that thing, you know, you're getting it to send emails and, you know, you can access somebody's entire life if they've connected it to it.

Hayden Covington:

So that one, I don't have any specific references. And then someone else asked, Luke asked, have you found that the basic subs for Claude and other vendors are not enough to truly learn? Yeah. I was on Claude Pro for a while, and I hit their usage quota about every four hours, which is when it resets. So even now on Claude Max, on, like, Wednesday, I'm at, like, 50% of the weekly usage quota.

Hayden Covington:

So it's dependent on how much you use it. So, unfortunately, yeah, the cheap plans for ChatGPT are usually okay. But for, like, Claude, I found that their Pro plan is not enough.

Jason Blanchard:

Yeah. Last one, because I think this one leads into one of your classes, Hayden, is how do you manage new detections as compared to already implemented detections? More isn't always better. Old and new attacks frequently share the same IOCs.

Hayden Covington:

Yeah. That's a good one. I talk about that a lot in my detection engineering workshop and in my class, but I've kinda adopted this method of thinking: you can have some detection overlap. Ideally, you want your detections to be distinct. Otherwise, you'll potentially get multiple alerts for the same activity.

Hayden Covington:

But once you add a layer above that, not necessarily talking about AI, in this case, it's mostly happening on the SOAR level for us. But once you have a level above that deduplicates and combines different alerts into a singular case based on a singular entity, you can find that if you fire alerts for multiple similar but distinct activities, that can bubble up that alert to be more significant more quickly, because you may have an alert for, you know, things that are adjacent but not quite the same. What you generally wanna avoid, though, is alerts that are gonna fire multiple times for the exact same command line string, for example. Like, if you're picking different parts of a command line.

Hayden Covington:

But if you're firing on a couple similar command lines, that could still be pretty powerful.
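The SOAR-layer behavior Hayden describes, deduplicating exact repeats while stacking distinct-but-related alerts into one case per entity, might look roughly like this minimal Python sketch. The field names, rule names, and hosts are all made up for illustration, not taken from any particular SOAR product:

```python
from collections import defaultdict

# Hypothetical alert stream: several detections firing on the same host.
alerts = [
    {"entity": "host-17", "rule": "encoded_powershell"},
    {"entity": "host-17", "rule": "lsass_access"},
    {"entity": "host-42", "rule": "encoded_powershell"},
    {"entity": "host-17", "rule": "encoded_powershell"},  # exact repeat
]

def build_cases(alerts):
    """One case per entity; identical rules dedupe, distinct rules stack."""
    grouped = defaultdict(set)
    for alert in alerts:
        grouped[alert["entity"]].add(alert["rule"])
    return {entity: sorted(rules) for entity, rules in grouped.items()}

cases = build_cases(alerts)
for entity, rules in sorted(cases.items()):
    # More distinct rules on one entity suggests a more significant case.
    print(entity, len(rules), rules)
```

The point is that host-17 surfaces as one case backed by two distinct detections rather than three separate alerts, so similar-but-distinct rules raise a case's significance instead of creating duplicate noise.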

CJ Cox:

Yeah. Hey. Real quick. I can't find Cali's. There it is.

CJ Cox:

What LLM would you recommend for local SOC AI agent on a 16 gigabyte VRAM with some CPU offloading?

Hayden Covington:

I mean, that changes every week, unfortunately, is the answer. I'm pretty sure, I'm gonna Google it on my side screen here. So I think Kimi K2.5 is a new one you wanna look into. Obviously, I can't attest for the security of these models.

Hayden Covington:

But, supposedly, this is an open-source model that I was reading about this morning, which, supposedly, again, benchmarks pretty comparably to, you know, the Claudes, the Geminis. Apparently, it's pretty powerful. I have not tried it, but from what I've heard, Kimi K2 is pretty good.

Jason Blanchard:

Alright, everybody. I gotta run. I got another thing to get to. We're developing another deck of backdoors and breaches. So we got a bunch on the way.

Jason Blanchard:

So if you like playing backdoors and breaches, we have new expansion decks. What was that look for, CJ? I'm always making stuff. Alright, everybody. New one.

Jason Blanchard:

Alright, everybody. Thank you so much for joining us today for this Antisyphon Anticast. We appreciate you. We know that you could have done anything with this time. You decided to spend it here.

Jason Blanchard:

And so, Hayden, thank you for sharing your knowledge with the community. Hopefully, people will show up for your workshop and your upcoming class. And, Patterson, thanks for being here and learning something new and also telling us about your upcoming workshop. Join us at the SOC Summit. It is free.

Jason Blanchard:

It is six hours, and it's gonna be awesome. CJ, thanks for being here, being that other voice on the webcast and helping to ask good questions. Appreciate you. Appreciate your leadership, and appreciate you supporting all the things that we do here in the contact meme.

CJ Cox:

Funny to get me to stay.

Jason Blanchard:

Thanks, everybody. That's it. Bye bye.

CJ Cox:

Farewell.

Jason Blanchard:

Megan, kill it. Kill it. Little fire.


Creators and Guests

Jason Blanchard
Host
Jason Blanchard
Jason Blanchard has been happily adopted into the hacker community at Black Hills Information Security (BHIS) since 2019, even though he “works in marketing.” He’s had every dream job imaginable: teaching filmmaking, owning the world’s most famous comic book store, and fostering the infosec community efforts for SANS. While some at BHIS call him the “Director of Excitement,” he is formally known as the Excitement Co-Creator. In his day-to-day work of “sucking at capitalism,” Jason enjoys helping others, sharing his knowledge, and giving away lots of free stuff. When he’s not working, Jason spends time with his wife and daughter, hosts a semiweekly job-hunting Twitch stream, and enjoys writing short stories and performing stand-up comedy.
CJ Cox
Guest
CJ Cox
CJ Cox is the Chief Operating Officer for Black Hills Information Security (BHIS). He joined the team in 2016 and is responsible for managing the day-to-day operations and business capture of BHIS. CJ has over 25 years of experience in the IT industry as a systems administrator as well as an information system security officer, manager, and engineer. CJ feels that this is his dream job and that his favorite parts are the people he gets to work with and making security better. He is a retired Marine reservist and father of 4 who enjoys skiing, camping, golfing, and playing chess in his free time.
Hayden Covington
Guest
Hayden Covington
Hayden Covington joined Black Hills Information Security (BHIS) in the Summer of 2022 as a SOC Analyst. He chose BHIS after hearing many great things over the years and seeing the quality of work, as well as finding people who have the same passion for the field as he does. His favorite part of the job so far has been the community. Previously, Hayden worked in a SOC for a Naval contractor, where he also served as their SOAR project manager and SME, as well as insider threat lead. When he’s not working, Hayden can be found doing anything athletic (like triathlons!), as well as enjoying video gaming and Formula 1.
Patterson Cake
Guest
Patterson Cake
Patterson Cake joined the Black Hills Information Security (BHIS) pirate ship in June of 2023 as a Security Consultant focusing primarily on detection engineering and digital forensics and incident response. He chose BHIS because, to paraphrase, “doing cool stuff with cool people” and “making the world a better/safer place” is exactly how he wants to spend his professional time and energy. It also helps that he has a bit of history with a couple of awesome folks that have been with BHIS for many moons. Prior to joining the team, Patterson helped build and lead a DFIR practice for an MSSP, worked as a senior security engineer for AWS Managed Services, and spent several years in enterprise cybersecurity, often healthcare related, focusing on intermingling offensive security and incident response in technical and leadership roles. Outside of work, he enjoys spending time with his family, which often involves motorcycles, outdoor sports, movies, and music.