
Episode 7

AI and Cyber Compliance


About This Episode

Podcast Episode 7
October 1, 2024 - 45 mins

AI is bringing speed and scale never seen before. Some studies show that its output is the equivalent of what 35-40 humans can produce. This speed is being applied to countless use cases across just about every economic sector. Cybersecurity compliance is laden with repetitive, redundant, and time-consuming manual tasks. While humans bring nuanced ingenuity and problem-solving capabilities, we are prone to errors, especially across such repetitive, redundant, and time-consuming tasks. Worse, cybersecurity compliance requirements are far from standardized, though there is a tremendous amount of overlap. In these circumstances, humans take shortcuts. It’s not a matter of whether shortcuts result in errors, only how many. The real power of AI in the world of cybersecurity compliance is the ability to bridge gaps in compliance documentation with minimal to no errors. Furthermore, AI can then be trained to leverage compliance documentation to write code and perform actual tasks within a system. In the world of cybersecurity, AI opens the door to a world in which security truly is baked in from the beginning.

Today’s guest is Nic Chaillan, technology entrepreneur, software developer, cyber expert, and inventor. He has over 23 years of domestic and international experience with strong technical and subject matter expertise in cybersecurity, software development, product innovation, governance, risk management, and compliance. Specifically, these fields include cloud computing, cybersecurity, DevSecOps, big data, multi-touch, mobile, IoT, mixed reality, VR, and wearables.


Podcast use is subject to Kratos Terms.


Episode Transcript

Cole French:

AI, that’s what everyone’s talking about these days. While it’s actually been around for quite some time, its presence has exploded of late. It might be the most heavily used buzzword, and feelings about it range from excitement to fear. You won’t want to miss this conversation, where we discuss real-world uses of the game changer that is AI.

Welcome to the Cyber Compliance and Beyond podcast, a Kratos podcast that brings clarity to compliance, helping you leverage compliance as a tool to drive your business’s ability to compete in any market. I’m your host, Cole French. Kratos is a leading cybersecurity compliance advisory and assessment organization, providing services to both government and commercial clients across varying sectors, including defense, space, satellite, financial services, and healthcare. Now let’s get to today’s episode and help you move cybersecurity forward.

It feels like AI is in every conversation these days, and for good reason. It is a game changer. Like a lot of game changers before it, AI and its close companion, machine learning, are not new. They’ve been around for quite some time. Sometimes game changers explode overnight, while others simmer steadily and are sparked by a tipping point. We’re watching AI’s tipping point unfold. The scale and velocity of AI’s capabilities are truly staggering.

In one real-world example you’ll hear about on today’s episode, AI was leveraged to complete the work of 35 people in less time than it would have taken those 35 people themselves. This is just one of countless use cases, but just about anything that’s reached and exceeded a tipping point is accompanied by debates about the good and the bad. For AI, the range of debate is vast: from data ownership and access, to the impact on the job security of humans, to the ethics surrounding its rapidly growing use cases.

There’s the good too though, and joining us on today’s episode is Nic Chaillan. Nic is a technology entrepreneur, software developer, cyber expert, and a mentor who’s worked with cloud computing, cybersecurity, DevSecOps, big data, multi-touch, mobile, IoT, mixed reality, VR, and wearables, just to name a few. Most recently, Nic founded Ask Sage, bringing generative AI capabilities to government. Nic and I wade into the shallow parts of AI’s dark side, but we focus primarily on the power of AI in the world of cybersecurity compliance. We hope you enjoy this episode.

Nic, thank you for joining us here on the Cyber Compliance and Beyond podcast. I’m really excited for today’s conversation. AI seems to be a part of every other conversation these days, and while AI isn’t actually new, it has surpassed the level of sophistication necessary to be a major disruptive force. So the question is, and I think what we’ll discuss today is, how can we positively harness the power of AI? So we’ll just kind of start with AI in general. So what are your thoughts on AI, the state of AI, where AI is heading? Just any of your thoughts, your perspective, if you could share that with us.

Nic Chaillan:

Thank you for having me. It is very interesting, right? Because when you see the pace of AI, it’s something pretty crazy. People have a tough time keeping up. A lot of people are dismissing it, and I think that’s a mistake. I think it’s already proving that it’s going to change the way we do business, and particularly in cybersecurity and compliance it’s going to be a must, to be able to compete and move at a pace of relevance, and to be able to fight back and push back against all the new types of attacks we’re going to see malicious actors use against us using AI.

And so generative AI in particular is pretty crazy. What you see with the latest model that came out from OpenAI, the o1 model, which came out a week before this recording, it has been pretty insane to see what you can do with this new model, in offense and defense use cases, playing capture the flag, and a lot of cyber research, and finding ways to not just discover issues but also get access and a foothold into systems. So it’s been pretty scary, I’m not going to lie. And so anyone not paying attention and anyone not putting a lot of energy into understanding what you can do with this is going to be behind pretty quick.

Cole French:

I can definitely see why that would be the case. I feel like, I mean, some of the stuff you just mentioned makes me feel like I’m behind already, because there were just things that I wasn’t even aware of, particularly that a new model had come out recently.

I did want to pull the string on one thing that you said. You mentioned some people want to dismiss AI. Why do you think people are wanting to dismiss it? What do you think are the reasons behind that?

Nic Chaillan:

I think the biggest reason is fear of the unknown. People prefer to put their head in the sand. And also, when your job is directly impacted and potentially replaced entirely by some technology, the first reaction is going to be to dismiss it, because it seems impossible until it happens. Right? And so unlike the automotive industry, where that kind of displacement took years because of the cost of the robotics, in this use case, you’re talking a 30-bucks-a-month plan on something like ChatGPT or Ask Sage or whatever, and you end up with tremendous augmentation of velocity. The average we see on our stack is a 35X increase, so one person turning into 35 people. You can imagine that by definition there’s going to be impact on jobs, there’s going to be impact on velocity, there’s going to be impact on people that don’t use it to compete. I would argue it’s probably impossible to compete without it. Of course I’m biased, but I think it’s still true.

And at the end of the day, what you’re going to see is two, maybe three sides, right? Maybe even four. The first side is people dismissing it and pretending it’s not happening; those will pay the big price. Then you have people trying it for five minutes, and they give up when it gets a little bit harder and they don’t get the outcomes they’re seeking, and they dismiss it, and that’s not good enough. And really, when people come talk to me and say, “Hey, I tried to do X, Y and Z, and it didn’t work,” I tell them, “Well, blame yourself. You didn’t do it right.” And so that’s two, right? And hopefully they go back at it and try a little bit harder and don’t give up, and learn, and spend less time on TikTok and maybe watch more training videos on GenAI. And then three, there are the kind of people that are using it on a day-to-day basis, but for more basic tasks, and they’re learning slowly but surely, and that’s fine. Right? And then you have the experts that really take it to what the next frontier is going to be.

When I started, we found limitations in the technology, and slowly but surely we were able to push past them and do more with it, and kind of overcome limitations that most people thought would be a deal breaker for the technology. And so people that think outside the box and find a way to scale and bring more capabilities with the technology are going to be where you want to be.

Cole French:

Yeah, absolutely. We’re actually working right now, we have a working group set up, to grapple with some of those exact things you just described: how could we use this? How could we leverage this? What would be the benefit? Trying to think outside the box. And I will say, that can definitely be challenging, to think outside the box. And I definitely want to talk more about that and specific use cases, but one thing I want to sort of dive back to. You’d mentioned that the new AI model was kind of scary. So I’m just curious, what makes you say it’s kind of scary? Is that a good or a bad scary?

Nic Chaillan:

Well, it’s both, I guess.

Cole French:

It’s a little bit of both? Yeah, that makes sense.

Nic Chaillan:

Well, like any good technology, there is going to be malicious use of it, right?

Cole French:

Absolutely.

Nic Chaillan:

And so when you look at o1, the big difference between GPT-4o and o1 is the fact that the model is thinking and reflecting before giving an answer. So it’s a little bit slower, but it gets to much better outcomes, better than PhDs in many fields. In coding, it’s way superior to human coders. And so in cyber offense and defense, we find that the bot is going to be able to find more issues than any humans combined. And so I think it’s pretty scary.

Obviously, malicious actors are going to start using this stuff and find a way to use it for offense use cases, versus us trying to keep up on the defensive side and having a tough time. Without it, I don’t think you can keep up. And so it’s kind of good and bad, right? But the velocity increase of people with this technology is going to be pretty mind-boggling. I think we’re going to get close to 50X, 5-0.

Cole French:

Yeah, you mentioned 35X. I’m just curious if you have a sort of real-world example. I think we hear 35X and can say, okay, that’s the equivalent of 35 people. But any real-world examples you’ve seen out there? When you say there’s a 35X velocity, what does that look like? What’s a real-world example people could tie that back to?

Nic Chaillan:

Yeah, that data is based on our customers’ velocity increases that we measured. We have 14,000 government teams and 2,500 companies using our product today. And so it’s the average, it’s not even the best. Right? And so if you look at our team here at Ask Sage, we estimate we would’ve needed 42 developers to do what we built with Ask Sage on the coding side, instead of two people. And we’re closer to 42X velocity now; it started at 10, then 20, and then obviously we’re getting better. And we’re probably the most mature team there is, or one of the most mature teams there is, at using the technology to automate what we do, across the entire life cycle of software development and cybersecurity as well.

And so 90% of Ask Sage is built by Ask Sage. Our entire authority to operate package for FedRAMP High and DoD IL5 was created by Ask Sage, with our product ATO In a Box, in two weeks for $2,500. Really, it took seven days to generate the package and a week to read it, with 97% accuracy. So we have examples in every field there is; for any non-blue-collar job I can give you numbers, backed by real data, real customers, real outcomes.

Cole French:

That’s pretty incredible. So when you say that it took a week to read through all of the material, was that a human reading through the output of Ask Sage?

Nic Chaillan:

Yeah, exactly.

Cole French:

Yeah, okay, okay. And 97% accuracy?

Nic Chaillan:

Yeah, and honestly it’s because we forgot to train it up on backups, and so it made stuff up on the backup side. It stated we were doing backups once a day, when we were actually doing them way more often than that. And so it was wrong, but it’s just because we forgot to train it. So it’s not even the AI’s fault, really.

Cole French:

Yeah, makes sense. And before, I definitely want to ask and talk more about Ask Sage and some of those use cases, especially as it relates to security compliance. I know here at Kratos, we focus primarily on security compliance, as you mentioned, FedRAMP High. We also do CMMC and some other frameworks here and there as they come up, and definitely want to talk about how AI in general, Ask Sage in particular, helps out with compliance.

But before jumping into that, I just wanted to ask if there are any other areas. So I really think AI is a major disruptive force for good when it comes to security compliance. I think we’re still figuring out what that is, and maybe you think we’re further along than most, because you’re working with it on a daily basis. But just curious, are there any other areas where you see AI as a major disruptor in a good way?

Nic Chaillan:

Well, before I answer that: you’re not just further along. We picked the Kratos team initially because of your willingness to think outside the box, and you helped us get our DoD IL5 and FedRAMP High ATOs. And we did it in seven months from start to finish, which is pretty unheard of. But more importantly, your team was not scared of using, or at least consuming, GenAI documentation. And so that’s been a breath of fresh air, and that’s why we work with you guys.

But in terms of other outcomes, I guess, it’s pretty incredible, right? Because you can pick any non-blue-collar job and I can tell you different stories, but take contracting or acquisition: we respond to government bids like a lot of companies do. And for Ask Sage, we went from five days to 32 minutes to respond to a bid now. Right? So again, kind of mind-boggling. It’s able to be trained on all the Ask Sage past performance, and it’s able to write the response accordingly and follow the government template. We built a product that’s able to actually directly integrate with and generate Word and Excel documents, and so it’s able to accurately, iteratively use GenAI to fill sections of the document itself, including filling tables and all the nightmare of the system security plan for FedRAMP High and all that stuff, with all the tables and checkboxes, and checking the control origination and implementation details. And the bot is able to iteratively fill each of these checkboxes and tables without tampering with the design of the Word document, and save it as if a human had filled it. So, pretty mind-boggling.

But if you look at contracting, acquisition, coding: every new feature of Ask Sage is built by Ask Sage. For example, we have a complete integration into GitHub, and if a customer has a feature request, they can go create a new ticket, and then the bot is going to take the description of the ticket and improve it with better descriptions, because they usually don’t know how Ask Sage is architected. So the bot is going to look at all the microservices we have, and it’s going to be able to update the description accordingly. And then once a human marks it ready to code, the bot is going to write 90% of the code. A human is going to go tweak it, fix it, whatever, and then ask it to do a pull request, and the bot is going to be able to create a pull request in GitHub, and the bot is going to create all the unit tests and integration tests that go with that as well. And so it’s kind of a mind-boggling game changer to increase the velocity of teams.
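
To make the shape of that ticket-to-pull-request loop concrete, here is a minimal sketch of how such a flow could be wired up. To be clear, this is not Ask Sage’s actual implementation: the ask_sage_query helper, repository name, and prompt wording are hypothetical stand-ins; only the GitHub REST endpoints are real.

```python
# Hypothetical sketch of a ticket-to-PR loop like the one described
# above. ask_sage_query() is a stand-in for an LLM call, not Ask Sage's
# real API; the GitHub REST endpoints are the real v3 API.
import requests

GITHUB_API = "https://api.github.com"
REPO = "my-org/my-service"  # assumption: your repository
HEADERS = {
    "Authorization": "Bearer <token>",
    "Accept": "application/vnd.github+json",
}

def ask_sage_query(prompt: str) -> str:
    """Stand-in for an LLM call; swap in your provider's client."""
    raise NotImplementedError

def enrich_ticket(issue_number: int) -> None:
    """Rewrite a customer ticket with architecture-aware detail."""
    url = f"{GITHUB_API}/repos/{REPO}/issues/{issue_number}"
    issue = requests.get(url, headers=HEADERS).json()
    improved = ask_sage_query(
        "Rewrite this feature request against our microservice layout, "
        "naming the services it touches:\n" + issue["body"])
    requests.patch(url, headers=HEADERS, json={"body": improved})

def open_pull_request(branch: str, title: str, body: str) -> str:
    """Open a PR once the generated code (plus human fixes) is pushed."""
    pr = requests.post(
        f"{GITHUB_API}/repos/{REPO}/pulls", headers=HEADERS,
        json={"title": title, "head": branch, "base": "main", "body": body},
    ).json()
    return pr["html_url"]
```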

We also have teams that use it to modernize existing code bases from one language to another, and 90% of the code, right off the bat, can be converted from legacy to a new, cloud-native language or whatever. Yeah, I mean, I could give you 20 examples.

Cole French:

If I’m not mistaken though, the example you mentioned of contracting, I mean, it is a very tedious process, and of course, no two RFPs or solicitations are the same, even though they’re very similar. So it takes out some of that manual process, which is prone to errors by the way, because humans like to find shortcuts, but shortcuts don’t apply to every single case. So you apply those shortcuts, but you have to make sure you go back. So, obviously I can see the benefit there, but I’m also thinking of the other side of it, the government side, reviewing all of those submissions. I’m assuming that AI is a tool that can be used on that side of the aisle as well, to review all these submissions. Sort of condense the information down into what the folks that are reviewing these contracts are going to be looking for. Is that right?

Nic Chaillan:

Yeah, not only that, we have what we call Acquisition in a Box, which really helps the government create the entire lifecycle of acquisition, from writing the RFIs, RFPs, scope of work, and requirements, all the way down to down-selection and grading bids received and comparing value propositions and all that stuff, and giving recommendations to the human to decide who to award. So it’s the entire lifecycle; it’s pretty mind-boggling.

Cole French:

That is pretty mind-boggling. When you mentioned the code, writing 90% of the code and that sort of 10% being the human element or the human piece of it, where does AI factor in when it comes to testing? Does the testing fall on the human side of the 10%, or can AI be leveraged on the testing side as well?

Nic Chaillan:

The AI usually is going to be capable of writing all the tests, whether it’s unit tests or application tests, and then it’s also able to update documentation. So for us, every night we have a cron job that goes and updates all documentation according to all the changes made in code, and creates all that stuff in code, completely seamlessly. So we don’t write any comments or any of that stuff anymore. The bot is going to be able to do that by itself with 100% accuracy.

Cole French:

Without going into all the technical details, I mean, how does that work? Is the bot reading through the code? How does the bot know based on reading through the code?

Nic Chaillan:

Yeah, the bot reads the code and updates the code, adds comments, and then it can create the right formatting based on the language; if it’s Python, it’s going to write the right pydocs, and then we use a tool to create the documentation automatically. So, it is pretty insane.
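
As a rough illustration of what a nightly docs job in that spirit might look like, here is a minimal sketch. The ask_model helper is a hypothetical placeholder for whatever model endpoint is used; the file handling is ordinary Python, and a real pipeline would route the rewrites through review rather than writing in place.

```python
# Hypothetical sketch of a nightly "update docs from code" cron job.
# ask_model() is a placeholder for an LLM call; a real pipeline would
# send the rewrites through code review instead of writing in place.
from pathlib import Path

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM call; swap in your provider's client."""
    raise NotImplementedError

def refresh_docs(repo_root: str) -> None:
    """Ask the model to refresh comments and pydoc-style docstrings."""
    for source in Path(repo_root).rglob("*.py"):
        original = source.read_text()
        updated = ask_model(
            "Update the comments and docstrings in this Python module so "
            "they match what the code actually does. Return the full "
            "module with the executable code unchanged:\n" + original)
        if updated.strip() and updated != original:
            source.write_text(updated)

if __name__ == "__main__":
    refresh_docs(".")  # run nightly from the repo root, e.g. via cron
```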

Cole French:

That is pretty insane. And knowing, as a compliance professional working not just in compliance but in the cybersecurity space for a long time, I mean, the main issue we run into constantly, no matter what it is, whether it’s software development or other areas of compliance, is documentation. Right?

Nic Chaillan:

Right.

Cole French:

We do X, Y, and Z, but we don’t have anything written down that states or describes how we do X, Y and Z. It’s a major game changer.

Nic Chaillan:

Yeah, policy and procedures, right? All the federal policy and procedures. I mean, we got these done in seven hours, 20 or 27 policies or something, between the data spill incident policy, change management policy, disaster recovery, incident response, you name it. Kind of a game changer to write all these, and it’s completely tailored to the use case and the kind of hosting and tools you’re using. So it knows, oh, you’re on Azure or whatever, or Amazon or whatnot, and then it’s going to be able to customize the policy accordingly. It’s kind of insane.

Cole French:

So, absolutely, sounds insane. So talking a little bit more about that. So if I have a system and I want to leverage Ask Sage, in this case to generate all my documentation, does Ask Sage just take any inputs, as far as my system configuration is concerned? How do I build the model or teach Ask Sage so that it knows what to generate on behalf of my system?

Nic Chaillan:

Yeah, there are different ways to do it. It depends on what you already have or not, right? And what you’re doing. When I did it for Ask Sage itself, we had nothing, because we were just getting started. So what I did is I took the NIST 800-171 controls for CMMC, and I wrote implementation details, one or two lines per control, myself by hand.

It took me about seven hours to do it by hand, by myself. Enough detail to be dangerous but not too much, just explaining, okay, we’re on Azure Gov, we’re running on Kubernetes with AKS and blah, blah, blah. What kind of cyber tools we’re using, and all that stuff, how we do DevSecOps, and kind of the lay of the land of the tools we’re using and processes and stuff like that. And then we used that to train Ask Sage to then generate the NIST 800-53 controls, all 1,200 of them; even if we didn’t need them all, we did all 1,200. We read those and saw what was wrong, and then used that for the final training, and then it could generate everything, because at that point you have everything you need.
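
Here is a minimal sketch of that bootstrap, under the assumption that the hand-written notes live in a simple dict and that generate is a placeholder for the model call; the control IDs and note wording are illustrative, and the drafts still go to a human for the review-and-retrain pass Nic describes.

```python
# Hypothetical sketch of the bootstrap described above: short
# hand-written implementation notes ground the model, which drafts a
# fuller narrative per NIST 800-53 control for human review.
def generate(prompt: str) -> str:
    """Stand-in for an LLM call; swap in your provider's client."""
    raise NotImplementedError

# Assumption: one or two hand-written lines per control, like Nic's.
seed_notes = {
    "AC-2": "Accounts managed in Azure AD Gov; access via RBAC groups.",
    "AU-2": "Audit logs from AKS and Azure Monitor, retained one year.",
}

def draft_narratives(control_ids: list[str]) -> dict[str, str]:
    """Draft an implementation statement per control for human review."""
    context = "\n".join(f"{cid}: {note}" for cid, note in seed_notes.items())
    drafts = {}
    for cid in control_ids:
        drafts[cid] = generate(
            f"Environment notes:\n{context}\n\n"
            f"Write the implementation details for NIST 800-53 control "
            f"{cid}, staying consistent with the notes above, and flag "
            f"anything you cannot ground in them.")
    return drafts

# Humans then read the drafts, correct errors, and the corrected set
# becomes the final training data, as described above.
```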

So that was pretty amazing. It took maybe two days of work. We have a pretty complex stack; it’s not the most complex on the planet, but it’s not easy. And we were able to do the full package, including the continuous monitoring side of the house, where we use Ask Sage to tap the Azure APIs and query our CVE scanners and SBOMs and Microsoft Defender and all that, to get the export to be able to send it to DoD and to FedRAMP. So, pretty cool.
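
The continuous-monitoring export could follow the same pattern; here is a heavily hedged sketch in which every fetch function is a placeholder for the relevant real API (Azure, a CVE scanner, Microsoft Defender), and the column names are assumptions rather than the actual DoD or FedRAMP deliverable format.

```python
# Hypothetical sketch of a continuous-monitoring export: pull findings
# from each source, merge, and write the monthly deliverable. The
# fetch_* functions are placeholders for the relevant real APIs, and
# the CSV columns are illustrative, not the actual FedRAMP template.
import csv
import datetime

def fetch_cve_findings() -> list[dict]:
    raise NotImplementedError  # placeholder: query your CVE scanner

def fetch_defender_alerts() -> list[dict]:
    raise NotImplementedError  # placeholder: query Microsoft Defender

def export_conmon(path: str) -> None:
    """Merge findings from all sources into one monthly export."""
    rows = fetch_cve_findings() + fetch_defender_alerts()
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["asset", "finding", "severity", "status"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    export_conmon(f"conmon-{datetime.date.today():%Y-%m}.csv")
```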

Cole French:

So in a lot of ways what I’m hearing is, it’s really an end-to-end solution, right?

Nic Chaillan:

Yeah, there are a few things it can’t really do well, like diagrams. It can do some diagrams, but not to the degree of what FedRAMP wants to see. So I guess there are still a few gaps here and there on the diagram side. All my asset inventory is done automatically though. I mean, we’re containerized here, so we don’t have any VMs, we run all on containers. We have pretty advanced cyber practices: we kill containers every four hours to go back to an immutable state. We use Chainguard container images, so we have pretty much no CVEs, and we fix stuff within four hours and go back to an immutable state, and all the good stuff we do with zero trust and all that. So we’re pretty advanced in cyber, which helps, having all that baked-in cybersecurity instead of bolted on after the fact. So really, a game changer to have the right cyber practices.

But what’s exciting is we are working in the next couple of months on starting an AI marketplace, and it’s not just for AI products. It’s any product using some level of AI, which is everybody nowadays. And so we’re going to be able to add companies, and startups particularly, that are trying to get started in DoD and FedCiv and the defense industrial base and so on, into all our existing tenants, whether it’s FedRAMP High or DoD IL5. We’re going to go Secret, Top Secret. And then of course, we have dedicated tenants on a per-customer basis in the Navy, Air Force, and Army, and some contractors and so on. And so people will be able to partner with us to be added to those environments and then inherit all our cyber controls, and we will use our product to do a certificate to field and update the ATO package accordingly for each new application we’re adding to the stack. And then use you guys to do all the 3PAO pen tests and all that.

And so it’s going to be a pretty turnkey offering for companies that are trying to get started with the US government nightmare of paperwork, to be able to streamline it and get access to ATOs, but also get access to an existing customer base and contract vehicles so they can start selling right away.

Cole French:

So one thing you talked about when you originally described how Ask Sage was set up: you started by writing the SSP against 800-171, very simplistically. You mentioned describing the stack in a very simplistic way. So I’m assuming that based on that description, the model had information on all those different aspects of the stack, and that’s where it pulled some of that additional information to enrich what you had already written. Is that an accurate description, or does it work a little bit differently?

Nic Chaillan:

Yeah, absolutely. That’s the beauty of these models: they’re trained on everything under the sun. So all these tools, all these cyber tools you may name, whether they’re open tools or commercial products from whatever cloud provider or whatever, it’s going to know about them. So you don’t have to explain what each of these products does, it already knows. So you can just share that you’re using it and how you’re using it and whatnot, and then it’s going to be able to extrapolate and make it way more verbose. And I don’t have the patience, I don’t know how you guys do it, but when I see some of these controls being so redundant and annoying and, quite honestly, often useless, I don’t have the patience to write it myself. So I’m pretty glad I have an AI to do it for me.

Cole French:

Absolutely, yeah. I mean, that’s one of the things in some of my dabbling and research into using AI on the compliance side: how can I use it to generate some of this boilerplate, filler-type information that is kind of repetitive? In some cases it doesn’t even necessarily speak to the specifics of something, but just speaks to the concept of it. Definitely useful and helpful to have something that can generate that kind of stuff, because it’s kind of repetitive, redundant, and yeah, in some cases you could even argue, useless.

Nic Chaillan:

Yeah, I mean, with all these policies, there’s a lot of paperwork that people will probably read once and forget, right? But what’s exciting here is you can give it an example of a good one or whatever, and then it’s able to extrapolate and fix it according to your use case and how you’re doing things and how you host it, and kind of your team size and the size of the company and whatnot. And so you can really save hours of writing all this stuff by just using GenAI.

And the way we built it is so amazing that the bot is actually writing its own prompts, so you don’t have to type anything. The way we built ATO In a Box, for example, the bot will create a table of contents for these policies that don’t have a template mandated by the government, but where you still need to have the policy. And so we have a table of contents created first by the bot, based on the document template and what we want to have in there. And then, iteratively, it’s going to write a prompt per section, and then it’s going to respond to each prompt and write each section. And then it’s going to give you a Word document to download, ready to go with your template header and all that, and it’s good to go. It’s kind of amazing.
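
A minimal sketch of that outline-then-sections pattern follows, using the python-docx library to assemble the Word file; the ask_model calls, the prompts, and the example stack summary are hypothetical stand-ins, not ATO In a Box’s actual prompts.

```python
# Hypothetical sketch of the "bot writes its own prompts" pattern:
# draft a table of contents, then one prompt per section, then
# assemble a Word document. python-docx is real; ask_model() is a
# stand-in for an LLM call.
from docx import Document  # pip install python-docx

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM call; swap in your provider's client."""
    raise NotImplementedError

def build_policy(title: str, stack_summary: str, out_path: str) -> None:
    # Step 1: the model proposes an outline, one section title per line.
    outline = ask_model(
        f"Propose a table of contents for a '{title}' policy for this "
        f"environment:\n{stack_summary}").splitlines()

    # Step 2: one prompt per section, answered and written iteratively.
    doc = Document()
    doc.add_heading(title, level=0)
    for section in filter(None, (s.strip() for s in outline)):
        doc.add_heading(section, level=1)
        doc.add_paragraph(ask_model(
            f"Write the '{section}' section of the {title} policy, "
            f"tailored to:\n{stack_summary}"))
    doc.save(out_path)

if __name__ == "__main__":
    build_policy("Incident Response",
                 "Azure Gov, AKS, GitHub Actions CI/CD",  # assumption
                 "incident_response.docx")
```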

Cole French:

That is amazing. And I assume, so it’s coming up with those prompts, which are basically the different control requirements in a particular policy framework, whatever it might be, and then the outputs of those prompts are the actual bot’s understanding of the system?

Nic Chaillan:

And then for policies also, like data spill and all that stuff, it’s going to say, “Okay, you’re using GitHub, you’re using GitHub Actions for your CI/CD pipeline, you’re using this and that.” So it’s going to be able to create a kind of turnkey, customized, best-practice incident response/data spill incident policy so that you don’t have to make it up. The bot’s going to tell you what you should be doing with the tools you have to get it done.

Cole French:

Yeah, that’s pretty incredible. There is one question that I have on this, and it’s maybe, I guess, a danger I see, or maybe the other side; we talked about the scary part of it earlier. And maybe this isn’t really a scary thing. This is just kind of a, we have this mentality within compliance that we call the check-the-box mentality, where it’s like, okay, I’ve got to do this. All right, check that box, check that box, and then I get my ATO. But then the question begged at the end is: did I actually enhance security? Am I actually doing security?

So, do you think that some of these capabilities of AI, where it can almost step in and do a lot of this stuff for us, do you see that as potentially furthering a check-the-box mentality, where I’m just looking for my AI tool to output all this for me and I don’t really understand my system and how it works? Or do you think it supports ongoing security and moving away from a check-the-box mentality?

Nic Chaillan:

Well, I don’t think any of the ATO process will really improve your security if you’re just trying to get away with checking the box; you can be compliant but not secure. But it’s different if you’re using GenAI as maybe a cyber expert, a chief of staff, someone to guide you throughout your startup journey or whatever the case may be, and you reflect and bounce ideas off it. For example, if you give it code, we have a plugin on Ask Sage to be able to look for performance improvements and cybersecurity issues and to comment code and all that, directly, automatically. I can tell you it’s able to find code that’s malicious in nature that most static and dynamic code analysis scanners would not find. Meaning, good code, but just malicious in nature. So it’s passing all the scans, but GenAI would find it to be malicious in nature.

So again, I think if you use it right and you know what to ask and how to use it, you can really get to real, meaningful cyber outcomes, particularly when it comes to architecting your stack. And if you’re lucky enough to start fresh and have the ability to build it right from the get-go, you can ask the right questions about how you’re going to architect all this stuff, and bounce ideas off the AI and have it tell you what it would recommend: “Hey, okay, give me five courses of action to achieve X, Y, and Z outcome, based on my tech stack.” That is a game changer. That’s how we make every decision here, whether it’s raising money, all the way to hiring people; we always do it assisted by GenAI and augmented by GenAI. That’s the only way to make those decisions, in my opinion, because really, if you ask it right and you bounce ideas off it, it’s going to give you a lot of insight that you may not be thinking about.

Cole French:

Yeah, that makes sense, and I agree that good security is good security. Compliance is going to get you good security only to the degree that you do it well. But yeah, the check-the-box mentality in compliance, I agree. Doing security well and doing compliance well are not necessarily the same thing.

Nic Chaillan:

I mean, the simple fact is that NIST 800-53 controls are to this day really not designed for zero trust, real zero trust, and I was part of the team that created it eight, nine years ago now. I can tell you that this is a joke. I mean, a lot of the stuff is actually pushing the wrong behavior, just like we often do in acquisition, trying to prevent malicious outcomes. Because a couple of idiots decided to abuse the taxpayer’s money and scam the government, we treat all employees in the government as if they’d try to do the same and become bad apples. And now you have all these rules and policies effectively, probably, costing the taxpayer 50 cents on every dollar spent on useless compliance. Right? The remedy is worse than the problem.

And you see that also in cyber, right? Particularly when you continue to see controls centered around boundary controls and perimeter defense, not zero trust, and teams still not understanding cloud-native containers. When we explained to the team at FedRAMP that we don’t have VMs and we only had two CVEs for the entire stack, 25 containers, the first reaction was, well, you’re not scanning stuff right. When really, it wasn’t that. It was just that we did a good job fixing 1,400 CVEs.

Cole French:

Yeah, I mean, you mentioned building security in. That makes me think of something you said earlier.

Nic Chaillan:

Baked in, baked in is a key term, not bolted on. That’s right.

Cole French:

Exactly, so baked in. So I’m curious then, obviously for any system, that’s just good practice, right? To build security in and bake it into whatever product you’re building?

Nic Chaillan:

Yeah, I mean, they cut corners, right? They’re going to try to move fast, and honestly, they don’t really move faster. I would rather have a 10% tax on my speed from the get-go for cyber and best practices in cyber, than have two years of tech debt six months down the road because I didn’t do it right. So, I think it’s short-sighted.

Cole French:

Yeah, and I think the tech debt thing too. The tech debt only grows, because once you start with it and you’re already operationalized, now you have competing objectives, right? You want to keep going forward, but you have these things that you didn’t take care of, and it’s really hard to go back. But I’m curious, in your view, would AI be helpful in trying to bolt security onto a system? Do you see that as a potential benefit and use case?

Nic Chaillan:

No, I mean, it’s huge. I think what you’re going to find, right, is that at the end of the day, if you ask the right questions, the AI is going to proactively make it happen. And we created personas on Ask Sage to be able to have those kinds of behaviors baked in, so the developer, for example, doesn’t have to ask about XSS injections or SQL injections for the bot to pay attention to them. It’s baked into the prompt on the persona side, and so it’s already there. And if you do it that way, your development teams don’t have to think about it. And then if you use that in your CI/CD pipeline, the bot is going to self-reflect on the code and make improvements organically and by itself as well. So, it’s kind of a game changer too.
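
Here is a minimal sketch of how a persona like that might sit in a CI pipeline, assuming the review helper is a placeholder for the model call and the persona wording is illustrative; the point is that the secure-coding checks live in the system prompt, not in anything the developer types.

```python
# Hypothetical sketch of a security-reviewer "persona" as a CI step:
# the secure-coding checks live in the system prompt, so developers
# never have to ask for them. review() is a stand-in for an LLM call.
import subprocess
import sys

PERSONA = (
    "You are a secure-code reviewer. On every diff, check for XSS, "
    "SQL injection, hard-coded secrets, and unsafe deserialization, "
    "and suggest performance improvements. List findings, or reply "
    "'PASS' if there are none.")

def review(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for an LLM call; swap in your provider's client."""
    raise NotImplementedError

if __name__ == "__main__":
    diff = subprocess.run(
        ["git", "diff", "origin/main...HEAD"],
        capture_output=True, text=True).stdout
    findings = review(PERSONA, "Review this diff:\n" + diff)
    print(findings)
    sys.exit(0 if findings.strip() == "PASS" else 1)  # fail the build
```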

Cole French:

Yeah, no, I can definitely see how it’s a game changer on both sides. If you build it from the ground up using and involving AI, it drastically speeds up the process and improves the accuracy. But then with bolting it on, kind of the tech debt problem we mentioned, I can see how it can be used to help bridge that gap in a way that human interaction alone would have a very difficult time achieving.

Nic Chaillan:

And it also creates a second or third pair of eyes on code, looking at things in a different way too. Even as an entrepreneur, this is my 13th company, so it’s not my first rodeo, and yet it gave me ideas or directions that I didn’t think of myself. So, it is pretty cool.

Cole French:

Yeah, yeah, I can definitely see how it would bring about a different way of looking at something, or bring up things that you just don’t see. And I think I mentioned earlier, that’s something that I’ve been looking into using it for: how can I give it all kinds of information and have it condensed down for me? Because really, that’s the biggest thing, information overload. Just too many things to go over, too many things to review. My brain just isn’t capable at a certain point. So anything that can review and condense and give me the most important things.

Nic Chaillan:

And that’s the learning curve of everybody getting started, but honestly, that’s like step B or C of how far you can go. With agents nowadays, you can automate tasks, complex tasks, and chain things, and have either human-assisted or completely automated steps. I mean, the fact that we train everything about Ask Sage, for our company, into different datasets, so we can ask questions, and along the way keep training it up on every big change we’re making so that it can continue to give us good advice. It becomes kind of an assistant on steroids that knows everything about everything. It’s just completely, insanely powerful if you really do it right.

Cole French:

Yeah, no, I can definitely see that, and it’ll be interesting to see what directions this goes in. Although, I mean from this conversation and from even beforehand, I mean, I think it can go in just about any direction conceivable.

So as we kind of start to wrap up this conversation, just wanted to touch on, and you kind of talked about this actually at the beginning, kind of that learning curve. How does the learning curve factor into AI use and adoption? Can you use AI sort of without much knowledge? Can you use it pretty effectively, pretty broadly? Does it require a lot of expertise? What’s been your experience, in terms of the learning curve in AI?

Nic Chaillan:

Well, I think to get to the basics, like you were talking about like extracting and summarization and all these basic things, I think that’s fine, anyone can do it pretty quickly, right?

But when it comes to getting to what we’re doing, I think that’s where it gets a little bit more tricky. And we have a lot of training collateral on our website and on YouTube for free. You can just go on Asksage.ai and create an account for free, and get the videos and all that, about 12 hours of video content to learn from. So I recommend people watch it. And it’s free, so why not? And then of course, there are a bunch of courses online and different things, but it’s probably going to become the biggest skill you can ever get, at least for the next 10, 15 years. And so I think the key steps are understanding the different stages of growth: getting started maybe with some basic prompt engineering on basic questions like extraction, summarization, all that stuff, translation, coding. Then starting to add data and documents with some RAG and embeddings, like we do on Ask Sage with datasets. And it’s super easy to ingest PDF and Excel and whatnot into it.

And then the next step is connecting to databases to get real-time data and not just [inaudible] time files, right? So that’s next level. And then you go to the next level, which is consuming plugins that we’ve already built to automate tasks that already exist. And then the next step is to build your own plugins, to automate your own tasks, and use our plugin builder or whatever to create your own plugins, or use the API to automate tasks as well. So all that is kind of the journey.
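
As one concrete illustration of the RAG-and-embeddings stage in that journey, here is a minimal retrieval sketch; embed and ask_model are hypothetical placeholders for a provider’s embedding and chat endpoints, and a real system would chunk documents and cache the embeddings instead of recomputing them per question.

```python
# Hypothetical sketch of the RAG-and-embeddings stage: embed document
# chunks, retrieve the closest ones to the question, and ground the
# answer in them. embed() and ask_model() are provider stand-ins; a
# real system would cache chunk embeddings instead of recomputing.
import math

def embed(text: str) -> list[float]:
    """Stand-in for an embedding call."""
    raise NotImplementedError

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM call."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def answer(question: str, chunks: list[str], top_k: int = 3) -> str:
    """Ground the model's answer in the nearest document chunks."""
    q_vec = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q_vec),
                    reverse=True)
    context = "\n---\n".join(ranked[:top_k])
    return ask_model(
        f"Using only this context:\n{context}\n\nAnswer: {question}")
```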

If you do it, watch the videos, try hard, and don’t give up, I would say within three months you should get to a pretty decent velocity increase, probably 10 to 20X right there already. So, kind of insane.

Cole French:

That is insane. Is that developer level, or is that just everyday use?

Nic Chaillan:

Yeah, it’s probably advanced everyday use for pretty much every job there is that’s not a blue-collar job, I guess. But yeah, in three months you’re going to see real results, and then in six months you can probably become an expert at that point.

Cole French:

So Nic, you mentioned some training videos that folks could go check out. Could you just mention again where they might be able to find that? And of course, we’ll link to it in the show notes as well.

Nic Chaillan:

Yeah, so if they go on chat.asksage.ai and create an account, there’s going to be a welcome pop-up showing up when you register that has all our videos and all the links there, and you’ll find videos for every use case.

What’s interesting is, you should watch all of them. So for example, there is a human resources video on grading resumes. You may think, oh, I don’t care, I’m not in human resources and I don’t care about that. But what we’re teaching is the process of vetting or grading things. That can be applied to cyber, that can be applied to acquisition, that can be applied to anything. So it’s good to watch them all and just learn how to think about step-by-step processes, instead of doing it in a vacuum. We always try to find real-life examples to make it diversified for everybody, but they should all be watched, pretty much.

Cole French:

Yeah, I agree with that. I think that’s a good point you just made, and one of the challenges I’ve actually had myself in dabbling with AI is, I think we tend to look at it as, okay, I want it to just be this tool, this thing that solves a problem for me. And it can definitely be that at some point, but we still have to take a step back and remember how we think about some of these things. And so I like that you pointed out that really thinking about a particular thing is very important in taking that next step of, how can I leverage AI as a solution to potentially solve this thing? It’s typically a thought problem, it’s not necessarily a task. I mean, it might end up being a set of tasks, but it starts as a thought problem. So I think that’s something we have to keep in mind before jumping right into, “oh, I want to plug this thing in and use it.” Really think about it.

Nic Chaillan:

Yeah, what’s interesting with o1, the new OpenAI model, is that’s exactly what the new model does by itself. It’s going to self-reflect and think about decoupling every step, and defining every step that needs to be achieved to get to the desired outcome. So, it’s pretty cool.

Cole French:

That is pretty cool. So even some of the thinking parts of it sounds like we may not have to do-

Nic Chaillan:

Less and less, yes.

Cole French:

... less at some point either, which is, I guess when you said at the beginning, it is kind of scary. That is, that is kind of scary.

Nic Chaillan:

Yeah, that’s the big difference between the GPT-4o and the o1 model: thinking and self-organizing and reflecting on itself, and kind of asking itself questions to get to outcomes, and doing the whole phase-by-phase, step-by-step process by itself.

Cole French:

That is pretty incredible to think about, something that is not human being able to self-reflect. At least the way I’ve looked at it, it’s never really scared me in the sense that it’s going to take all these jobs and things like that. But if we do get into a situation where it’s self-reflecting and capable of sort of that human level of thought, which to this point computers have not been capable of, it will be really interesting to see where things can go from there.

Nic Chaillan:

Yeah, I mean, it’s quite a bit better than most PhDs today in their fields. And so if you give it a piece of code and the code has a bug or it’s not working when you try to run it, and you give it the error messages, it can fix its own mistake and fix its own code, I guess.

Cole French:

Yeah, that’s pretty incredible. Well, just before we close things up here, Nic, just wanted to give you one last opportunity here to give us a summary, I guess, of the benefits, the positives of AI in your view, and then some of the challenges that you see.

Nic Chaillan:

Yeah. Well, I think number one is, it’s probably the most impactful technology that I’ve seen in my entire career, and I’ve been doing this for 25 years, so that’s pretty insane. I was the Chief Software Officer for the Air Force and Space Force, with 100,000 people and $60 billion of funding spent on software a year. And clearly, I can tell you this is going to be a game changer for every organization, big or small.

China is not waiting for us, by the way. They already have their Beidou GPT deployed across the Chinese government, across classification levels. So it’s pretty scary as well. You’re going to see offensive use, and you already see it: just when you post any kind of job opening, you get hundreds of applications. I think I got 1,000 executive assistant applications on my EA job post, probably 90-plus percent of which were written by GenAI. We even see technical candidates now try to use GenAI during interviews to respond to questions they don’t know the answers to, which is good and bad. At least you should disclose it; don’t try to cheat the system.

And like with everything in life, there are ethics, and I’m not the thought police or the First Amendment police, I don’t get into that, but everybody has to think about what they feel comfortable with and pay attention to that kind of stuff. But at the end of the day, that’s the big stuff, and my take is, you don’t want to be left behind; you want to be ahead of the curve. You certainly want to be embracing it and not be the one missing the boat on this. And if you do miss it, it’s going to be very tough to find what’s next once a big chunk of jobs are getting replaced by technology.

Cole French:

Yeah, I think that’s a good perspective: you can choose to put your head in the sand, as you said at the beginning, or you can jump on this and be part of whatever develops out of it, right? If we want to see it go in a certain direction, then we need to be involved in it. And your point about China using AI, that could be a potential follow-on conversation to this: some of the scarier applications of AI within the cyber world that reach beyond what we’ve talked about today. We’ve kind of just touched on it briefly, really as it relates to compliance.

But Nic, I really appreciate you taking the time to talk about AI with us. I think our listeners will find this conversation incredibly valuable. I know I did. I learned a lot from this conversation and I really appreciate your perspective, and I hope we can do it again sometime soon.

Nic Chaillan:

Anytime. Thank you so much for having me, it was a lot of fun.

Cole French:

Thank you for joining us on the Cyber Compliance and Beyond podcast. We want to hear from you. What unanswered questions would you like us to tackle? Is there a topic you’d like us to discuss, or you just have some feedback for us? Let us know on LinkedIn and Twitter, at Kratos Defense or by email at ccbeyond@kratosdefense.com. We hope you’ll join us again for our next episode and until then, keep building security into the fabric of what you do.
