Weaponizing AI & How Government Can Defend

Tyler Podcast Episode 79, Transcript

Our Tyler Technologies podcast explores a wide range of complex, timely, and important issues facing communities and the public sector. Expect approachable tech talk mixed with insights from subject matter experts and a bit of fun. Host and Corporate Marketing Manager Beth Amann – and other guest hosts – highlight the people, places, and technology making a difference. Give the podcast a listen today and subscribe.

Episode Summary

We have entered an era of unknown outcomes where A.I. has become accessible to virtually anyone… including hackers. In this podcast, cybersecurity expert and renowned ethical hacker Erica Burgess discusses how A.I. is being used by cybercriminals to drive and launch attacks, and how cybersecurity professionals are using A.I. to defend against them. We take an in-depth look into how A.I. is being used to attack networks, the identification of socially driven A.I. attacks, defending against A.I.-driven attacks, and how to use A.I. systems safely.

Transcript

Erica Burgess: You can actually argue that an attacker kind of extends the functionality of a system for better or for worse. Right? So with the open ended nature of A.I., it's no different when we start to look at the future. You know, A.I. and hacking have been a great match, and that's only going to continue. So we're going to see more creative attacks and more frequent technical attacks due to the generative nature and offensive uses of A.I.

Beth Amann: From Tyler Technologies, it's the Tyler Tech podcast, where we talk about issues facing communities today and highlight the people, places, and technology making a difference. I'm Beth Amann. I'm the corporate marketing manager here at Tyler, and I appreciate you joining me for another episode of the Tyler podcast.

Today, we're joined by Erica Burgess, a cybersecurity expert and renowned ethical hacker. We'll take an in-depth look into how A.I. is being used to attack networks, the identification of socially driven A.I. attacks, defending against A.I.-driven attacks, and how to use A.I. systems safely.

Erica, welcome to the podcast.

Erica Burgess: Hi. Thank you for having me.

Beth Amann: Well, we are so lucky to have you back again. You were previously on the podcast in February talking about ethical hacking, and we will definitely link that in the show notes so people can go back and hear an in-depth explanation of what that was. But for those who have not listened or for those who need a reminder, I'm hoping you can remind us quickly about what ethical hacking is.

Erica Burgess: Sure. An ethical hacker is someone who does all the same hacking techniques as a regular criminal, but it's to test a system. So penetration testing, essentially. And we like to say the big difference between us, the ethical hackers, and criminals is that we write reports about what we find and criminals don't. Most importantly, we have permission to do it, and we follow a very careful, legally defined scope. So for example, I may compromise a server, but I stop there to show that I gained access, whereas an attacker might install malware or explore the rest of the network or pull down sensitive data.

So, we stop short of just what an actual attacker would do. Examples of ethical hackers would be roles such as bug bounty hunters, penetration testers, red teamers, and security researchers, and I've actually done all of those roles. And so in order to emulate what a real attacker does, we, the ethical hackers, stay ahead of new research for exploits that come out every single day. And lately, that's included a lot of A.I.

But if you're curious for more, we have that link to the previous podcast in the description as well.

Beth Amann: Yeah. That is so neat to think about. I loved the description of "we write reports" as ethical hackers versus unethical hackers, just thinking of how you're doing all the same things and have the same skills, but you're doing it for good to help companies understand the vulnerabilities in their systems.

I think it's really an interesting side of technology that I've never interfaced with and find it fascinating. So I definitely encourage folks to go and listen to that episode.

But we are here, you mentioned it, to talk about A.I. And so, first, I wanna dive in a little bit to the government specific side of things. We know that governments need to be more alert than private sector organizations.

It seems that targeting a public sector organization would involve much higher stakes because of the critical systems they supply to our residents. So can you speak a little bit about the risk for governments versus the risk to private sector businesses when it comes to cyber attacks, and the differences in the stakes for each?

Erica Burgess: Yeah. You nailed it.

I mean, public sector has always been a big attack surface for hackers. You know, attackers know that town halls and other public infrastructure may not have the resources to respond to a cyber attack or to keep systems up to date. So people ask me, well, why would they attack a system that just keeps our deeds and voting records? You know, that's just public information anyway. What does it matter?

Well, consider that they can install ransomware if they control that server even if there isn't sensitive information on it. So no matter what your system does day to day, they can halt business processes. Like you mentioned, those are usually important processes.

So even more directly, if your work involves printing checks or anything related to payment systems as part of the public sector, you're just providing motivation for them to attack and access that functionality for fraud or other financially motivated attacks. And, of course, the last one that I like to point out is that there's always a political or ideological angle to some of these attacks. So if your government has a public facing website, as most do, or your school has a public facing website, there's a risk of defacement so that they can put their message up publicly.

And those are reasons to attack, but now the big motivator might be "why not attack?" because A.I. is making it easier than ever. Attackers are always looking for ways to practice a new technique. So the effort is significantly reduced with A.I., and they can attack more frequently and with a better, more efficient strategy.

Beth Amann: That is crazy to think about, especially the phrase you said: they want to practice their techniques. And so if you're practicing on a town's website or a state's website, you can make a much broader impact and really see how intensely you can take down an organization, and that's really scary to think about. How are cybercriminals weaponizing A.I.?

Erica Burgess: So, there's a lot of different ways that I've been using A.I. as an offensive security researcher, and I have to assume that everything I've been doing with it, an actual cybercriminal is doing out there, especially since most of these are open tools that are available on the internet, if not for free, then for very little money. So hackers have been using A.I. for years, long before chat A.I. In fact, you know, I've reported exploits that were only possible to do with A.I.

So when I train ethical hackers, I show them how to use A.I. systems safely so we can continue to emulate what real attackers are doing. For example, a very old example from the eighties is A.I. vision systems. And you can use these to bypass captcha puzzles, say, on a login page when you're prompted to say, what's in this image or which image matches which?

That's a captcha. But threat actors have gained a very specific advantage in the last two years due to chat A.I. as well. So one interesting example is access to exploits.

People like me who publish exploit material have a responsibility to balance education and awareness with safety. So we're making people aware of these things, but we're also trying to keep the secret sauce hidden, so to speak, so that, you know, not just anyone can run these things. So in the past, I've actually published security research and exploits and done what many other hackers have done, which is to include a bug in the code on purpose. So it's not easy to just run these attacks, and not just anyone can run them.

And that's intentional. The idea is that someone who can't fix a simple bug or coding error certainly doesn't know enough to be running a sophisticated attack that can compromise a server and take control over something. But even an inexperienced hacker who doesn't know anything about coding can ask one of these chat systems to fix the coding error or even generate new code so they can run the dangerous code. And consider also that, because this is generative A.I., it's capable of making new material, new code, new exploits.

So previously unknown attacks, also known as zero days, can be created with these things. And I've done this myself as part of security research, and it's fascinating, but it's also very humbling to know that a system is capable of using, you know, previously known attacks to generate new ones like I do in my work. So it's creative solutions in seconds instead of days. And I've found the combination of human attacker and A.I. to be the most powerful since a human can guide the hacking process with their own intuition and imagination, but also get that benefit of brainstorming, processing, generating code very quickly with a generative A.I.

So this combination is potent, and I've seen it do incredible things in my own work in terms of security assessments. And I expect to see even more sophisticated attacks becoming more common because of it, especially the more commonly known things like deep fakes and generated images as well.

Beth Amann: Yeah. I'm finding it very hard not to react with just like bright, wide eyes, and, oh, my gosh, that's insane to think about.

Because even just a few of the things you mentioned, like how you put a bug in the code on purpose so that if someone can't figure it out, they're not going to be able to replicate it when you're trying to raise awareness about potential cyber attacks. But generative A.I. or some type of chat A.I. could really just fix that immediately and go forth. And that provides a huge scale for cyber attacks that you could then weaponize.

But if I'm the individual who's experiencing a cyber attack, it's important to kind of know what you're up against. How would I know if A.I. was involved in that attack? And are there ways to know if it's, like you said, that combination of A.I. and human, or just A.I., or just human, if you're going to be able to defend against it?

Erica Burgess: It's very difficult for the average user to identify an A.I. social engineering attack, let alone what I was talking about before with an A.I. network or application attack, one that wouldn't require any human interaction to exploit. So I usually think of it as those two camps: social engineering attacks and application attacks.

Both are bolstered by A.I., but let's start with the social engineering side. So voice and writing impersonation, as well as video deepfakes, have all become very sophisticated. Sometimes they're used for extortion and other types of social engineering. As for software based attacks, it may only be obvious that an A.I. was used during the attack if the attacker themselves doesn't have a grasp of what they're doing, but used an A.I. to do it.

But that's only determined if you actually have caught the attacker. So it's very hard to determine that ahead of time or during the incident response of that attack. However, every system does have a weakness, and that includes generative A.I. systems. It just usually takes an expert to find those weaknesses and determine if it's a human or an A.I.

And in my experience, it usually has to be live. It's not easy even with other A.I. systems to detect A.I. systems. They only have a success rate of, like, sixty percent. So we have something called a Turing test that you can do live.

So back in 1950, long before we had the computational power to actually run modern day A.I., Alan Turing, a computer scientist, was already imagining the Turing test, which is a way to determine if you're talking to an A.I. or a human by sort of blindly asking both the human and an A.I. questions and then comparing the results.

So nowadays, that test is actually almost obsolete because current systems routinely create content and conversation that really passes as human. And as part of my weaponized A.I. talk next month, I'm actually gonna discuss how weaknesses in each type of A.I. system can be used to make Turing tests for the future. But as with everything related to A.I., the testing has to be reevaluated almost monthly to be flexible and creative enough to keep pace with a very flexible and creative technology like A.I.
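To make that blind comparison concrete, here is a minimal Python sketch of a Turing-style test harness. The questions, responders, and judge below are toy stand-ins invented for illustration, not tooling discussed in the episode; a real evaluation would pair a human judge with a live chat model.

```python
import random

def turing_test(questions, human_answer, ai_answer, judge):
    """Blind A/B test: for each question, show the judge both answers
    in random order and ask which one came from the human."""
    correct = 0
    for q in questions:
        answers = [("human", human_answer(q)), ("ai", ai_answer(q))]
        random.shuffle(answers)          # hide which source is which
        guess = judge(q, answers[0][1], answers[1][1])  # judge returns 0 or 1
        if answers[guess][0] == "human":
            correct += 1
    # ~100% means the A.I. is easy to spot; ~50% means it passes as human.
    return correct / len(questions)

# Toy stand-ins for illustration only.
questions = ["What did you have for breakfast?", "Describe your commute."]
human = lambda q: "Honestly, just coffee."
ai = lambda q: "I enjoyed a balanced breakfast of oatmeal and fresh fruit."
judge = lambda q, a, b: 0 if len(a) < len(b) else 1  # naive "shorter is human"

print(f"Judge detection rate: {turing_test(questions, human, ai, judge):.0%}")
```

The roughly sixty percent detection rate Erica mentions corresponds to a harness like this returning about 0.6 when one A.I. system is the judge, barely better than a coin flip.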

So, as far as what an average user can do, since you'd have to stop short of actually identifying it as an A.I. attack, you'd have to respond the way that you would with any social engineering attack. So I always tell clients to confirm any suspicious calls or messages by reaching out to the person on a different platform than the one they reached out on. So, for example, if you received a voicemail with a very convincing voice, you may want to email that person or find some other way to reach them, maybe in person, in order to confirm your suspicions.

Beth Amann: We'll be right back to our conversation.

Jade Champion: I hope you enjoy listening to this episode of the Tyler Tech Podcast. My name is Jade Champion, and I'm back with Dani McArthur to see what's happening this week across government associations.

This week, we're chatting about NACo, the National Association of Counties.

Dani, what's going on in the world of NACo? We have a webinar coming up with them. Right?

Dani McArthur: Hi, Jade.

Yes. Tyler Technologies and NACo are hosting a webinar during cybersecurity month about how cyber attackers move through a network. On Tuesday, October 17th, a live webinar with our cyber team will dive into the mind of a hacker alongside our county audience to give a robust understanding of the cyber attack mindset and best practices to protect critical county data.

Jade Champion: That sounds so interesting. How do our listeners register for this webinar?

Dani McArthur: Everyone listening can head over to naco.org to register, or you can check out the link in our show notes.

Jade Champion: Great. Thanks for that info, Dani. If you are unable to join the live webinar on October 17th, you can view the on demand recording that will be posted on our website. Check out our show notes for more information. Now let's get back to the Tyler Tech podcast.

Beth Amann: I'm thinking about some interesting phishing attempts I've gotten recently where it's been a text message that says, “Hi. This is Lynn Moore, the CEO of Tyler Technologies. I'm locked out of my email. Can you help me?” And I think, there's no way that's happening.

But what you're describing are more likely scenarios that you could be attacked with: someone who would potentially impersonate me and say, oh, I've been locked out of my account, or, hey, I can't make it to this meeting, can you send me the notes separately? There are new and intense attack methods that need to be reevaluated constantly to avoid potentially damaging attacks. So if we're thinking about the ways to avoid these things or to protect our networks and be proactive, let's move away from what can be the scary side of things and think about how we can use A.I. to better equip governments to stand up against cyber attacks.

So can you talk about how A.I., instead of being used as a tool for attack, can be used as a tool to fortify and protect our networks?

Erica Burgess: Yes. We can. We can defend. In fact, we can use A.I. to defend just like attackers and pen testers use A.I. to attack. So A.I. helps defenders stay flexible when hackers use new attacks in a way that traditional systems can't. So I'll give you an example.

The old way of building a defensive system was to search traffic for specific dangerous keywords in logs. So there's a concept of blacklisting, which just means disallowing certain commands or text in that log to alert on the suspicious activity.

However, security researchers like myself bypass these sort of filters all the time. You know, sometimes it's as simple as using a rare SQL command for a SQL injection that the developer just didn't think to blacklist or encoding the text in a strange way. There's just so many ways to do this. So if I think about this as a defender, I take that into consideration.
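As a concrete illustration of the bypasses Erica describes, here is a minimal Python sketch. The blacklist and payloads are invented for illustration; real web application firewalls are more sophisticated, but the failure mode is the same.

```python
# Why keyword blacklists fail: a naive filter and two classic evasions.
BLACKLIST = ["select", "union", "drop"]  # keywords the developer thought of

def naive_filter(payload: str) -> bool:
    """Return True if the payload trips the keyword blacklist."""
    lowered = payload.lower()
    return any(word in lowered for word in BLACKLIST)

payloads = {
    # Plain injection: caught, the keywords are right there.
    "plain": "1 UNION SELECT password FROM users",
    # A rarer command the developer didn't think to blacklist;
    # MySQL's HANDLER statement can read table rows without SELECT.
    "rare-keyword": "HANDLER users READ FIRST",
    # URL-encoded bytes: the filter inspects the raw string, but the
    # application decodes %4F to O and %53 to S before the query runs.
    "url-encoded": "1%20UNI%4FN%20%53ELECT%20password%20FROM%20users",
}

for name, p in payloads.items():
    print(f"{name:13s} blocked={naive_filter(p)}")
# Only the plain payload is blocked; the other two sail straight through,
# which is exactly the gap anomaly detection is meant to cover.
```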

An A.I. anomaly detection system can be used to answer very broad defense questions, like: have there been any statistically anomalous items in any of the fields in this log? Just a very broad, open ended question that A.I. is very good at, versus the old way of saying, does a specific field in a specific log contain a specific word? And the A.I. can actually cross reference all of this data for us. So it helps us find attack attempts in general instead of just looking for what we already know of, since attackers have to think outside the box in order to succeed most of the time and bypass those filters.
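Here is a toy Python sketch of that statistical approach. The log values and threshold are made up for illustration, and production systems use far richer models than a z-score, but the shift in the question being asked is the same.

```python
import statistics

def anomalies(values, threshold=2.5):
    """Flag values that sit far outside the field's usual distribution."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0  # guard against zero spread
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Instead of asking "does this field contain the word SELECT?", ask
# "is anything about this field statistically unusual?" -- here, simply
# the length of one query-string parameter across recent requests.
param_lengths = [12, 9, 14, 11, 10, 13, 12, 240, 11, 10]  # one outlier

for i in anomalies(param_lengths):
    print(f"request #{i} looks anomalous (parameter length {param_lengths[i]})")
```

An encoded or padded injection that slips past a keyword list still tends to look unusual, overly long, oddly encoded, full of rare characters, and that is what the broad statistical question catches.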

So, I know from personal experience that hacking is a very creative process sometimes. So traditional programming techniques don't really work that well against new attacks or even just lesser known published techniques. So security isn't very black and white, and we can't really predict the future, but we can use tools that are good at getting a sense of what's going on instead of running specific tests, if that makes sense.

Beth Amann: It does. It sounds like there's a scale to all of this that we wouldn't be able to achieve as individuals or smaller organizations. And so while A.I. can achieve these larger scale attacks and go through several different options because, like you said, it's a creative field, we can then also benefit from that scale. Instead of having to say, okay, I'm Erica, and I'm gonna sit down and work on fortifying this network for the next hour, if you use some type of A.I., that might multiply the impact that you'd be able to have. Right?

Erica Burgess: Exactly. And it's not even just in terms of brute forcing or scale, which traditional programming has been able to do very large scale attacks for a long time now. It's the intensely subtle attacks that these systems are able to generate, and at such a speed that it really outpaces the human attackers. So what we're finding is that the best attackers or pen testers are using a combination of A.I. and traditional techniques like scripting and traditional tools in order to succeed.

Beth Amann: Yeah. It's definitely something that is a huge topic right now. It seems new. It seems like the shiny thing to talk about.

We've talked about it pretty much in every episode for the past couple of months here on the Tyler Tech podcast, but A.I. has really been around for seventy years. You mentioned Alan Turing. We know that A.I. has been used in many different facets across technology since the middle of the twentieth century, but let's think about the future and let's think about the next five years, because we know that computing power has increased dramatically. And so a lot of developments will probably happen in the next five years compared to the previous seventy, but what does that look like for A.I., and how will it impact society and cybersecurity?

Erica Burgess: Well, it's definitely a disruptive technology for sure. There's a lot of hype around this subject, but I do think it deserves it most of the time because what we're seeing is this huge disruption. This is an arena where hackers can excel, you know, everything we as hackers build or use is a disruptive technology. If you think about it, you know, we use our systems in new and different ways every day to effectively disrupt technology.

And you can actually argue that an attacker kind of extends the functionality of a system for better or for worse. Right? So with the open ended nature of A.I., it's no different when we start to look at the future. You know, A.I. and hacking have been a great match, and that's only going to continue.


So we're going to see more creative attacks and more frequent technical attacks due to the generative nature and offensive uses of A.I. And we'll also see a world where it's really difficult to determine if content is generated by an A.I. or not. And then, of course, contrary to that, on the defense side, we're also going to see incredible improvements in that arena as well, also because of A.I. So, for instance, we can look at predictive gap analysis, the ability to crunch massive amounts of intel, changes to how we do responsible disclosure and bug reporting, and many other changes that are all because of A.I.

So it's good to remember that, you know, any powerful tool can eventually become a weapon, which is why it's so important to use it safely. But also, an important thing to consider is that we have to imagine all the positive use cases for this technology while keeping the harmful ones in mind as well. That's the same story for any revolutionary technology, really. You could think of aircraft like helicopters and how they're not inherently good or bad. Sure, they're used for war, but they're also used for putting out wildfires and hospital trips.

So I think of it like that. And I'm showing ethical hackers how to use these tools wisely, and the same goes for our defenders. You know, like I've said, it will outperform whoever isn't using it. Right?

So, in this cat and mouse game, in this arms race of cybersecurity where it's always the attackers versus the defenders, bypassing the bypasses and on and on, it's my hope that our work in both the offensive and defensive spheres will create a net positive effect. Even though this future seems really chaotic, I just tell people to remember: if you want to protect the future, you have to embrace it and think about these tools.

Beth Amann: Yeah. I think that's so important.

The march of technology will continue whether we want it to or not. And so you have to be prepared to be thinking about: what is the next innovation? How can I use it to protect my network? How can someone else use it to break into my network?

How can someone else use it to attack my systems? And it's so important to be prepared and not afraid of using these things. And I think it's so important that people like you are doing this ethical hacking, and you've mentioned folks that you're training as well. So you're sharing this with other folks and within our Tyler Technologies systems. Right?

Erica Burgess: Of course. And I do a lot of training outside of work as well. So, different hacking groups: we have DC207, which is related to DefCon, the world's largest hacking conference, and we actually continue that every month all year because we have to keep up with this stuff.

You know, it's just all happening very quickly. And I've also done volunteer lectures at schools where, you know, teachers may be fighting some of this stuff, trying to deal with cheating and plagiarism.

As I've mentioned before, it's very difficult to tell if something was generated by an A.I., and the same goes for school work. So I try to either inspire ways to check for that plagiarism, but also ways to incorporate it into the curriculum, because it's actually becoming its own skill set to use these things. And some of the best examples I've seen of incorporating this into any kind of training, whether it's, you know, scholastic or at work, is sharing the prompt that was used in a chat A.I. in order to show how you came to your solution. So it's sort of like showing your work, and that actually helps train everybody.

Beth Amann: That's fascinating. I actually just finished my MBA and I took a course in A.I. and one of the things that they brought up was let's walk through the prompts like you just described. Like, we're gonna talk about how the A.I. got this, what data it was trained on, and how it came to the answer. And let's think about how our students can use this to benefit them versus how can our students use this to cheat their way through.

So it was a nice behind the curtains look. I'm glad you're doing that with students as well and with educators.

Well, Erica, thank you so much for joining me to talk about A.I. and cybersecurity.

It is exciting and comforting to know that there is so much opportunity and potential to protect our critical systems with A.I. and to stand up agA.I.nst emerging cyber threats with the same tools that they are trying to weaponize.

Erica Burgess: Of course. Yes. I hope this isn't too scary. I mean, I think it's a kind of chaotic future. But it's very important to think about. So it's definitely fascinating.

Beth Amann: It is scary, I think, if we make it scary. That's something I have to keep telling myself. I'm sure you interface with this every day, and so you are very aware and well equipped to manage this. And for those of us who maybe are thinking about this in the most hyperbolic way, it's important to hear conversations like this of here are all the tests that are happening.

Here is all the information we're gathering. And here's how we're arming our technology specialists. But even just hearing you say "our educators," I thought, wow, it's fascinating to think about how this is something that can be democratized, and this is something that all of us can become our own version of experts in.

Erica Burgess: It's true, and it's so accessible, and that's a double edged sword, right? It is accessible to these attackers, but it's also accessible to everyone. I mean, imagine that the coding language of the future, I've heard somewhere, is just human language, like English. For example, you're able to say an English prompt to this thing and you get code.

I mean, that's huge. It's incredible how we've democratized this technology. And given how powerful it is, I'm really glad that we can have conversations like this.

Beth Amann: Yeah. Erica, well, it's cybersecurity month when this podcast is published. So there is lots of content. I encourage our listeners to take a look into the show notes and see some of the resources that we shared. Erica, thank you again for joining us.

Erica Burgess: Thank you.

Beth Amann: A.I. is a big topic these days, and there's a lot to be cognizant of, especially when it comes to protecting your critical systems. I hope you enjoyed our conversation and found hope that you can protect your communities with digital cybersecurity solutions.

Tyler is here to partner with you as you navigate a digital landscape that can seem overwhelming. So please check out our show notes for more information about our cyber offerings. For Tyler Technologies, I'm Beth Amann. Thanks for joining the Tyler Tech podcast.

We're looking to learn more about you and what you want to hear more of on the Tyler Tech podcast.

Fill out our audience survey in the show notes today to let us know how you heard of the show and what you want more of. And don't forget to rate and review the show wherever you get your podcasts.
