Cyberattacks are proliferating rapidly: a single computer can face on the order of 2,000 attacks per day. Artificial intelligence is widely expected to play a central role in handling these breaches, yet the research community lags behind. AI developers far outnumber those working on AI safety, and cybersecurity solutions focused on AI only began to take off after 2016.
Figure 1 shows the number of research papers published each year presenting new AI solutions for cybersecurity. The numbers begin to rise in 2016, when the field started to take the problem seriously.
With governments under pressure to deploy and regulate AI, policymakers need to understand what kinds of AI are available for cybersecurity. As Max Smeets has argued, the question is not whether humans or technology will matter more in the future, but how AI can help everyone in cyber organizations do their jobs better. Policymakers need sound academic advice to navigate the political and legal complexities of this technology.
To that end, we provide a foundational overview of how AI is used for cybersecurity. We examined 700 publicly available AI algorithms, then used the NIST framework to classify the cybersecurity purpose each algorithm serves: one of the five core functions of identify, protect, detect, respond, and recover.
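The classification step can be pictured as a simple tally. The sketch below uses made-up labels rather than the actual coding of the 700 algorithms, which this kind of study performs by hand:

```python
from collections import Counter

# Hypothetical labels: each published algorithm tagged with the NIST
# framework function it primarily serves (illustrative data only).
labels = ["detect", "protect", "detect", "identify", "respond", "detect"]

counts = Counter(labels)
total = len(labels)

# Share of algorithms per NIST function, as rounded percentages.
shares = {fn: round(100 * n / total) for fn, n in counts.most_common()}
print(shares)
```

With the toy labels above, "detect" accounts for half of the sample, loosely mirroring the dominance of detection reported below.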
China is often said to be leading the development of AI for cybersecurity, but a closer look shows that the United States is in front: most authors of these AI solutions are affiliated with institutions in the United States or the European Union. Figure 2 shows the geographic distribution of the research papers.
Breaking the data down further, 47 percent of the 700 unique AI algorithms focus on detecting anomalies and cybersecurity incidents. Protection ranks second at 26 percent, followed by identification at 19 percent and response at 8 percent. Notably, this mirrors the broader shift in national cybersecurity strategies toward a strategy of cyber persistent engagement.
Figure 3 shows the distribution of these AI solutions by cybersecurity purpose.
Several caveats apply. First, the NIST framework does not map neatly onto Cyber Persistence Theory because it lacks an anticipation function; anticipating the exploitation of vulnerabilities is central to that theory. Second, states are risk-averse and reluctant to rely solely on machines to respond to cyber incidents; as with self-driving cars, the liability is too high. Third, AI models in cybersecurity require large volumes of training data, and the right kind of data to train a response algorithm is hard to come by.
As for the nature of these algorithms, almost two-thirds (64 percent) use learning methods. Communication methods account for 16 percent, followed by reasoning at 6 percent, planning at 5 percent, and combinations of different methods at 7 percent. Figure 4 breaks this down.
Combining purpose and nature, machine learning algorithms are the dominant players across all five cybersecurity functions. They support intrusion and anomaly detection, block malicious domains, prevent data leakage, defend against malware, and analyze logs.
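To make the anomaly-detection use case concrete, here is a minimal sketch of the idea behind such systems: flag observations that deviate sharply from a learned baseline. This toy z-score detector over hypothetical hourly login counts stands in for the far richer statistical and machine learning models the surveyed algorithms actually use:

```python
import statistics

def detect_anomalies(values, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean of the series (a toy baseline model)."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # perfectly flat series: nothing stands out
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Hypothetical hourly login counts; the spike at index 5 could
# indicate a brute-force attempt.
logins = [12, 15, 11, 14, 13, 220, 12, 16]
print(detect_anomalies(logins))  # -> [5]
```

Production systems replace the single z-score with models trained on many features, but the core pattern, learning what "normal" looks like and alerting on deviations, is the same.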
Finally, both cyberspace and AI are entangled in great power politics, and as they evolve, regulation risks unintentionally overreaching. Lawmakers cannot predict where this rapid development is headed, and excessive regulation could slow the progress of digital technologies. A balance is therefore needed: regulation of AI in cyberspace should be kept minimal to give the United States an edge in global power politics, while democratic values and civil liberties are protected. Transparency guidelines and public oversight should be put in place to ensure AI is used ethically and responsibly in cybersecurity.