The UK government just released a report claiming that AI could make it easier for threat actors to carry out cyber and terrorist attacks. It’s a scary thought, and according to Avivah Litan, VP and Distinguished Analyst at Gartner, unless some form of international governance is in place, there’s not much we can do to prevent the malicious use of AI. She said, “There are existential risks where AI stops taking instructions from human beings and it starts doing what it wants, and we become dispensable, so there is no guarantee we will survive because we would just be a puppet. And that’s the fear of it all.”
You see, AI has become more and more accessible over the years. We’ve got all kinds of generative AI tools out there, like ChatGPT, that people are using to be more productive at work. But with that growing availability comes a growing risk of misuse. Avivah explains that one of the biggest risks we face is misinformation and disinformation. It’s crazy easy to create fake information with AI, especially generated text, and that can lead to some seriously bad decision-making and societal polarization.
But it’s not just about misinformation. AI could also be used by individuals without any training or experience to carry out sophisticated cyber attacks. And terrorist groups? Yeah, they could use AI to enhance their propaganda efforts, recruit new members, and plan attacks. It’s a whole other level of danger.
That’s why regulation and governance of AI are so important. In fact, earlier this year, over 30,000 people, including big names in the tech industry like Elon Musk and Steve Wozniak, signed an open letter calling for a pause on training any AI systems more powerful than GPT-4. Even the companies developing these powerful AI models understand the need for regulation and have voluntarily committed to guidelines from the Biden administration.
But here’s the thing: getting global governance on AI is really tough. There’s a lot of cooperation that needs to happen, but some nations, like China, might not be so willing to participate in a meaningful way. It’s a challenging situation, to say the least.
On top of that, there’s concern that too much regulation could harm competition. Businesses are worried about losing their competitive edge if strict regulations are put in place. So, finding the right balance is key.
But here’s the good news: AI also has the potential to address some major challenges. In cybersecurity, there’s a persistent skills gap that leaves businesses vulnerable to attacks. AI can help close that gap by re-skilling employees and mitigating those threats. We just need to make sure we’re pursuing upskilling and regulation at the same time.
And let’s not forget about the opportunities AI presents. In healthcare, for example, AI could revolutionize the way we treat diseases and develop personalized medicine. It could speed up the process and provide targeted solutions. That’s a game-changer right there.
Look, there’s no denying that AI carries some serious risks. Existential risks, even. But we can’t let those risks paralyze us. We need to keep exploring and implementing controls to manage them. It’s a process, but it’s doable. And if we do it right, the future of AI could be a world of possibilities.