(Reuters) – Advances in artificial intelligence, such as Microsoft-backed OpenAI's ChatGPT, are complicating governments' efforts to agree on laws governing the use of the technology.
Here are the latest steps national and international governing bodies are taking to regulate AI tools:
AUSTRALIA – Planning regulations. Australia will require search engines to draft new codes to prevent the sharing of child sexual abuse material created by AI, as well as deepfake versions of the same material.
BRITAIN – At the first global AI Safety Summit, more than 25 countries, including the United States, China and the European Union, signed the "Bletchley Declaration", agreeing to work together and establish a common approach to AI oversight. Britain also said it would increase funding for AI research and set up an AI safety institute to examine risks ranging from bias and misinformation to the most extreme scenarios.
CHINA – Implemented temporary regulations. China has said it is willing to collaborate with other countries on AI safety and has published proposed security requirements for companies offering services powered by generative AI. Under its temporary measures, service providers must submit security assessments and receive clearance before releasing mass-market AI products.
EUROPEAN UNION – Planning regulations. European lawmakers are working on new AI rules, including determining which systems will be designated "high risk", and are nearing agreement on the AI Act, expected in December. European Commission President Ursula von der Leyen has called for a global panel to assess the risks and benefits of AI.
FRANCE – Investigating possible breaches related to ChatGPT.

G7 – The Group of Seven countries agreed on a code of conduct for advanced AI systems, aimed at promoting safe and trustworthy AI worldwide.
ITALY – Investigating possible breaches. Italy's authorities plan to review AI platforms and hire experts in the field.

JAPAN – Expects to introduce regulations by the end of 2023, likely closer to the U.S. approach than to the more stringent rules planned in the EU. Japan has warned OpenAI not to collect sensitive data without people's permission.
POLAND – Investigating possible breaches. Poland's Personal Data Protection Office is investigating OpenAI over a complaint alleging violations of EU data protection laws.

SPAIN – Investigating possible data breaches by ChatGPT.
UNITED NATIONS – Planning regulations. The U.N. has created an advisory body of tech company executives, government officials and academics to address the international governance of AI. The U.N. Security Council has also held a discussion on AI and its implications for global peace and security.
UNITED STATES – Seeking input on regulations. The U.S. will launch an AI safety institute to evaluate the risks of "frontier" AI models, and President Joe Biden issued an executive order requiring developers of AI systems that pose risks to share safety test results with the government.
The U.S. Congress held hearings on AI featuring Meta CEO Mark Zuckerberg and Tesla CEO Elon Musk, who called for a U.S. "referee" for AI. The U.S. Federal Trade Commission has also opened an investigation into OpenAI over possible violations of consumer protection laws.