So check this out, folks. Meta, the parent company of Facebook and Instagram, is barring political campaigns and advertisers from using its fancy new generative AI advertising tools. Can you believe that? According to a company spokesperson in a Reuters exclusive, Meta ain’t playing around when it comes to regulating these ads.
On November 6th, Meta updated its help center to let everyone know about the decision. They explained that while they’re testing these nifty generative AI ad creation tools in Ads Manager, certain types of ads can’t jump on the bandwagon. We’re talking about ads related to housing, employment, credit, social issues, elections, politics, health, pharmaceuticals, and financial services. Those advertisers gotta take a backseat, my friends.
You might be wondering why they’re doing this. Well, Meta claims they’re just trying to better understand the risks of using generative AI in ads for heavily regulated industries. They wanna build the right safeguards first, you know? And I can’t say I blame ’em. Better safe than sorry, right?
Now, here’s the interesting part. Meta’s general advertising standards don’t have any specific rules on AI. However, they do have rules against running ads that contain content debunked by their fact-checking partners. That’s right, they’re fact-checking the ads. Gotta keep things legit, my friends.
But wait, there’s more! Google, not one to be left out, also updated its political content policy. They’re all about transparency, folks. They want all verified election advertisers to disclose their use of AI in their campaign content. It’s a whole thing.
Google’s standards specifically call out synthetic content that tries to pass itself off as real. And they want those disclosure notices to be “clear and conspicuous” so that users can’t miss ’em. But, here’s the kicker: ads whose synthetic content is inconsequential are exempt from the disclosure requirements. So, I guess if it’s no big deal, they don’t need to spill the beans.
Now, here’s where things get spicy. Regulators in the good ol’ United States are considering rules for political AI deepfakes ahead of the 2024 election. They’re worried, folks. Worried that AI-generated content on social media could sway voters with fake news and deepfakes. And let me tell you, that’s a legitimate concern. We gotta keep things real out there in the crazy world of social media.
Oh, and get this! There have been claims floating around that ChatGPT, one of the most popular AI chatbots, has a left-leaning political bias. But hold your horses, because those claims are hotly disputed, like a UFC fight, in the AI community and academia. It’s a whole debate, my friends.
So there you have it, folks. Meta cracking down on political AI ads, Google demanding disclosure, regulators fearing deepfakes, and chatbots getting caught up in political bias debates. It’s a wild world we’re living in, but hey, at least we’re all here together, trying to figure it out. Stay tuned for more.