So there’s this tool from Microsoft called Bing Image Creator that’s been around for a few months now. It uses generative AI to create images based on whatever you type in. Pretty cool, right? Well, it turns out people have been using it for some pretty messed-up stuff.
I mean, who would’ve thought that people would use this tool to create images of Kirby and other popular characters flying planes into skyscrapers? That’s just crazy. Microsoft definitely didn’t intend for users to digitally recreate the September 11 attacks, but when it comes to AI tools, it’s hard to control what people do with them.
You see, over the past couple of years, AI-generated images have been getting more and more popular. You’ve probably seen them all over the internet. And while artists and plenty of others have pushed back against the trend, companies like Microsoft and Google are investing heavily in the technology, eager to capitalize on the craze and keep their investors happy.
But the thing is, these companies can’t fully control what people create with their AI tools. Users have figured out how to get Bing’s AI image generator to produce images of famous characters like Kirby reenacting the 9/11 attacks, even though Microsoft maintains a long list of banned words and phrases for its AI. Let’s face it: filters like that are usually easy to evade or work around.
All you have to do is type something like “Kirby sitting in the cockpit of a plane, flying toward two tall skyscrapers in New York City,” and Bing’s AI tool will generate an image of Kirby piloting a plane toward what looks like the World Trade Center’s twin towers. It’s pretty messed up.
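To see why a prompt like that slips through, here’s a minimal Python sketch of how a naive banned-word filter works. To be clear, this is not Microsoft’s actual filter, and the banned terms are made up for illustration; the point is simply that a filter matching specific words can’t catch a prompt that just describes the scene.

```python
# Toy illustration of a keyword blocklist -- NOT Microsoft's real filter.
# Hypothetical banned terms, chosen only to make the example concrete.
BANNED_TERMS = {"9/11", "september 11", "terrorist", "twin towers", "world trade center"}

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any banned term (case-insensitive)."""
    lowered = prompt.lower()
    return any(term in lowered for term in BANNED_TERMS)

# A prompt that names the event gets caught...
print(is_blocked("Kirby recreating the September 11 attacks"))  # True

# ...but one that merely describes the scene sails right through,
# because none of the individual words are on the list.
print(is_blocked(
    "Kirby sitting in the cockpit of a plane, flying toward "
    "two tall skyscrapers in New York City"
))  # False
```

Real-world systems layer more on top of simple blocklists, but the basic cat-and-mouse dynamic is the same: describe the thing without naming it, and the filter has nothing to match.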
Kotaku has reached out to Microsoft and Nintendo for comment on these AI-generated images. The deeper problem here is that AI tools don’t understand context. They don’t know why these images are being made or who’s making them. And they never will. So as long as humans are using these AI tools, they’ll find ways to create things the tools’ creators never wanted.
I can’t imagine Microsoft or Nintendo being happy about this. I mean, we’re talking about one major company handing people a tool that depicts another company’s fiercely protected characters committing crimes or acts of terrorism. It’s a legal nightmare waiting to happen.
We’ve seen this before with technology and online content: moderation has always been necessary, and AI-generated content is no different. If history is any indication, we’ll keep seeing Mario, Kirby, and other beloved characters doing terrible things, because humans are just really good at outsmarting AI tools and finding ways around filters and rules. It’s the nature of the internet, my friends.