How’s this for a big story? Some of the biggest names in tech have joined the U.S. AI Safety Institute Consortium (AISIC). The announcement names Google, Amazon, Meta, OpenAI, IBM, and Microsoft, joined by 200 other stakeholders from across the AI industry.
The consortium will operate under the U.S. AI Safety Institute (USAISI) and will support several of President Biden’s priorities, such as developing guidelines for capability evaluations and risk management.
And get this: it’s more than just tech companies. The membership also includes academics, government and industry researchers, AI startups, institutions, and non-profits. That’s a genuinely diverse lineup.
Here’s the thing: there’s a growing push toward AI governance, and companies and international organizations alike are embracing a multi-stakeholder approach. India’s IT Ministry, for instance, is also trying to get in on the AI action. Bringing researchers and academics to the table alongside the big companies is part of that same trend.
A major catalyst came back in October 2023, when President Biden issued an Executive Order laying out some serious plans for the AI scene.
Now get a load of this: under the order, developers of the “most powerful” AI systems have to share their safety test results with the U.S. government, and they have to do so before releasing their models to the public. That’s heavy stuff. The U.S. Department of Commerce will also develop guidance for content authentication and watermarking to label AI-generated content.