So, check it out: there’s a new report from Stanford University calling out the big-shot artificial intelligence (AI) developers. According to the report, these developers need to get a lot more transparent about how they train their AI models and about the impact those models have on society. And you know what? I totally agree with them.
See, the team at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) is all about transparency. They’re saying that as these AI models get more powerful, the developers behind them are becoming less transparent. And that’s a problem, my friends.
Professor Percy Liang, who leads Stanford’s Center for Research on Foundation Models, puts it bluntly: when transparency goes down, bad things can happen. And you know what? He’s right. We’ve seen it play out in other areas like social media: when things get secretive and shady, the consequences can be serious.
But here’s the thing: even though regulators, researchers, and users are all demanding more transparency, these AI model developers are standing their ground. When OpenAI launched GPT-4, for example, they flat-out declined to share details about its architecture or training data.
Now, that’s not cool, my friends. This lack of transparency means we regular folks don’t really know the limitations of these AI models, and it makes it much harder for regulators to write meaningful policies to keep this stuff in check.
But here’s some good news. The Stanford team, along with some folks from MIT and Princeton, has come up with the Foundation Model Transparency Index. The idea is simple: grade each major developer against a long list of concrete transparency indicators (100 of them, covering how a model is built, the model itself, and how it gets used downstream) and rank the big players in the game.
And let me tell you, the ratings are not great. Nobody scores well. Meta, which you might know as NASDAQ: META, came out on top, with its Llama 2 model earning a 54% transparency rating. Not too bad, I guess. OpenAI’s GPT-4, which powers their fancy ChatGPT chatbot, came in third with a 48% rating. And Amazon’s Titan Text, which you probably haven’t even heard of, ranked dead last at a measly 12%.
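If you’re wondering where percentages like 54% even come from, here’s a minimal sketch of how an indicator-based index like this can work: answer a bunch of yes/no disclosure questions and report the fraction satisfied. To be clear, the handful of indicator names and the example verdicts below are hypothetical stand-ins I made up for illustration, not the actual FMTI rubric or its results.

```python
# Minimal sketch of indicator-based transparency scoring, in the spirit of
# the Foundation Model Transparency Index. The real index uses 100 binary
# indicators; the five indicator names and the example verdicts below are
# hypothetical stand-ins, not the actual FMTI rubric.

# Each indicator is a yes/no question about what the developer discloses.
INDICATORS = [
    "discloses training data sources",
    "discloses model architecture",
    "discloses compute used for training",
    "discloses labor practices for data work",
    "discloses known limitations and risks",
]

def transparency_score(disclosures: dict[str, bool]) -> float:
    """Return the percentage of indicators the developer satisfies."""
    satisfied = sum(disclosures.get(name, False) for name in INDICATORS)
    return 100.0 * satisfied / len(INDICATORS)

# Hypothetical developer that satisfies 3 of the 5 indicators -> 60%.
example_developer = {
    "discloses training data sources": True,
    "discloses model architecture": True,
    "discloses known limitations and risks": True,
}
print(f"transparency score: {transparency_score(example_developer):.0f}%")
```

Since the real index grades against 100 indicators, a score like Llama 2’s 54% just means roughly 54 of those disclosure boxes got checked. That’s what makes this approach nice: the number is boring arithmetic, and the interesting part is the public list of questions behind it.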
But here’s the goal, my friends. The Stanford researchers want to push these models toward more transparency by breaking a vague concept down into concrete, measurable indicators. And you know what? I’m all for it. Let’s hold these AI developers accountable and shine a light on their secretive ways.
So, yeah, that’s what’s going on with these AI developers and their lack of transparency. Stanford University is saying it’s a problem, and I’m inclined to agree. We need to know what’s going on behind the scenes, my friends. Transparency is key.