AI In Brief: OpenAI is rolling out a notable upgrade to GPT-4: the model can now answer questions about submitted images. Users upload a picture and then chat with GPT-4 about its contents. OpenAI acknowledges the risks of visual input and has added mitigations to prevent the model from exposing private data or generating inappropriate material from images, including blocking facial recognition and declining to comment on people's appearance.
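To make the workflow concrete, here is a minimal sketch of how an image question might be packaged in OpenAI's chat-completion format. The model identifier and the exact payload schema are assumptions based on OpenAI's documented vision API; the sketch only builds the request payload rather than sending it, since a live call needs an API key.

```python
# Sketch: pairing an image with a question in a chat-completion style payload.
# The model name "gpt-4-vision-preview" is an assumed identifier; consult
# OpenAI's API reference for the current one. No request is actually sent.
import json

def build_image_question(image_url: str, question: str) -> dict:
    """Assemble a chat payload that asks a question about an image URL."""
    return {
        "model": "gpt-4-vision-preview",  # assumption: vision-capable model ID
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_image_question(
    "https://example.com/photo.jpg",
    "What is happening in this image?",
)
print(json.dumps(payload, indent=2))
```

With a configured client, this payload would be passed to the chat completions endpoint; the mixed text-plus-image content list is what distinguishes a vision request from a plain text one.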
The safeguards don't stop there. OpenAI has also trained GPT-4 to refuse to solve CAPTCHAs or describe illicit behavior, and has worked to curb false information in its outputs. The model has real limitations, though: it can miss text, mathematical symbols, colors, and spatial locations in images. OpenAI warns against relying on it for high-risk tasks, such as identifying illegal drugs or judging whether mushrooms are safe to eat.
There is a broader concern as well: GPT-4 can still generate text and imagery capable of spreading serious disinformation. Pairing statements with images appears to make people more likely to believe them, whether true or false, and content accompanied by an image also draws higher engagement.
ChatGPT Plus subscribers can now try GPT-4V and its image capabilities. OpenAI is also adding voice input support on iOS and Android for Plus users, enabling back-and-forth spoken conversations with the assistant.
Meanwhile, Mistral, the low-profile French AI startup, has released a large language model with 7.3 billion parameters that it claims outperforms some of the competition. The model ships with no moderation or censorship, so it can produce unfiltered output. Mistral acknowledges that this is risky and is asking the community to help establish guardrails for using the model responsibly.
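Since the model itself ships without moderation, any guardrail lives in the deployer's code. The following is a deliberately toy, hypothetical sketch of a post-generation output filter; a real deployment would need far more than a phrase blocklist (classifiers, red-teaming, policy review), and the blocklist contents here are invented for illustration.

```python
# Hypothetical post-generation guardrail for an unmoderated model's output.
# The blocklist is a toy stand-in; real moderation is much more involved.
BLOCKLIST = {"home address", "credit card number"}  # illustrative categories

def filter_output(text: str) -> str:
    """Withhold generated text that trips the (toy) blocklist."""
    lowered = text.lower()
    for phrase in BLOCKLIST:
        if phrase in lowered:
            return "[output withheld by moderation filter]"
    return text

print(filter_output("The capital of France is Paris."))   # passes through
print(filter_output("Sure, her Home Address is ..."))     # withheld
```

The point is architectural rather than practical: with an unmoderated model, this layer is the deployer's responsibility, which is exactly the gap Mistral is asking the community to help fill.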
Meta, for its part, has increased the maximum input prompt length of its Llama 2 models to 32,768 tokens. The longer context lets the models take in more data for complex tasks, such as summarizing long reports or searching for information across larger contexts. For comparison, Anthropic's Claude can handle up to 100,000 tokens, enough to cover hundreds of pages of text.
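A quick sketch of why those numbers matter: a prompt plus the requested generation must fit inside the model's context window. The token counts below use a naive whitespace split as a stand-in for a real tokenizer (an assumption; production code would use the model's own tokenizer, which counts differently).

```python
# Context windows from the story above.
LLAMA2_LONG_CONTEXT = 32_768   # Llama 2's extended input length
CLAUDE_CONTEXT = 100_000       # Claude's larger window

def fits_in_context(prompt: str, max_new_tokens: int, context_window: int) -> bool:
    """Check whether prompt + generation budget fits a model's window.

    Whitespace splitting is a crude token-count approximation (assumption).
    """
    prompt_tokens = len(prompt.split())
    return prompt_tokens + max_new_tokens <= context_window

# A long report of roughly 40,000 "tokens":
report = "word " * 40_000
print(fits_in_context(report, 512, LLAMA2_LONG_CONTEXT))  # False: too long
print(fits_in_context(report, 512, CLAUDE_CONTEXT))       # True: fits
```

This is the practical difference the extended windows buy: a document that overflows one model's context can be summarized in a single pass by another.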
Meta's decision to release its Llama 2 models to developers and academics hasn't pleased everyone, however. Protesters gathered outside Meta's office in San Francisco to raise awareness of the risks of releasing such models without safeguards, calling on the company to act more responsibly.
Elsewhere, Amazon's Alexa may start using customer conversations to train its AI, according to a departing Amazon executive. The company is reportedly considering a paid tier for Alexa and may use people's conversations to improve its large language model. Amazon says users will know when Alexa is listening, but it may be worth double-checking your settings just in case.
In security news, the Department of Energy's Oak Ridge National Lab has launched a Center for AI Security Research. Its mission is to figure out how to protect machine learning systems from adversarial attacks, teaming up with other agencies to study weaknesses and vulnerabilities in AI and keep bad actors from tampering with the algorithms.
And last but not least, AWS has opened up its Bedrock platform to enterprises. Bedrock offers a range of generative AI models, including Llama 2 and Titan Embeddings, with an emphasis on security, choice, and performance to help enterprises take advantage of the technology. Customers including Adidas, BMW, and the PGA Tour are already on board.
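For a sense of what using Bedrock looks like, here is a hedged sketch of preparing a request for a Bedrock-hosted Llama 2 model. The model ID and body schema are assumptions based on Bedrock's per-provider request formats; the sketch stops short of the actual call, which requires AWS credentials and a configured region via `boto3`.

```python
# Sketch: building an InvokeModel-style request for a Bedrock-hosted model.
# The modelId and body fields are assumed from Bedrock's documented format
# for Meta's models; nothing is sent without credentials.
import json

def build_bedrock_request(prompt: str, max_gen_len: int = 256) -> dict:
    """Prepare keyword arguments for a Bedrock runtime invocation."""
    return {
        "modelId": "meta.llama2-13b-chat-v1",  # assumed Bedrock identifier
        "contentType": "application/json",
        "body": json.dumps({"prompt": prompt, "max_gen_len": max_gen_len}),
    }

request = build_bedrock_request("Summarize this quarterly report: ...")
# With AWS credentials configured, this would be sent roughly as:
#   boto3.client("bedrock-runtime").invoke_model(**request)
print(request["modelId"])
```

The notable design point is that Bedrock exposes many providers' models behind one invocation API, with only the JSON body schema varying per model family.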
That's all for this edition. Stay tuned for more AI updates.