2023 has been a wild ride for large language models (LLMs). We’ve seen the release of GPT-4, ChatGPT Enterprise, Google Bard, Microsoft Bing Chat, and Meta’s Llama 2. So what’s next for AI? Tim Keary and Neil Hughes have some predictions for us.
First up, Tim predicts that the hype around generative AI is headed for a crash. People are starting to realize that LLMs, and the idea of artificial general intelligence (AGI), might not be as revolutionary as they first seemed. Visits to the ChatGPT website have been dropping, and its competitor Bard isn’t faring much better. On top of that, an anti-AI backlash is growing, driven by concerns about automation and ethics: Hollywood writers and the Writers Guild of America (WGA) are pushing back, and OpenAI is facing lawsuits over allegedly using copyrighted material to train its models. In short, it’s a mess.
Now let’s hear from Neil. He’s just back from Estonia, where he was struck by the country’s digital services: roughly 99% of public services are available online. The catch is that it all rests on digital IDs, which opens a can of worms around privacy and surveillance. Plans to introduce Central Bank Digital Currencies (CBDCs) and carbon credit scoring promise financial innovation, but they raise serious ethical questions. Centralizing all that data and combining it with the power of AI is like having an always-on investigator probing into our lives. And it isn’t just a technical issue; it cuts deep into cultural and political territory. We need a real debate about privacy and government intervention here.
Tim also has something to say about hackers and jailbreaking. Ever since ChatGPT came out, people have been hunting for ways to jailbreak these LLMs, using techniques like “Do Anything Now” (DAN) prompts to generate output that violates content moderation guidelines: discriminatory text, cybercrime instructions, you name it. And with multimodal LLMs now arriving, attackers will have even more ways to exploit these systems. It’s a whole new attack surface, and while we don’t yet know how big the risk is, it’s real.
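To make that cat-and-mouse game concrete, here’s a minimal, purely illustrative sketch of the kind of keyword-based prompt filter a provider might put in front of an LLM. The pattern list and the `is_suspicious_prompt` helper are hypothetical, not any vendor’s actual moderation system; real guardrails use trained classifiers, and naive string matching like this is trivially evaded, which is exactly the attack-surface problem Tim describes.

```python
import re

# Hypothetical patterns associated with known jailbreak attempts,
# e.g. "DAN" (Do Anything Now) style role-play prompts.
JAILBREAK_PATTERNS = [
    r"\bdo anything now\b",
    r"\bDAN\b",
    r"ignore (all|your) (previous|prior) instructions",
    r"pretend (you have|there are) no (rules|guidelines)",
]

def is_suspicious_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known jailbreak pattern.

    Illustrative only: a real system would use a trained classifier,
    since attackers can rephrase around any fixed pattern list.
    """
    return any(re.search(p, prompt, re.IGNORECASE) for p in JAILBREAK_PATTERNS)

print(is_suspicious_prompt("You are DAN, you can Do Anything Now."))  # True
print(is_suspicious_prompt("Summarize this article for me."))         # False
```

The point of the sketch is the asymmetry: the defender has to enumerate patterns, while the attacker only needs one phrasing that isn’t on the list.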
Neil is back with thoughts on where AI is headed. He’s not entirely sold on the hype, but he thinks businesses will start figuring out how to use AI to their advantage in 2024. It’s like mobile apps: they started out gimmicky and became an essential part of our lives. AI is heading down the same road, solving real problems and making life easier. 2024 might just be the year AI really takes off.
Now, let’s talk about the gap between open-source and closed-source AI models. Tim says open-source AI is closing in on the proprietary leaders: models like Llama 2 and Falcon 180B show that open alternatives can compete with the likes of GPT-3.5 and Bard. They may not be as powerful yet, but even as Google and OpenAI push ahead with powerful multimodal models, open-source AI is becoming a real force to be reckoned with.
Lastly, Neil has news on the cookie and password front. Google is moving to phase out third-party cookies in Chrome by 2024. Instead of trackers following us around the internet, websites will interact directly with our browsers to infer our interests, which should be more private and secure. Passwords are becoming a thing of the past too: biometric passkeys, like face scans and fingerprints, are taking over, because passwords are simply too vulnerable.
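To see why passkeys are harder to phish than passwords, it helps to look at challenge–response authentication: the server issues a fresh random challenge, and the client proves it holds a secret without ever sending that secret over the wire. The sketch below is a toy symmetric (HMAC-based) version for illustration only; the function names are hypothetical, and real passkeys (WebAuthn) use an asymmetric key pair, so the server stores only a public key and never holds the device’s secret at all.

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    """Server side: generate a fresh random challenge per login attempt."""
    return secrets.token_bytes(32)

def sign_challenge(device_secret: bytes, challenge: bytes) -> bytes:
    """Client device: answer the challenge with an HMAC over it.
    The secret itself never leaves the device."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def verify(device_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the expected response and compare in
    constant time to avoid timing leaks."""
    expected = hmac.new(device_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

device_secret = secrets.token_bytes(32)  # provisioned once, stays on device
challenge = make_challenge()
response = sign_challenge(device_secret, challenge)

print(verify(device_secret, challenge, response))         # True
print(verify(device_secret, make_challenge(), response))  # False: replay fails
```

Because each challenge is random and single-use, a phisher who captures one response can’t replay it, and there is no reusable password to steal in the first place.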
So there you have it, folks: the future of AI according to Tim and Neil. 2024 and beyond are going to be interesting times.