OpenAI, the company behind the popular chatbot ChatGPT, has announced that it is forming a new team to tackle the risks of artificial intelligence. As AI systems keep evolving and growing more capable, addressing these potential dangers is becoming increasingly important.
In a blog post, OpenAI said the team will be led by Aleksander Madry, who is taking leave from his position as a professor at MIT to join the company. The team's mission is to analyze and help prevent catastrophic risks associated with AI, ranging from cyber threats to chemical, nuclear, and biological dangers.
The team will also develop a policy to guide how OpenAI mitigates the risks posed by the cutting-edge AI models it is building. These systems, known as "frontier models," represent the next generation of AI technology, surpassing what is currently available.
OpenAI has long been focused on building artificial general intelligence (AGI): AI that can outperform humans across a wide range of tasks. Current AI systems are not there yet, but OpenAI wants to ensure it has the understanding and infrastructure in place to handle highly capable AI systems safely.
For more details, the full article is available on Bloomberg.com.