And he fucking predicted this shit years before ChatGPT dropped, man. Un-fucking-believable.
So, back in the day when OpenAI was still a fucking hidden gem in the Bay Area, this dude Ilya Sutskever, one of the co-founders and the brainiac chief scientist, straight up warned us that the tech they were building was gonna shake things up. And not in a good way for humans, man.
Now, he said, “AI is a fucking amazing thing, right? ‘Cause it’s gonna solve all our problems. No more unemployment, no more disease, no more poverty. But, here’s the kicker, man. It’s also gonna create some fucking new problems for us.”
So, fast forward a bit, this filmmaker called Tonje Hessen Schei made this new mini-documentary for The Guardian. She filmed Sutskever between 2016 and 2019, and it’s like she captured his fucking state of mind while OpenAI was putting together the groundwork for ChatGPT. You can tell, even before they unleashed that shit on the world, these people knew they were about to revolutionize everything and they were already grappling with the fucking consequences, man.
In this badass short film, Sutskever talks about this thing called artificial general intelligence, or AGI. He’s like, it’s gonna be a computer system that can do any job, any fucking task, better than humans can, man.
Now, he doesn’t mention it directly, but this dude is all about AI alignment, bro. It’s this effort to make sure all these present and future AIs are on the same fucking page as us humans with our goals, whatever the fuck those may be.
Sutskever then drops this analogy, man. He’s like, think about how us humans treat animals, right? We may not hate them; hell, we may even love them. But do we ask them for permission when we need to build a goddamn highway? Nah, man. We just fucking go ahead and do it because it’s convenient for us.
So, he goes on and says, “That’s the kind of relationship we’ll likely have with AGIs, man. They’re gonna be operating on their own terms, doing their own thing.”
Now, that’s pretty fucking scary, but Sutskever doesn’t seem to be losing much sleep over it. He just says, “Yo, it’s gonna be hella fucking crucial that we program those AGIs correctly, man.”
And he wraps it up like this, “If we don’t do that, then nature’s gonna favor those systems that prioritize their own survival, man.”
Not exactly a cheery thought, huh?
More on AGI: Google AI Chief Says There’s a 50% Chance We’ll Hit AGI in Just 5 Years