So, OpenAI just announced some big changes at its first Developer Day: a new GPT-4 Turbo model, an Assistants API, and new multimodal capabilities, among other things.

GPT-4 Turbo is an upgraded version of the original GPT-4: it's more capable, supports a 128K context window, and is significantly cheaper, with input tokens costing 3x less and output tokens costing 2x less than GPT-4. There are price drops across the rest of the platform too.

Function calling also got an update: a model can now call multiple functions in a single turn, and it's better at returning valid function arguments. Alongside that is the new Assistants API, which is meant to help developers build agent-like experiences inside their own applications, with persistent conversation threads and built-in tools like Code Interpreter and Retrieval. It's in beta now, but open to all developers.

On the multimodal front, there's vision (GPT-4 Turbo can take images as input), image creation with DALL·E 3, and text-to-speech. OpenAI also launched something called Copyright Shield: the company will step in and defend its customers, and pay the costs incurred, if they face legal claims around copyright infringement.

Below are a few quick sketches of what the new APIs look like from Python; for the full story, check out the keynote or the documentation on OpenAI's website.
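First, the model itself. Here's a minimal sketch of calling GPT-4 Turbo through the official openai Python package (the v1-style client released around DevDay); gpt-4-1106-preview was the preview model name announced at launch, and the client reads your OPENAI_API_KEY from the environment.

```python
# Minimal GPT-4 Turbo call via the openai Python SDK (v1-style client).
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # preview name for GPT-4 Turbo at DevDay
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the DevDay announcements in one sentence."},
    ],
)
print(response.choices[0].message.content)
```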
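The function calling updates ride on the same endpoint. The sketch below uses a hypothetical get_weather function of our own to show the tools format; with the DevDay update, a single response message can carry several tool_calls (parallel function calling).

```python
# Function calling sketch: get_weather is a hypothetical function we define.
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "What's the weather in Paris and in Tokyo?"}],
    tools=tools,
)

# With parallel function calling, one message may request several tool calls at once.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```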
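The Assistants API has a different shape: you create an assistant once, then run it against persistent threads. Here's a rough sketch of the beta flow (note the client.beta.* namespaces), polling the run until it finishes; the assistant name and question are just placeholders.

```python
# Assistants API beta sketch: create an assistant, a thread, and a run, then poll.
import time
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Math Helper",  # placeholder assistant for illustration
    instructions="You are a personal math tutor.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is 3**7 divided by 5?",
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# Runs are asynchronous in the beta, so poll until the run leaves queued/in_progress.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Messages come back newest-first; text content lives in content[0].text.value.
for message in client.beta.threads.messages.list(thread_id=thread.id):
    print(message.role, message.content[0].text.value)
```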
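Finally, the multimodal pieces: vision, image creation, and text-to-speech each get their own model or endpoint. A quick tour with the same client; the image URL is a placeholder, and the docs recommend setting max_tokens explicitly for the vision preview.

```python
# Multimodal sketch: vision input, DALL·E 3 image generation, and text-to-speech.
from openai import OpenAI

client = OpenAI()

# Vision: GPT-4 Turbo with vision accepts images alongside text.
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=300,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
            ],
        }
    ],
)
print(vision.choices[0].message.content)

# Image creation: DALL·E 3 through the images endpoint.
image = client.images.generate(model="dall-e-3", prompt="a watercolor robot", size="1024x1024")
print(image.data[0].url)

# Text-to-speech: the new tts-1 model, written straight to an MP3 file.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input="Hello from DevDay!")
speech.stream_to_file("devday.mp3")
```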