In the world of technology, everyone is feeling the pressure to jump on the generative AI bandwagon. Businesses fear that if they don’t keep up with the latest trends, they’ll be left in the dust by their more innovative competitors. But what they don’t realize is that along with the benefits of generative AI come a whole host of issues that we’re not yet equipped to handle.
One major concern is the potential for sensitive information to leak through large language models. These models ingest massive amounts of content, both during training and when users feed them documents to query or summarize. That means law firms, banks, and hospitals, for example, might be exposing models to confidential data containing personally identifiable information, financial details, and health records. How can they ensure that this information doesn’t end up in the wrong hands?
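One common mitigation is to scrub obvious identifiers out of text before it ever reaches a third-party model. As a minimal illustrative sketch, not any particular vendor’s tooling, and with patterns that a real deployment would replace with a vetted PII-detection library:

```python
import re

# Illustrative patterns only; a production system would rely on a dedicated
# PII-detection library and rules tuned to its own data (hypothetical sketch).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with placeholder tokens before the text leaves the organization."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize: patient John Doe (john.doe@example.com, SSN 123-45-6789) was admitted Friday."
    # The redacted string, not the raw text, is what would be sent to the model.
    print(redact(prompt))
```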
Take the case of a developer at Samsung who pasted proprietary source code into the generative AI chatbot ChatGPT, hoping to find bugs. After executives at Apple, JPMorgan Chase, and Amazon caught wind of this, they promptly banned their workers from using ChatGPT and similar tools internally, concerned that the software’s creator, OpenAI, could train on their data and expose their trade secrets.
It’s a bit like the early days of software security, according to cybersecurity expert Alex Stamos. Back in the early 2000s, companies had line-of-business apps and product teams operating with no centralized security oversight. Now we find ourselves in a similar situation with AI: board directors and executives have little idea what risks they’re facing, and it’s nearly impossible for them to get accurate answers.
While generative AI developers are attempting to address these security and privacy concerns, there’s still a long way to go. OpenAI, for example, has promised to encrypt conversations between its chatbot and enterprise customers and not to train on that text. But there are other issues to contend with, such as the tendency of generative AI models to confidently produce false information, also known as “hallucination.”
For businesses that rely on generative AI for customer interaction, there’s also the risk of chatbots spewing inappropriate or biased responses. Content filters can help, but they’re not foolproof. The bottom line is that we’re still in the early stages of understanding the risks associated with AI, just like we were in the ’90s with basic software vulnerabilities.
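In practice, these filters screen a model’s draft reply against denylists or a separate moderation model before it reaches the customer. A minimal sketch of that pattern might look like the following, where the keyword list and the `moderation_flags` hook are hypothetical stand-ins rather than any vendor’s actual filter:

```python
from typing import Callable

# Hypothetical denylist; real systems pair this with a dedicated moderation
# model or service rather than relying on keywords alone.
BLOCKED_TERMS = {"slur_example", "confidential_project_name"}

def safe_reply(draft: str, moderation_flags: Callable[[str], bool]) -> str:
    """Screen a chatbot's draft reply before it is shown to a customer."""
    lowered = draft.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I'm sorry, I can't help with that."
    if moderation_flags(draft):  # e.g. a call to an external moderation classifier
        return "I'm sorry, I can't help with that."
    return draft

# Usage: plug in whatever classifier the business trusts.
print(safe_reply("Here is your order status.", moderation_flags=lambda text: False))
```

The layering is the point: cheap keyword checks catch the obvious cases, while the pluggable classifier handles the subtler ones, and neither is foolproof on its own.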
And let’s not forget about copyright concerns. Many generative AI tools are trained on data scraped from the internet, which raises the question of whether the businesses using them could be sued for generating content that infringes on protected works. This has led companies like Microsoft to pledge to defend paying customers who face copyright lawsuits over content produced with their AI tools.
With so many unknowns, and with departments across the organization affected, from legal and compliance to marketing, sales, and IT, these conundrums are not easy to solve. The important thing is for businesses to be aware of the hazards and be prepared to address them when they arise. The world of generative AI is uncharted territory, and we’re navigating it with caution.