Here's a notable one: the law firm Michael Best & Friedrich has barred its lawyers and staff from using ChatGPT on the job. The firm reached that decision after consulting its clients, and its insurers weighed in as well. Gotta cover all your bases, right?
Sarah Alt, chief process and AI officer at Michael Best, says the firm had to ask itself a key question: could it use publicly available generative AI tools while keeping things safe for staff and clients? The answer was a flat "no." Many client companies, it turns out, remain skeptical of ChatGPT because of data-privacy concerns.
Here's the interesting part, though: other major law firms are handling it differently. Some allow limited use of the technology on the job; others are building or buying their own AI tools. It's a whole range of approaches.
One thing everyone agrees on: lawyers and staff need training, and fast. Jeffrey Chivers, who teaches AI at Yale Law School, says developing technical competence in the legal profession is crucial, because these firms can't afford errors.
So firms are racing to stand up solid internal training programs. The goal is to avoid mishaps like exposing sensitive client data or repeating inaccurate output. Two New York lawyers famously got in trouble after submitting a court brief that cited AI-fabricated court decisions. Not a good look.
Some firms are offering in-person seminars taught by outside specialists, supplemented with video-learning presentations. It's all about getting people up to speed on how generative AI programs work, while emphasizing data privacy, accuracy, and the other issues users are likely to encounter.
One firm, Orrick Herrington & Sutcliffe, has partnered with the tech training provider AltaClaro to develop a firm-wide curriculum on "prompt engineering" — that is, structuring and refining the queries that steer what generative AI produces.
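To make the idea concrete, here's a minimal sketch of what "structuring a prompt" can mean in practice. This is purely illustrative — it's not AltaClaro's curriculum or Orrick's tooling — and the `build_prompt` function and its fields (role, task, constraints) are hypothetical conventions, not any firm's standard:

```python
def build_prompt(role: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt: a role, a task, then explicit constraints."""
    lines = [f"You are {role}.", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# A structured legal-research prompt built from labeled parts,
# rather than a single free-form question.
prompt = build_prompt(
    role="a legal research assistant",
    task="Summarize the holding of the attached opinion in plain English.",
    constraints=[
        "Cite only cases that appear in the provided text.",
        "Flag any statement you cannot verify from the source.",
    ],
)
print(prompt)
```

The point of a template like this is that refining a prompt becomes editing named parts (tighten a constraint, reword the task) instead of rewriting a blob of text — which is roughly what a prompt-engineering curriculum teaches.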
As firms grapple with the new technology, they're also writing policies to minimize its risks. BakerHostetler, for instance, issued a directive that staff should not use large language models with client data, citing concerns about breaches.
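A policy like that can be backed by a simple technical guardrail. Here's one minimal sketch, assuming a firm keeps a list of client identifiers to screen for — the `CLIENT_NAMES` set and the `safe_to_submit` check are hypothetical examples, not BakerHostetler's actual controls:

```python
# Hypothetical list of client identifiers a firm wants kept out of
# prompts sent to external models. Placeholder names only.
CLIENT_NAMES = {"Acme Corp", "Globex LLC"}

def safe_to_submit(prompt: str) -> bool:
    """Return False if the prompt mentions any known client name (case-insensitive)."""
    lowered = prompt.lower()
    return not any(name.lower() in lowered for name in CLIENT_NAMES)

print(safe_to_submit("Summarize the merger terms for Acme Corp"))   # False
print(safe_to_submit("Explain the elements of promissory estoppel"))  # True
```

A real deployment would need far more than substring matching (matter numbers, document fingerprints, and so on), but even a crude pre-submission check turns a written directive into something enforceable.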
And let's not forget law schools, which are trying to keep up with the fast-evolving technology too. Students want to learn about it, professors are well aware of that demand, and electives like Generative AI for Lawyers are already on offer. The million-dollar question is whether it gets integrated into the core law school curriculum. That's the tricky part.
Arizona State University drew attention when it announced that prospective students may use chatbots to help with their applications. Talk about being ahead of the game.
So the legal world is adapting to AI, a whole new frontier that law firms and law schools alike are still learning to navigate. Stay tuned.