Navigating AI Safeguards in Your Law Firm
In this short transcribed call, Dustin and Trey discuss the White House's decision to require AI safeguards and what it means for law firm tech. The conversation has been lightly edited.
Trey: Did you see that the White House now requires agencies to create AI safeguards and appoint Chief AI Officers?
Dustin: Oh yeah, that's a good one. We're doing several AI projects with law firms right now. It's interesting to talk about the Chief AI Officer role, because that's what you normally see when an officer position appears in a business, right? It's the person who deals less with research and more with the risks, and with how to monetize it, things like that. Currently, the big focus we've seen from law firms has been on not shooting themselves in the foot with AI. It's been less about, "Hey, Clear Guidance, come in and make AI run our practice better," and more about, "Hey, Clear Guidance, how do we keep AI from leaking our client data or compromising our ethics?" Things like that. We've also been doing a lot more evaluating products, helping write policies about what is and isn't allowed at the firm, and doing the related research. An example is, "Oh, this Westlaw product is really interesting, but how does the AI actually work?" Law firms are almost never on the cutting edge of technology, which is not surprising. In other industries, like finance, clients are asking us how they can use AI to streamline things, make things go faster, and avoid hiring more people. Law firms are taking the more conservative approach, as they normally do with technology, and trying to keep AI from doing too much damage.
Trey: That’s all great information. If your firm needs help evaluating products, writing policies surrounding AI, or anything else AI related, Clear Guidance Partners would love to be a resource for you. Fill out this form to learn more: