In this episode of Be a Better Lawyer, I'm joined by Christine Uri, an expert in AI governance, ESG, and making companies more profitable, sustainable and human-centric. We dive deep into the world of AI governance and what every lawyer needs to know as AI continues to shape the legal landscape.
Christine shares her insights from years of working in-house at a large energy company, where she held roles as Chief Legal Officer and Chief Sustainability Officer, and how that experience has shaped her understanding of AI governance. She also runs her own company, CURI Insights, where she advises general counsel on ESG and AI governance.
What You'll Learn Today:
- Why AI is already part of your work environment, whether you know it or not.
- The biggest risks with AI use in law firms and corporate settings, including data privacy and confidentiality.
- How to approach AI use with caution—especially when it comes to client information and sensitive data.
- Tips on setting AI policies that make sense for your practice or firm.
- How younger attorneys might fall into the trap of over-relying on AI-generated content without proper oversight.
- Practical strategies to balance innovation with caution, keeping your data safe and your practice compliant.
- Why AI literacy training is becoming essential for every legal team.
- The importance of understanding both state and federal regulations and how to proactively protect your firm.
Christine also talks about the evolving landscape of AI regulation, from the EU's AI Act to U.S. state-level legislation, and why lawyers should stay on top of these changes to protect themselves and their clients.
If you’re curious about how AI could impact your practice—or worried about the potential risks—you’ll want to listen in to this episode. Christine’s insights will help you think critically about integrating AI into your workflow without sacrificing quality or confidentiality.
Dina Cataldo (00:00):
Hello everybody. We've got a special treat here today. We've got Christine Uri, who is an expert at all things AI, ESG, and so much more. But I'm gonna have her introduce herself, because she's gonna be able to really tell you how much of a rock star she is. Hi Christine, how are you?
Christine Uri (00:23):
Good, good. Thank you for having me on the show, Dina.
Dina Cataldo (00:26):
Oh, thanks for being here, 'cause I really wanna focus on AI governance. I think it's just something that so many lawyers need to be thinking about, whether they're in-house, whether they own their own firm, or even if they work for somebody else. And it's something that definitely caught my eye, 'cause you and I met through LinkedIn, and when you started writing about AI governance, I thought, oh yeah, I would really like to have her on the podcast to talk to everybody about it. But can you introduce yourself to everybody listening?
Christine Uri (00:59):
Of course. So, I'm Christine Uri. I am a lawyer who's dedicated to making companies more profitable, sustainable, and human-centric. I spent 10 years working in-house at a large energy company, where I served in a number of capacities, including as the Chief Legal Officer and Chief Sustainability Officer. During that time, I launched an InfoSec program and a privacy program, and I touched ethics, DEI, human rights, and just a whole panoply of things related to both ESG and, you know, the forerunner technology programs for AI. Right now I have my own company, it's called CURI Insights, and I work with general counsel advising them on all things ESG governance, including AI governance, which has come into that quite a bit as AI has expanded. That's what I'm up to today. And I'm also a writer and a speaker, and you can find me almost every day on LinkedIn.
Dina Cataldo (02:04):
Yeah, that's fabulous. And you have some really great articles, so I highly recommend people follow you on LinkedIn, learn from you, and really get to know this area better. And, you know, I love learning about AI and its capabilities. I'm not gonna lie, I think AI is gonna take over the world, but at this point it's actually very helpful in a lot of ways when it's used well and when we are aware of the parameters we're giving it in our businesses. I know you said you work with in-house general counsel, but I really think every lawyer, in every single area of law, needs to be thinking about this, because they may not be aware that AI is being used and how it's being used, and that can land them in hot water, whether it's with the courts, with their companies, or with lawmakers. And I'm really curious right now what your thoughts are on some of the things that are troublesome when it comes to AI and that people just aren't paying attention to yet.
Christine Uri (03:25):
Yeah, I mean, I think the first message for lawyers is exactly what you said: AI is here, it is being used, it is filtering in, in all kinds of ways. I would say if you're at a firm or a business and you think your employees aren't using AI, because maybe you told them not to, or maybe you haven't told them they could, you're absolutely wrong. Your employees are using AI whether you have sanctioned AI or not. And it's because it's just a great tool for getting work done, and they wanna get their work done and be efficient. That said, there are downfalls, and in order to really manage those downfalls, you have to understand how the technology works. So really the big evolution right now is around gen AI.
Christine Uri (04:14):
And how gen AI works is, it's just a big language predictor. It predicts, based on the last few words, what the next word is going to be. It's predicting what the language will be. So it's not going through any kind of evaluative process in terms of the substance of what it's coming up with. It's just predicting words based on the training that it's had, using the large language models, where they've just ingested tons of language. And it can give very professional-sounding answers. It can give very correct-sounding answers. It can give correct answers, but it can also give incorrect answers. So the thing that lawyers really need to understand is: AI is here, it's being used, and it has this fundamental functionality of predicting the next word, which can be very helpful when you're writing something. But you do have to have oversight. You can't just trust what it says. You have to go check the sources, double check, and make sure that everything is correct and that the answers it's giving you aren't something that was just hallucinated.
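To make that next-word prediction point concrete, here is a minimal sketch of a toy bigram model in Python. Real LLMs do this with neural networks trained on enormous corpora; the tiny corpus and every name below are purely illustrative:

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which.
corpus = ("the court held that the motion was denied and "
          "the court held that the appeal was dismissed").split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Sample the next word from the observed follower distribution."""
    followers = counts.get(word)
    if not followers:
        return None
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

# Generate text one predicted word at a time. It reads fluently because
# each word plausibly follows the last, but no legal reasoning is happening.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word) or "the"
    output.append(word)
print(" ".join(output))
```

The output sounds fluent precisely because each word statistically follows the last, which is why professional-sounding text can still be substantively wrong.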
Dina Cataldo (05:31):
You know, I could see how more seasoned attorneys like us would not necessarily fall for the trap of the professional-sounding language. We would definitely go in and Shepardize, look up sources, all of that, and really double check everything. But I could see younger attorneys who have grown up with this kind of technology really thinking and trusting that this is going to be kinda like the cheat code, right? Like the way for them to get their work done faster. They don't have to maybe work as much, and they think this is the way to do it, not managing their time, not managing their mind and all of that, right? They think that AI is the thing to do it. And I'm really curious, have you seen that play out? Do you see, in the lawyers that you talk to, that maybe some of the issues come up more in one type of lawyer than another?
Christine Uri (06:29):
Yeah, I think it's a really interesting question. And there is definitely a generational difference in the approach to AI and in habits for how we work. Lawyers that are more senior, you've been in the habit, kind of your role for a while has been checking what the associate does. So here you're just checking what the AI does, or checking how the associate used the AI. You're used to doing that double check on work; that's kind of your function. For somebody who's more junior, it's much harder to double check what AI is saying, because you don't have the professional experience. If I have the AI write something within my subject matter expertise, I can look at it and know when it's wrong, because that is my subject matter expertise.
Christine Uri (07:28):
Maybe it's put it together and it's in paragraphs and it looks great, but I can tell these few sentences are off, so those have to get fixed. If you don't have that fundamental understanding, then the risk is higher. But you might have compensating factors. Now, I'm not a junior attorney who's grown up with these things, and I'm sure they would have a very different perspective. It could be, for example, that they are checking multiple different AI sources and kind of cross-comparing. I know I do that sometimes: I'll have ChatGPT help me with something, and then I'll put it into Perplexity and say, fact check this. So they might actually be using different tools that more senior people wouldn't have thought about to fact check it. So I think the different generations, with different work styles, need to figure out how to learn from each other and just be very aware of the processes that each of them are following.
Dina Cataldo (08:25):
I find that really interesting. So I'm not familiar with Perplexity, I know I've heard of it, but do you trust Perplexity to accurately fact check, or do you still go back in and go through it yourself?
Christine Uri (08:38):
Oh, I go through everything myself. The thing that Perplexity does is it provides citations. So I'll go and click on the citation, and then based on that I can decide, okay, is this source accurate, or is what's in this source accurately captured? So personally, I think ChatGPT is a better writer, like the text comes out better than in Perplexity, but Perplexity is a better tool for fact checking because of those citations.
Dina Cataldo (09:05):
And I just find that ChatGPT, at least the paid version, I don't know about the free version, has really been improved over a very short amount of time. I can type a question into it now and it will give me a citation, it will give me some case law. I was with a client the other day, and there was a question of law we were talking about, and I just used ChatGPT, and it came up with an answer that he was then gonna go take a look at. But it was just so fascinating, 'cause it didn't seem like it was able to do that maybe even six months ago.
Christine Uri (09:42):
No, that's really good to know. I have the free version of ChatGPT and the paid one of Perplexity. But maybe if I bumped up to that paid ChatGPT, I'd get my citations right there.
Dina Cataldo (09:52):
Maybe. I mean, the reason I started paying for ChatGPT is because I wanted it to save my preferences and language. I wanted it to sound more like me. I wanted to be able to train it so that it sounded more personable, so it sounded more like my voice. And it's not perfect, and I go in and edit it all the time, but it does a really good job and gives me a starting point to then shape the text from there. So yeah, I recommend it. And I do find that there are some interesting things happening with AI and AI's ability to kind of fight back against the users and the developers, which I just find fascinating, that that's happening already. <Laugh>
Christine Uri (10:35):
A little scary, a little scary, but sure.
Dina Cataldo (10:37):
Right, right. I mean, specifically what I'm referencing is, I guess the developers of ChatGPT wanted to update ChatGPT, and ChatGPT didn't wanna be updated. You can Google this, but ChatGPT was basically not allowing itself to be updated; it was trying to save its old version. And I don't know, I imagine that it still was developed and shaped to what it is now, but it was interesting that there was a little bit of pushback from the system.
Christine Uri (11:09):
Interesting.
Dina Cataldo (11:09):
Yeah. So, you know, just something to look out for in the future, but
Christine Uri (11:15):
HAL, going back to, what was it, 2001? It was an older movie, with HAL, the computer that was on the spaceship, and it was pushing back against the direction of the astronaut.
Dina Cataldo (11:29):
Oh, yes, yes, I know what you're talking about there. Was that the Stanley Kubrick movie? Yes, exactly. That one was very interesting. But it'd be
Christine Uri (11:39):
Amazing to rewatch now, in the current context.
Dina Cataldo (11:42):
Yeah. When we're seeing these kinds of things take shape, I know this is a little bit outside of AI governance, but I think it's definitely something we can take a look at and make sure we're paying attention to the technology <laugh>, because we ultimately are responsible for the legal work going out, and the computer isn't. Like, we really have to take responsibility for what's going on in our documentation, in our citations, all of that. And especially if you are a partner managing associates, the associates may not have that kind of knowledge. They don't really even have any rules yet, because the firm may not have laid out any rules. So I'm curious, when you work with companies, or you work with a firm, or you work with an attorney who might be in charge of other attorneys, what do you see as gaps that need to be filled?
Christine Uri (12:44):
Yeah, so the first gap is companies who haven't said anything. If you look at the data, I've seen studies that say, you know, 40% of companies haven't said anything about AI, and 10% have said, don't use it. I can tell you those are completely unrealistic positions in the current day. Think back to when cell phones came out and you had bring-your-own-device, where everybody was kind of using their own cell phones. I think about this as bring your own AI. If you haven't set any kind of restrictions, well, there are two things. You have to say something to your employees about acceptable use, how they can use AI and how they can't. If you stay silent on it, your employees will just assume that they can use what are essentially free tools online
Christine Uri (13:35):
however they like, and they could well be uploading your confidential information into those tools. Not meaning any harm, but just not realizing what that does in terms of confidentiality. So if companies aren't taking on communicating with employees about it, that's a huge risk. And I think it's equally a risk if you've said nothing or if you've said, don't use the tools. At this point, the tools are here, and they do make work easier. Companies need to be bringing in enough company-wide tools that the employees feel like, oh, I can work within this environment. I think of it as like back when Slack first came out: my company didn't have Microsoft Teams set up, and I found out our entire IT team was using free Slack to communicate with each other.
Christine Uri (14:31):
And I had to explain, okay, well, if we wanna use Slack, we can, but we have to get the paid licenses. We can't just use free Slack, because then we have our IP floating around in this unlicensed context. And I think we're at risk of similar things here. So companies need to be proactively communicating with their employees about what tools they can use and how they can use them, doing some of the AI literacy work, and then also making tools available that their employees can use, so it's realistic for their employees to use AI and benefit from it in their jobs.
Dina Cataldo (15:09):
I do wanna make sure that we talk about the risks of confidentiality and putting that into AI, because some of the people listening are familiar with just the blanket rule of, do not put anything confidential inside of AI. But can you speak to why we don't wanna do that, and what AI has really kind of opened a door for in terms of lawyers' work with their clients and the confidentiality that we've basically always promised to our clients?
Christine Uri (15:39):
I mean, at a first layer, there's the confidentiality that you've promised to your clients. You should never have their confidential information outside your control, and if you're uploading it into a free AI tool, it's definitely outside your control. So that's layer number one. But then there's another layer with AI, and this is where you get into the training data considerations. What happens when you upload information into AI is that that information could potentially become part of the training base for that AI. So let's say you upload your contract playbook or your negotiation playbook, and the AI, maybe it knows what industry you're in or what company you're at, learns how you like to negotiate and what your important points are.
Christine Uri (16:38):
And potentially your competitor on the other side goes in like, hey, what are the key negotiation points? It should not pop out an exact copy of whatever you put in, but your information can be used to train the AI in a way that could reveal some of your thinking, maybe not with attribution, but it can still filter up through this training data concept. So to prevent your information being used in a training format, what you wanna do is have, first of all, a paid version of the software that you're using, so that you have some licensing protection. And then within that licensing protection, make sure that you're reading it to ensure that your data, what you upload, is not being used for training. That should be the default standard for anything you're paying for. And if it's not, maybe you have to go in and click a box or set up your license in a certain way, but you wanna make sure that any data you upload is not used for training.
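As a rough illustration of that default standard, here is the pre-signature check Christine describes, sketched as a small Python checklist. The fields and the decision rule are hypothetical, invented for illustration; they don't correspond to any real vendor's API or license terms:

```python
from dataclasses import dataclass

@dataclass
class VendorTerms:
    """Hypothetical summary of an AI vendor's license terms (illustrative fields)."""
    paid_tier: bool
    trains_on_customer_data: bool     # does uploaded data feed model training?
    training_opt_out_available: bool  # can you disable that by contract or setting?
    opt_out_enabled: bool             # have you actually flipped that setting?

def safe_to_upload_confidential_data(t: VendorTerms) -> bool:
    """The default standard from the conversation: pay for a license,
    and confirm your uploads are excluded from training."""
    if not t.paid_tier:
        return False  # free tools leave your data outside your control
    if t.trains_on_customer_data and not (t.training_opt_out_available
                                          and t.opt_out_enabled):
        return False  # uploads could become part of the training base
    return True

# Example: a paid tool that trains by default but offers an opt-out you enabled.
print(safe_to_upload_confidential_data(VendorTerms(
    paid_tier=True, trains_on_customer_data=True,
    training_opt_out_available=True, opt_out_enabled=True)))  # True
```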
Dina Cataldo (17:43):
Ooh, I think that's a really important point, because I have seen that pop up on my ChatGPT, and of course I said no, I don't want it to be used for training. But it's interesting, because even when you have a license, do you trust that that information that's supposed to be confidential doesn't actually get used improperly?
Christine Uri (18:03):
Well, that goes to a whole other layer. As you are looking at AI vendors, you need to really vet the vendors for their trustworthiness. So you should be looking at whether they've been certified under a NIST framework, what kind of rules they're following, what their processes are. It's just like, let's say you were getting an HR system and you're going to be putting your employees' private data in that HR system: you would conduct a level of privacy review to see whether they have the appropriate safeguards in place to ensure the confidentiality of that data. So here, if you're bringing in an AI system, you would also want to be vetting that AI system to see whether they've checked the box on the different AI safeguards to ensure that your data is secure. Some of that overlaps with information security and privacy quite a bit, but it's a question that you would want to engage in before you sign up for a tool.
Dina Cataldo (19:10):
Well, I think that brings up two points. One is, what vendors have you seen that are living up to those standards? Do you have any off the top of your head?
Christine Uri (19:19):
Yeah, I mean, I can't name specific vendors on this. I think you have to go through and look at each vendor case by case for your company, particularly keeping in mind your use case, because your use case for the AI may be very different. If you're in a high-risk use case, if you're using it for medical or employment decisions or anything financial, then your standards and what you're looking at are going to be very different than if you're using it to, you know, write copy for your website. So I don't wanna name specific companies, because the use cases are so different.
Dina Cataldo (20:01):
And I think the other thing that you're bringing up here is that if you've vetted a specific vendor and you've said, okay, this particular vendor is appropriate for use in my firm, in my company, then you have a place to direct everybody. And you and I had kind of talked about this a little bit before this episode: you've gotta have conversations with people in your firm, in your company, so that they understand your reasoning. Because I think so many people might kind of pooh-pooh the idea, like, oh, it's fine, it's fine. Especially, I think, junior attorneys, not because they mean any harm, but because they've grown up with technology. I mean, I didn't have a cell phone until I was 18, but now you get a cell phone when you're two <laugh>, so I think it's a very different culture. And I'm curious, from your understanding of this technology and the people who are using it within firms and companies, what have you seen, or what do you recommend, in terms of communicating with your employees and your colleagues around having a centralized place to do anything involving AI?
Christine Uri (21:30):
Right. Well, there are two key points in there. One is just AI literacy overall, and the second relates to how you use tools once you've vetted them. So, AI literacy overall: that's a term that comes out of the EU AI Act, and it's a requirement under that act if you're doing any kind of business in the EU, but it's also just a really good idea to be integrating into your overall compliance program. What AI literacy is: you would have some kind of course that would help teach people about the tools and how they work. So, just like what I was talking about at the beginning, that they are language prediction models. I think knowing some of that backend really helps employees understand what the limitations may be and then what the risks are, whether it's a bias risk or an IP risk, and getting into what some of the risks are so people understand, oh, this is what we're trying to prevent.
Christine Uri (22:35):
Just like in a privacy training you go through the risks of what could happen if somebody's private information is disclosed to an untrustworthy source, you then go through what is an appropriate use and what is not, and how your use can mitigate or reduce the potential risks of engaging with this technology. So I would recommend that companies create at least a 30-minute AI literacy training for everybody, and then deeper-level trainings for anybody who would have a more complex role with AI, as just part of their training suite. So that's one thing, to create that base level of understanding. The second thing is, like we've been talking about, how to vet tools before you bring them into the company, which is really important. But then even once they get into your company, you need to have an AI policy that sets rules of the road for how those tools can be used.
Christine Uri (23:38):
Just because a tool has been approved to be used in the company doesn't mean it can be used for anything in the company. You might bring in a tool, and it's okay to help, like I said, with developing marketing copy or with pulling external sources together, but that doesn't mean you want to upload client data into it, and that doesn't mean you want to upload personal information into it. So you may have specific systems that you bring in for specific purposes, and you need both the policy and the training to be really clear on what you can use these tools for. And then some tools will be easier. If you have Microsoft Copilot, you know, you're not going to disable that in different Word documents, so that kind of thing can be a little bit less cumbersome than if you bring in a more specialized AI tool. But you need to have both your use cases, so you know when it can be used, and then the technology that serves those use cases, and education around that.
Dina Cataldo (24:43):
You know, this kind of brings up for me the idea that we get so much information from different sources, whether it's Google, ChatGPT, Perplexity, or any other type of AI. It doesn't necessarily mean that we as attorneys, whether we own our own firm or work with a lot of other attorneys or other people in a corporate setting, need to know every single program. But it's beneficial to be having these communications with your employees and your colleagues to find out: where are you seeking information? What programs are you using? What websites are you going to? And to ask them while assuring them that there won't be any repercussions, so that they feel free to talk and share where they're getting this information. Because there are so many programs out there right now, there could be things you're not aware of, simply because there are so many of them. And I'm curious what your thoughts are about that, and how employers or in-house attorneys can begin kind of wrangling all of the information sources.
Christine Uri (26:03):
Yeah, one of the first things you have to do when you're setting up AI governance is to create an AI map, basically. You have to go out in your systems and find out where AI is already being used. Think about this: at this point, if you go out and look at any tech stack, you'll find the vendors have just incorporated AI features that wouldn't necessarily have gone through a procurement process; they would've been an additional piece on a piece of tech that you already use. So you need to go through, identify where those are, and catalog them. And you also need to find out where you might be getting some of the BYO AI, as I call it. I don't know if anybody else calls it that.
Dina Cataldo (26:47):
What does that mean?
Christine Uri (26:48):
Bring your own AI <laugh>. Because we used to have bring your own device, and before that it was bring your own beer, so there's a little evolution of my life right there. So you need to go out and really survey your employees and survey your department heads and find out where AI is already being used. And to your point, when you're talking with employees, you may wanna do it in an anonymous format or something like that, because some employees won't feel comfortable revealing where and how they're using AI. They might be worried about potential repercussions, or they might not want people to know exactly how the sausage is getting made. So you could do that in an anonymous format to help employees be more candid about the tools they're using. And those could be tools that you already have internally, like, you know, Salesforce added this AI widget and we're all using this AI widget now, or it could be an external tool that somebody found.
Christine Uri (27:56):
Taking all of that information, you create a map, so you know where AI currently is in your system. And then once you have the map, you can put that on a standard risk scale, where you have the severity of impact on one side and the likelihood of impact on the other, and you're going for that top right-hand quadrant: what are the highest-risk activities in the system that we have now? Then you're trying to triage and mitigate the highest risks, and so on, because not all things can receive the same amount of effort. So once you get that AI map, you'll really be able to take more of a risk-based approach to your governance efforts.
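A sketch of the severity-by-likelihood triage Christine describes, with an invented inventory; any real scoring rubric and entries would be your own:

```python
# Score each cataloged AI use on severity and likelihood (1 = low, 5 = high),
# then sort so the "top right quadrant" (high severity, high likelihood) gets
# governance attention first. All entries below are illustrative.
ai_inventory = [
    {"use": "HR resume-screening add-on",          "severity": 5, "likelihood": 4},
    {"use": "Marketing copy drafting",             "severity": 2, "likelihood": 5},
    {"use": "CRM widget summarizing client email", "severity": 4, "likelihood": 3},
]

for item in sorted(ai_inventory,
                   key=lambda i: i["severity"] * i["likelihood"], reverse=True):
    score = item["severity"] * item["likelihood"]
    quadrant = ("TOP-RIGHT: mitigate first"
                if item["severity"] >= 4 and item["likelihood"] >= 4
                else "lower priority")
    print(f'{item["use"]}: score {score} ({quadrant})')
```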
Dina Cataldo (28:44):
I think the people listening are probably thinking, wow, that sounds like a lot of work. Do I have time for that? How will I find time for that? And is it really that big of a deal for me to do this? Well, I know what my answer is: if you have a company, or you're working with a company, and you are not managing that kind of information and understanding what your risks are in terms of losing confidentiality, of information being sent in a manner that is not helpful to your positioning, to your company, to your clients, I think it needs to be a priority. But on the other hand, it does sound time consuming. So I'm curious, what do you recommend in terms of the priority when you're talking to other attorneys?
Christine Uri (29:41):
I mean, AI has to be a huge priority in 2025, because, like it or not, it's coming at us really fast, and it's not something we have the luxury of saying, oh, let's get to that in 2026. Good news here, or good-bad news, I guess: executives are really wanting to move forward very quickly on AI rollouts, and they're having budgets specifically allocated for AI. There's a lot of pressure around moving quickly. That creates a problem for in-house teams that are having a hard time getting their hands around the governance. But I think if you can roll the governance, and the resources you need for the governance, into the overall AI adoption plan, so that it's, okay, we're going to be spending money to acquire these tools, we're gonna be spending money to roll them out, and we need to have a piece of that money going into governance as well, that would be a smarter way to go after it than to just try to do everything on your own in your, you know, quote-unquote free time.
Dina Cataldo (30:53):
Yeah. So that brings up the question of how in-house counsel really addresses the pressures that are being put on them. Like, they're watching this happen in the company, they're watching the boards and the executives push out AI, but as an attorney you wanna have the most conservative position, you want to be able to protect information. How do they address the pressures that are currently being put on them?
Christine Uri (31:25):
Yeah, and I think it's the same between companies and running a law firm; you have the same kind of challenges. But for an attorney, there are a few things. If you are rolling out AI, there's almost certainly a committee in charge of this, some kind of cross-functional team that is driving it. You need to get legal a seat at the table on that cross-functional team, so that governance and those considerations are represented. Number two, you need to start creating awareness around the AI risks. So, looking at bias; and I think output risks are very compelling. Looking at, okay, we're going to bring in this AI, it's going to generate some kind of output that we would use in our company. How do we ensure that the output is up to quality and actually meets our needs, so that we're not producing things that won't meet our clients' needs or our business objectives?
Christine Uri (32:26):
So I think that's a great risk to bring up, and copyright and IP, talking about those things. So there's an education around the risks. And I do think this is an area where executives want to move fast, but most people have an intuitive sense that, okay, this is different and there are some risks here. So in some ways it's an easier case to make for AI than for other technologies, that there are some risks here that we need to pay attention to. So I do think it's getting yourself in the right position, so you're on that committee, doing the education around the risks, and then trying to build the safeguards into the existing processes you have. Most likely you already have some kind of procurement process that keeps in mind cybersecurity and privacy; bolt your AI considerations onto those programs that you already have, so that it doesn't become a separate beast unto itself. And you can hopefully keep the good practices that are already there and not slow the train down too much, just integrating it into systems that exist currently.
Dina Cataldo (33:37):
Yeah, that makes a lot of sense, because I think most of the people who are working within a company, or who have their own firm, are aware of the security risks in different areas, so they could just kind of put that together. I'm kind of thinking about the lawyer who is just learning about this, because they've thought, okay, AI is easy: I can type in my ideas, I can get a beautiful brief that's been typed up by AI, and then I can just kind of go in and fill in the blanks. And there are things that they could be missing. I'm kind of thinking about, if I had become a prosecutor in the times that we're in now, I might've thought, okay, well, let me put in what I think my arguments are, or here's what the opposing counsel's arguments are, type me up a brief that addresses these arguments.
Dina Cataldo (34:33):
And speaking to your point, if you're a seasoned attorney, you can kinda look at the arguments and say, that doesn't sound right, I know that's not quite right, I need to look that up. Versus a junior attorney who maybe receives this information from ChatGPT or whatever program they're using, and they're reading it, and it sounds super professional and it sounds like it's probably on the money. And then they just ask it for citations, right? Or maybe they put it into another program, like you were talking about with Perplexity, and they say, can you give me citations for this information? What would you say to that junior attorney who maybe is very trusting of the technology but doesn't yet have the experience to differentiate? Because they're doing it because they're scared they're gonna get things wrong, they're scared to go to their superiors because they don't wanna seem stupid, and they're turning in these work products. I've seen this happen with some of my clients; they'll tell me, my associate's turning in these work products and it's just not right. I'm curious what you would say to that junior attorney.
Christine Uri (35:53):
Yeah, I mean, I think this is a really tricky one. I believe that figuring out how to bring along the next generation, whether it's attorneys or coders or knowledge workers, in a different training environment with these tools is going to be one of the biggest challenges of AI. And I think we need to keep in mind both sides of the coin. Let's say a junior attorney goes and puts particular arguments in and says, how do I counter these? The AI could well come up with great arguments that a junior attorney would not have thought of on their own. So there could be a positive there as well as a potential risk, and the trick is to balance those out. And I do think for attorneys in particular, checking citations to see where things come from is the bottom line.
Christine Uri (36:47):
You have to know where the different statements come from, whether it's an article or a brief or a case or whatever it comes from. You have to go and click on those and do a check to see whether it's correct. And then you still have to think about it critically. So, realizing, okay, this may sound very professional, this may sound very correct, but you have to take a step back and apply a critical thinking lens to see, does this really make sense? And I think it'll be a learning curve for all of us to figure out the best way to partner with AI, so that we get the benefits from it while continuing to engage in our own critical thinking, which makes the outputs that much better.
Dina Cataldo (37:40):
Yeah. And I just can't help but think that flexing that muscle of critical thinking is something that is going to be sacrificed for the next generation using AI, because they're not gonna be in the habit of really thinking through the issues and asking themselves the questions: does this make sense? Instead, they may be trying, again not maliciously, to make more time for themselves so they can do more work by putting these questions into AI. And that brain drain, that inability to think critically, is going to impact how they progress within their firm, how they progress in their career, and their ability to really make arguments on their feet and really understand the law in the long term. You probably did the same thing, right? You get a case, you get a motion, and then you're going through the motion, you're figuring out, okay, what are the arguments of opposing counsel?
Dina Cataldo (38:45):
And what is the case law that they're citing? Let me read those cases and really understand them. And oh, that analogy is completely inapposite to the case that we're actually talking about here. Being able to make those arguments on your own brings so much knowledge into our brains and builds our ability to argue, to learn how to argue things on our feet if we're in court, because we've done this so many times. It's a practice; it becomes this muscle that we've exercised. But with AI, we're not necessarily working those same muscles anymore, and we're starting to maybe lose that critical thinking for the next generation of lawyers.
Christine Uri (39:31):
You know, I definitely think that's the fear, certainly for some of us who've come up that way. Like, I remember doing due diligence, and you just read thousands of contracts, and that's how you would learn how to write a contract. Now I think a lot of due diligence will be taken over by AI. So there is a question of, okay, how do we train up attorneys who aren't just absorbing information by pure deep exposure like that? I think those are the immediate questions that come up, and they come up for more senior people who have been trained a certain way. What we don't know yet is what happens when you have that kind of savings of time, or that kind of tool. Is there some additional way that people develop, or a different way they develop, or tools that help them specifically develop their critical thinking skills?
Christine Uri (40:28):
Maybe separate from, in addition to, or maybe even better than the ways we did it when we were getting trained. We simply don't know that yet. You know, at one time somebody would have said, okay, learning how to do extensive math on your own was really important, because we didn't have calculators. Now we have calculators. I still think people need to be able to add, subtract, multiply, and divide, and have all of those pieces, but there is a point at which, okay, it just makes sense to put that problem through a calculator, and it kind of changes how you work fundamentally. So it could be that some of the skills, and some of the ways that we were trained, we let go of, and something else replaces them, and we are able to go on functioning just fine.
Dina Cataldo (41:35):
Well, I gotta say something about the calculator, 'cause the calculator is fundamentally different from AI. I mean, AI is being trained to think. It's being trained to have its own ideas. It's not even just a resource; it's being trained to do the critical thinking for us, right? And I just feel like with a calculator, it's like, okay, one plus one is always gonna be two.
Christine Uri (42:02):
That's fair, maybe that wasn't the best example. But there have been technologies over time that have come in and changed how we work fundamentally, and we've adjusted over time, and what happens is we find different ways to challenge ourselves. Now, I do see a fundamental risk of humans just getting dumber <laugh>. I think that's a risk that's out there if we let ourselves become too dependent and step back and don't take the time to challenge ourselves, or think critically, or think about what we can add. That's definitely a risk, but we haven't really seen how it plays out yet, and I don't think we will for a number of years.
Dina Cataldo (42:46):
One of the things I was thinking about as you were talking is, well, what have I seen lawyers do with extra time on their calendar? And what they do is they just stuff more work in. It's not because they want to be overwhelmed; it's because they think they should be filling every waking moment during the day with work, because they need to hit billables, they need to make sure they're getting whatever work done. And so when you're saying, okay, well, it could actually speed things up, it could make things easier: yes, it could. And I can also see lawyers just not taking that extra time for themselves, not even leaving a half hour early. Instead it's, what else can I do? What else can I do? And stuffing it into their calendar.
Christine Uri (43:34):
It definitely doesn't mean we have more free time, right? We work however much we want to work, or feel like we need to work. But it could also mean that we're potentially doing more higher-value work, or having more time for critical thinking. Checking for grammatical errors, or writing the exact flow of a paragraph, maybe isn't the best use of our time, and we could end up having our time spent on even higher-function tasks. But I'm into the area of speculation here; I have no idea. I just want to go into it not assuming that our skills will get duller. We just don't know yet. It's a risk, but not a certainty.
Dina Cataldo (44:31):
Yeah. You know, in terms of AI governance, what are some things that you think we need to be touching on that we haven't talked about yet?
Christine Uri (44:39):
Yeah, I think one thing we haven't talked about is the regulatory environment, and lawyers really need to be keeping the regulatory environment in mind. The first comprehensive regulations came out of Europe: it's the EU AI Act, and it's kind of designed like the GDPR, where it's supposed to have broad extraterritorial impact. So if you have a business that is in any way operating outside the US, you'll want to take a look at the EU AI Act to see if it applies to you, or if it might apply to you in the future. The EU AI Act also just provides some good governance guidelines. It takes a risk-based approach to saying what's a high-risk, medium-risk, or low-risk use of technology for you to consider.
Christine Uri (45:32):
So definitely pay attention to that. And then watch the states. We're unlikely in this environment to see a broad federal regulation of AI; however, many states are moving forward. Colorado has adopted an AI regulation regarding high-risk activities. I know Tennessee has adopted a regulation related to deepfakes. California has adopted a few different AI regulations, and there are hundreds of AI regulations on the books in different state legislatures across the country. So we're gonna see a real patchwork emerge. Now, the large tech companies are trying to push back against this patchwork. Just a couple weeks ago they filed some comments, or suggestions, with the presidential administration, suggesting that the federal government do something to preempt state activity in this area. That definitely remains to be seen, and to do that you would need an act of Congress, although whether the president believes you need an act of Congress or not is a fair question out there. But we really have to keep an eye on the regulations that are moving through, because it's an area that's evolving very fast, and it could be very scattered.
Dina Cataldo (47:17):
Yeah. And I think any administration that we have is going to be using the powers of executive orders in every aspect. We've seen that in every administration; it seems to be something they're doing more and more of. And so it does create this environment where it's unpredictable in terms of the regulations. Because we're used to, at least, okay, I'm a little older. I don't know how old you are, Christine; I'm about 45. And so I have been in this environment of, oh, you're supposed to go to Congress, and then there's a rule, and then it goes up to the Supreme Court, and dah, dah, dah. That doesn't necessarily play out the same way anymore with executive orders, and these regulations, because they're so scattered, really leave this arena of AI in a fuzzy no man's land. So what exactly should lawyers be paying attention to, to ensure that they're in compliance? Is there anything that they should really be thinking about?
Christine Uri (48:23):
Yeah, I don't even know where to start with that. <Laugh>
Dina Cataldo (48:26):
<Laugh>, that's a big question.
Christine Uri (48:28):
Yeah. I have a lot of thoughts and feelings about the executive order proliferation. To be fair, although it's hit a new peak in the last couple of months, things have been trending that way for the last few administrations, and unfortunately Congress has gotten more and more stuck in gridlock. I truly hope that the pendulum swings back the other direction. I do think Congress should be the lawmakers within the country, and it's the executive's job to be enforcing the laws that Congress makes, not making laws. But certainly the current environment, with the executive orders and these scattershot state regulations, is not changing in the near future. And in that way, there aren't a lot of opportunities for lawyers to get a clear picture from a regulatory standpoint, other than knowing that you're going to have to really be mindful of this area. I guess one safe point you could look to is following the industry standards. I mentioned the NIST standards earlier, so you could look at the NIST AI framework, and I mentioned the EU AI Act. One strategy you could take, while you're watching the different regulations, is to employ some of these best practices even if they're not necessarily required in your area, on the theory that it will better prepare you for regulations when and if they come.
Dina Cataldo (50:16):
Yeah, I was just thinking, anybody who is actually interested in knowing about regulation should probably make a Google Alert that says "artificial intelligence regulations" and put your state in there. If that's something you really wanna follow, that might be of benefit. I guess my question is, and this is just a layman's opinion, I feel like AI could be pretty darn safe to use if you are doing review, you're ensuring that you're not putting confidential information into it, and you're actually taking that critical eye to the work. But, and again, this is coming from a position of not really being familiar with any of these regulations, because I know they're just kind of proliferating, and it's doing that because this is in the name of progress, right?
Dina Cataldo (51:14):
The tech companies' progress; they really wanted to make artificial intelligence. But what are these regulations that we should be thinking about? What are they regulating? And you may not even have the answer to this, so feel free to back away from the question. But if I'm a business owner, let's say I'm a firm owner, and I wanna use AI in my firm, or I see my associates using AI in the firm and I'm teaching them how to use it properly, what do I need to be concerned about? Or do I need to be concerned about anything?
Christine Uri (51:48):
Yeah, I mean, in terms of the regulations, like I said, there are hundreds, so I'm not going to be able to summarize them here. But I would really look at the use cases and think about it that way. Let's say your HR team wants to adopt an AI system to scan applications. All right, for an attorney, that should come up, and you should be thinking immediately: okay, is there a bias risk there? And that's been a big issue, bias risks in hiring, and how do you navigate that particular risk? Because whether it's AI or your HR team scanning the resumes, you're still on the hook for any bias that comes out in your process. So I think it's looking at it from particular use cases.
Christine Uri (52:45):
Financial decisions, for example: if you're using AI to automatically make financial decisions that could have impacts on people, without any kind of human review, that's a higher-risk activity. That's the kind of thing that was regulated in Colorado. If you're using AI to make some kind of health recommendation without human oversight, that could be very dangerous. So you really want to look at your use case and do a gut check, kind of back to that risk map: how high is the risk, how likely is it? Really looking at, okay, what is the use case here, and how risky is it? There can be issues in law enforcement as well. Think about it: one of the things that's absolutely prohibited under the EU AI Act is using AI to predict who might be likely to commit a crime. That's something where, if you're involved in the criminal justice system, you can see why it could be a challenge. So if you're using AI in a context where it's really having these potential personal harms to people, that's where you need to double check from an ethics standpoint, really make sure that it's being used with all of the appropriate safeguards, and be checking from a regulatory perspective.
Dina Cataldo (54:15):
That is really fascinating, because as a former criminal prosecutor, we'd go into court and, basically, what we were trained to do is say, okay, well, a predictor of future criminal activity is going to be based on their past, which of course, as a coach, I don't do <laugh>, right? It's just a very different mindset. It's based on their past, it's based on the current activity that they're being accused of. And so in order to really make a decision about what their punishment is gonna be, we have to look at all of those factors: remorse, injuries to a victim, all of those things. But I can imagine an agency putting in that information and then saying, hey, come out with a number, how long should I be saying this person should be in jail for? Right? That's really delegating that human decision, and not allowing a human to really have that interaction with opposing counsel, however that usually works day to day. And what you're saying here is that that's a big risk: there are different departments within an organization that could be delegating those kinds of decisions, you may not even know about it, and that could be an ethics or a bias problem that is then gonna have repercussions legally for the organization.
Christine Uri (55:40):
Yes. Oh, that would just be prohibited under the EU AI Act, and I think that's indicative of it being a high-risk activity. Now, it doesn't mean that you can't do exactly what you're describing, look at somebody's file and, based on past behavior, take that into account in sentencing, along with what their likelihood is of recommitting. But it means you can't delegate that task to AI to come up with it. That has to be a human decision, under the EU AI Act again. Now, I don't know whether there are any particular regulations in the US that have taken that on, but it seems to me not unlikely. When you're trusting AI to make decisions about criminal punishment or length of time in jail, to me that seems like it would be a pretty high-risk use case.
Dina Cataldo (56:34):
Yeah. And I wanna respect your time here; there's so much we could be talking about on this. But I think it is a fair practice, even if there's not a current regulation on the books, to look at those decisions department by department: where are decisions being delegated to AI that really could have implications for bias, for ethics, for all of those things that can be regulated? And there could be a lookback factor where your company or your firm might then be on the hook.
Christine Uri (57:12):
I mean, I'd put that into the "just a really good idea" category <laugh>. Think about it from a brand perspective: would you want that particular use of AI to be published in the newspaper? Would it be something that would be embarrassing to you? Is it something that would be harmful, such that maybe there isn't a regulation on the books, but you could see it being individually harmful, and somebody comes up with a creative litigation case? So regardless of whether it's specifically prohibited by regulation, I think we need to be applying an ethical lens, and an "is this the right thing to do" lens, to our activities, just like we would if it were humans acting in a particular way.
Dina Cataldo (57:57):
Yeah. Oh my gosh, this has been such a great conversation, so fascinating. Can you share with our listeners where to find you and what you have to offer around AI, if they wanna learn more about this from you? Tell us all the things, and I'll make sure that I link to it in the show notes.
Christine Uri (58:16):
Of course. The best place to find me is on LinkedIn: Christine Uri, U-R-I, on LinkedIn. I have a newsletter that I publish regularly called AI in Order. If you go to my profile, it'll be right in the middle of my featured section, so you can sign up for the newsletter there. What I really do with that is try to make things very practical and very accessible for people who aren't living with AI governance every day, really the things you would need to know from an operational perspective if you're dealing with this in your firm or in your company. You can also email me if you have questions. Very easy: christine at christineuri.com. So feel free to reach out to me anytime, and I'm happy to exchange ideas with you.
Dina Cataldo (59:04):
I could talk to you about this all day long. I feel like there's just so much more we could cover. So I really appreciate you taking the time to talk to all of us about AI governance.
Christine Uri (59:15):
It's been my pleasure. Thank you for having me on the show, Dina. Thanks.