URMIA Matters

Generative AI: What Should Risk Managers Consider?

August 23, 2023 | URMIA - Higher Education Risk Management & Insurance | Season 4, Episode 20

Join host Julie Groves as she interviews guests Jack Bernard, Andrea Stagg, and Joe Storch. Together they explore the implications of Generative AI and ChatGPT for higher education and what risk managers need to concern themselves with. They share their insights on how these technologies can enhance learning outcomes and foster collaboration. They also discuss the potential risks and ethical dilemmas that arise from using generative models to produce content and interact with students. Finally, they offer some practical advice for higher ed risk managers on how to prepare for and mitigate the impact of Generative AI and ChatGPT in their institutions. But ultimately, our guests tell us why Generative AI shouldn’t keep risk managers up at night and how it can be a labor-saving advantage to us all.

URMIA Members can find the show notes in the
URMIA Library [Login Required]

Connect with URMIA & share URMIA with your network
-Share/tag on social media @urmianetwork
-Not a member? Join -> www.urmia.org/join
-Email | contactus@urmia.org

Give URMIA Matters a boost:
-Give the podcast a 5-star rating
-Share the podcast - click that button!
-Follow on your podcast platform - don't miss an episode!

Thanks for listening to URMIA Matters!

Jenny Whittington: Hey there. Thanks for tuning in to URMIA Matters, a podcast about higher education, risk management, and insurance. Let's get to it.

Julie Groves: Hi everyone. I'm Julie Groves, Director of Risk Services at Wake Forest University and the current URMIA President. I'll be your host for this episode of URMIA Matters. Today, we're going to be having an interesting discussion about generative AI and more specifically, whether this new technology presents an opportunity to improve or threaten the higher education landscape. So, get your popcorn out. Joining me today are Joe Storch, Senior Director of Compliance and Innovation Solutions at Grand River Solutions; Andrea Stagg, Director of Consulting at Grand River Solutions; and Jack Bernard, Associate General Counsel and Faculty Member at the University of Michigan. So welcome to the podcast today, you all. Thank you so much for being here. Before we start, why don't you each tell us a little bit about yourself? So, Jack, I'll start with you.

Jack Bernard: Great. Hi, I'm Jack Bernard from the University of Michigan and I'm an attorney in the General Counsel's office. And I teach in a bunch of our schools and colleges. I've been working in the Academy for over 30 years and it's really a privilege to be here with Joe and Andrea.

Julie Groves: And Andrea, why don't you tell us about yourself?

Andrea Stagg: Sure. I almost wasn't going to, but I will. So, I'm a long-time in-house counsel for colleges and universities, public and private, kind of dotting the East Coast. And then last year in 2022, I came over to Grand River Solutions to do some consulting on higher education compliance issues, mostly around civil rights for, you know, campuses all over the country.

Julie Groves: And Joe, it's great to have you back on URMIA Matters again. Good to see you. I think some of the folks know about you already, but why don't you tell us a little bit about yourself?

Joe Storch: Yeah, it's great to be back and great to be here with Andrea and with Jack. I spent about a decade and a half doing in-house counsel work and now I split my time between working on consulting, working with Andrea and a number of great colleagues, and thinking about innovation to reduce risk and to reduce harm that our students experience. And I'm a longtime fan and grateful participant in so many URMIA programs and conferences over the years.

Julie Groves: Well, and we do appreciate your involvement, Joe. You are one of our go-to folks. So, thank you for being here today. So why don't we start with a little bit of an explanation of this technology? What is generative AI exactly and how does it work? So, Jack, do you want to take a stab at that?

Jack Bernard: So artificial intelligence has been around for quite some time. Essentially, it's software, and an approach to software, that enables machine learning, that enables computers to gather information and adapt as they go forward. Generative AI is AI that enables the computer to, for lack of a better word, express ideas. Maybe you don't want to think of them as quite conscious yet, but they're ideas from the software, from the data that the software gets to interact with. And so, it pushes out information, and it often sounds like or looks like what a human might produce. And I think that this can be rather alarming for people, but these technologies have been burgeoning for some time now, and I think now the general public has access to interacting with a machine and algorithms in ways that feel more like interacting with a person.

Julie Groves: So, I mean, you know, this has certainly become a huge topic and I think it's something that we're just learning more and more about all the time. And so, Andrea, can you give us some positive real-world examples of this generative AI?

Andrea Stagg: So, I mean, I think there are a lot of good examples of the non-scary side of AI, some of them, you know, out of my wheelhouse as far as how it's being used, in science and, you know, for discoveries. But I can tell you about my own use that's not scary and is useful. I use it a lot for brainstorming. I use it a lot for fun. I use it as a starting point when I'm just sort of having a block. It's helpful when, you know, maybe you need a certain image for a presentation you're giving or something like that. You can do a Google search that might yield the image you're looking for, or you can ask generative AI to create the image exactly as you describe it, and then refine it according to your instructions. So, you know, I have no doubt that people are using AI in ways that are actually saving lives, thinking about diseases and, you know, cures for them, and all kinds of better things than I use it for.

But certainly, there are a lot of non-scary uses. I think about how we already use AI all the time. You know, people say, well, I've never been to one of these generative AI websites, I've never logged in, I've never used it. But if you're on campus and you use Gmail, whenever you respond to an e-mail, as long as a certain toggle in your settings is on, you probably get suggestions like "thanks" or "that looks good" or "let's meet then" or "that works for me," right? And then sometimes, as you're typing, it suggests finishing the sentence for you. Like, "please don't hesitate to call with any questions," or something like that, right? I type "please don't hesitate" and it will fill in the rest. I just press Tab and there it is. And you know, I like that usage. It's not scary. It is helpful. It does save me time, and I think it helps people be a little more polite.

Julie Groves: Yeah, I remember I end emails a lot of times by saying "thanks a bunch," and the first time that it filled that in for me, I kind of freaked out, because I was like, oh my gosh, Big Brother is really watching. So yeah, that's really helpful, Andrea, to think about people who say they've never used it but probably don't realize that they have. So, Joe, can you give us some less-than-positive examples of this type of AI? I mean, don't scare us too much, but you know, what are some of the less-than-positive uses?

Joe Storch: I mean, I think the documentary series The Terminator gives a good sense of what we might be facing. No, it's really hard to know, because it is so new. One of the things that I think our URMIA colleagues, and certainly the faculty that they're working with, are thinking about is folks creating content with AI that we expect them, either through their jobs or through their role as students, to create out of their own brains. So I could go to generative AI and say, write a history paper about this battle, this war, this event, this election, et cetera. And generative AI might do a pretty decent job, and maybe I'll edit that and maybe I won't. But I think most people would argue that that is different from me sitting down with a blank document and starting to type, looking at primary sources and secondary sources. So there is a big question, and one of the things that I think is scaring our higher education colleagues is whether generative AI is giving folks the opportunity to show work that they haven't actually done.

There are all sorts of other scary things. One is that generative AI just makes stuff up. You can ask it something, and it feels so authoritative. It's coming from the Internet. It's coming from the future. But it's just making stuff up. There is a recent case where a lawyer asked generative AI, ChatGPT specifically, to cite a number of cases in a civil case involving an injury from an airline, and ChatGPT very helpfully gave them the exact citations and the exact language they needed from cases that didn't exist, from judges that didn't exist or judges from other circuits. They submitted it, and the sanctions decision, I should say, is quite something to read. So, when generative AI doesn't know something, it doesn't say, oh, so sorry, I just have no idea. It sometimes makes things up. So, to the extent that we are relying on it for important things, we may be relying on false information, or stuff that's made up, or things that are inaccurate and therefore things that can be harmful.

Jack Bernard: If I can add, I'll just say it's new for lots of people, interacting with software or machines in this way or at this depth, and it feels a lot like interacting with a person. And so, I think people can be thrown off, right? Our sensibilities may be challenged by the fact that this is slightly different than what we're accustomed to. And so, I think people are worried about that. I know that I've certainly heard from lots of people around my university and other universities worried that the sky is falling at this moment, and they're worried because it's disruptive technology, that it's going to change how we think about what we do in the classroom, how we interact with each other, how we do our work. It's also useful, I would say, to think about generative AI, or AI in general, as labor-saving, like most of our other inventions, and learning how to use labor-saving technologies requires a period of adaptation. So, it isn't surprising that people are a little anxious. I would say they're probably a little more anxious than they need to be, but that doesn't mean there won't be disruption and change, and, you know, maybe Joe's description of The Terminator is where we'll end up and I'll eat my words. But I think for the most part, we're still in the nascent stages of how we will play with these technologies.

Julie Groves: Andrea, did you have something to add? 

Andrea Stagg: I think what we should name, because it's real, is the fear of people's jobs becoming obsolete. Right? And so, when Jack says it's labor-saving, for some people their labor, some of it, may be completely replaced, outsourced in a way. And there's a real fear there. You know, people will say, oh, robots will replace us all. Maybe, maybe not. Right? As someone with a legal background, I can say there's not going to be generative AI writing legal briefs anytime soon; we saw the court was not a fan of that. But we've also gradually seen this technology taking certain work out already, right? When I use Microsoft Word and its editor tells me I'm at 93% and makes some grammar suggestions to improve my writing, I am probably less likely to then run that by another person and take their time to read my work, when Microsoft Word is showing me grammar suggestions and telling me I have too many commas, which is probably true, but you need to pause, right? You have to pause. So, I think that there's a real fear there, and that's the less positive side. It's not scary in a the-world-is-ending way, but if you're someone whose job is going to be impacted, your world is ending, right? That is a big impact on you. I truly believe that the future of work is AI plus humans, so we need to figure out how to work with it and how to understand it so that we can continue to, you know, support ourselves.

Julie Groves: Joe, do you want to add something?

Joe Storch: Yeah. So it may be a bit of death by a thousand cuts for folks, where instead of The Terminator, where at some point some, you know, robot takes over the world and starts, you know, killing people or the like, it may be just slow harm and slow harm and slow harm. And what happens in those cases is the haves are able to rise above the level of the water, and the have-nots find themselves slowly being drowned by the water. And as we think about higher education, we know that there are institutions that will have folks thinking about this, folks working on this, access to the latest technology. You can purchase generative AI that's better than the generative AI that's free, that's better than other options out there. And what I worry about, and what I think about working in equity, is: will generative AI be the equalizer for institutions of higher education, where the least resourced are able to use it to provide education and equity and programming that is equivalent to the most resourced? Or will the delta between the most resourced and the least resourced just get bigger and bigger and bigger? Because as the tide rises, some boats rise and some boats are beneath.

Julie Groves: So, you know, while we're sort of on that subject, what significant changes do you think we'll see in the higher ed landscape over the next few years because of AI? I mean, Andrea mentioned, you know, shifts in employee work, but, Joe, what other things do you think we might see?

Joe Storch: What other changes might we see? I think for those who are doing complicated work in insurance and complicated work in risk management, tools may come online that automate some of it or provide a head start. For folks who are coming into risk management fresh from education, there is so much to learn, so much language even to learn; generative AI can be helpful with that. You all, Julie, you and your colleagues, you create a lot of paperwork, right? And I'm not saying that in a bad way, but you do. How much of that paperwork can be automated? How much of that work needs to be manual, and how much time can we save, how much better a job can we do, how much more research can we apply, how much better can we understand what the risks are, because we have that assistance? I would not be surprised if, within the next five years, folks at URMIA are talking about artificial intelligence solutions that can help with assessing risks and applying for insurance, et cetera. I think we might see other things across different fields. And the interesting thing about this is it's going to be very separate and distinct across different offices and different places at the institution, rather than, you know, Microsoft Word, which, you know, brought everyone up to the same level, or Excel or Gmail or the calculator or any of these other things, where everyone, you know, basically got the same changes and additions at the same time. Here we're going to see things that are very, very different across offices.

Jack Bernard: I'll just add something to that, and that is that I think, as Andrea said, we'll see a nice combination of humans and generative AI working together. That is, there are different sets of skills. Generative AI is capable of responding to things that humans won't even notice. You know, over thousands or tens of thousands of individual data points, the software will be able to do a better job of just constantly paying attention to the data it receives, where a human would get tired, and has other needs, and wants to be entertained, and wants to go on vacation, and all that kind of thing. So, I think this collaboration, these new tools, will help us notice things that we might not otherwise have noticed. I don't think that at all eradicates what humans bring to the circumstances. It just enhances what we're able to do. That will come with complications for post-secondary institutions. How we teach, how we interact with our students, how we interact with our faculty and staff, how we interact with the general public: these things will all be influenced by new technologies, in the same way that the automobile affected how we do all of those things. Where we locate our institutions is a function of how much we depend on automobile technology, and as we've migrated to the Internet, well, now people can go to college without ever having set foot on a college campus. That's an extraordinary change. Now, maybe a writing tutor or a math tutor won't be a person. It'll be a tireless, constantly-paying-attention piece of software. I don't know what it will all look like. I mean, none of us are prognosticators here; we're just looking at the world around us and trying to guess. But I think there are tremendous opportunities, even though this will be an adjustment, and no human likes adjustment. We don't like change. We like things to be comfortable and reliable and predictable. And I think with these new technologies, we're going to find new opportunities, and that's going to be disruptive.

Julie Groves: So, you touched a little bit on students' ability to have, you know, a 24/7 tutor. So, if you think about how AI could affect student work, I mean, Andrea, do you have any thoughts about student work going forward and AI, and how those things, you know, may blend together?

Andrea Stagg: I mean, academic dishonesty is academic dishonesty, whether you use a robot or your roommate or a parent, you know. So it just doesn't have a big impact for me in that way. Like, when you really think about it, what's the difference? One of them's cheaper, right? Probably. I hope you pay your roommate if they write your paper for you. For generative AI, I think, you know, we're going to have tools that will detect if you used AI. And you could always go back to blue books. You can use the law school model, where you're in a locked-down environment for an in-person exam, or an exam in certain software where you're locked out of the rest of your computer. Of course, you could have another device, though if you're in person it's unlikely you'll be able to use it, so there are always ways to lock it down. But I think if we're preparing students for the future, and we think the future is human plus robot, maybe it's time for us to update how we assess student work, or how we assign student work, and how we assess students' progress and proficiency, right? What is proficiency, and is writing a paper proficiency? What does that really show, especially when, you know, people aren't typically writing papers for their career, right? And if they are, they might be replaced by robots.

Julie Groves: Oh, that's a lot. Because, you know, I was an English major. I wrote a lot of papers. So to have a robot do all that for me? That would have been great.

Jack Bernard: It might free you up. I mean, when you think about that, right, it might free you up to do a different kind of thinking. I mean, remember, everybody's walking around with these little rectangles, and we pull them out and they're constantly distracting us, and yet we have instant access to all kinds of facts that we wouldn't have had before. And that's not to say that there aren't some fields and some people who need to know an abundance of facts. But for most of us, we could spend more time focusing on skills rather than mastery of the facts relevant to a field. For instance, take history. If we spend more time teaching students how to think critically rather than just absorbing all of these facts about a particular time, we would give them options to be better citizens in society. That is, they'd be able to better test, with their own thinking and their own skepticism, what it is they're hearing from others. Taking that kind of approach, I think generative AI gives us an opportunity to reframe how we're going to engage with our students, faculty, staff, and society in general.

Joe Storch: When I think specifically about our URMIA members, I can't think of a single one who doesn't have a long list of things that they would get to if they didn't have to do all the things they have to do. And so, when we think about enterprise risk management, and we think about holistic risk management, and we think about moving to the next level: what if we were able to automate or templatize or use generative AI to free up the risk manager's time? Well, would we replace risk managers? Maybe, but more likely we would say, actually, go out and pound the pavement and go see additional risks that the computer will never be able to find, and have more conversations and read more and think more and do more analysis. That's not possible when, as I find in conversations, folks are just trying to keep up with their e-mail inboxes, right? And at this time of year, or maybe we just ended a time of year, for risk managers and, you know, insurance folks, it was all they could do just to keep up. If we took that off their plate, how much of that time could they use for real analysis? And then we're actually making real strides toward making things safer, and lowering our insurance costs and lowering claim costs and doing all the other things that folks want to do.

Julie Groves: So, Joe, while we're talking about risk managers, because a lot of them will be listening to this podcast: you've pointed out some great potential aspects of AI for risk managers. Are there things about AI that might keep risk managers up at night?

Joe Storch: One of the things I think should keep risk managers up at night, especially in recent years as URMIA has made this commitment toward equity, is that generative AI is generally a reflection of us. AI doesn't make things up from nowhere; it reads a lot, and it makes assumptions based on what it sees a lot, and we as a people are imperfect. So generative AI doesn't necessarily smooth those flaws; it can accentuate them. Andrea and I did an experiment where we used ChatGPT to write some first drafts of documents under the Clery Act, timely warnings and emergency notifications. And generative AI did a pretty good job, because it has read so many different timely warnings and emergency notifications. But then, in a robbery notification, without being prompted, it added that the suspect was wearing a hoodie. That's something generative AI pulled from reading who knows how much. Now, we didn't say that the suspect was wearing a hoodie. Why did it add that? Of course, "hoodie" has a connotation, and that's literally the language that was used. It has a connotation; there are racial elements to it. It added information that we didn't ask it to, which could cause folks to have a very different view of the document. So with generative AI, what scares me most is not necessarily the ones and zeros; it's that the ones and zeros reflect imperfect human beings and can accentuate and add to that. So, when we think about its uses in risk management and insurance, we have to recognize that all of the flaws of humanity, all of the things that we do that are completely inconsistent and not efficient and not smart and not helpful and maybe harmful: generative AI is gonna pick up all of that, and the more it sees it, the more it's gonna accentuate it. And that is something we have to be really careful with as we use it for this.

Julie Groves: Jack, did you have something you wanted to add?

Jack Bernard: I don't know that anything should keep you up at night. I really don't think so, but that doesn't mean we shouldn't think deeply about it, and about the places where mistakes are more likely to happen. So, one context that strikes me is privacy. Ordinarily, when people interact with machines or their software, they're accustomed to just thinking it's staying in-house. When you're writing something in your Word document, you're not imagining that Microsoft is getting access to that information. But when you work with generative AI, say, for example, ChatGPT, and you provide information, maybe detailed information, to help you craft a letter, that information is being stored by a for-profit company, and maybe even being incorporated into other analyses that the generative AI does. And so, I think a place where we're going to have to develop new habits is, when we're working with generative AI, not disclosing things that we aren't already permitted to disclose. And I think that that will be hard for people. It's a change of thinking, and those kinds of problems are places where there could be collisions or mistakes that, you know, result from our interacting with the technologies, and we'll have to develop new habits. I generally don't think we're going to need a whole slew of new campus policies, but we might need to adjust our policies to remind people, with just a few words, that they're still bound by their privacy obligations, for instance, in interacting with these technologies.

Julie Groves: Andrea? 

Andrea Stagg: And nothing is as private as people think, right, Jack? I mean, if you're involved in, you know, disciplinary cases at your institution, you know the story: well, it was a disappearing message, but this person took a screenshot and texted it to this other person or posted it, right? Well, what you search for on your phone is your own private business, until your search history is subpoenaed, right? In the beginning, when we were searching Google, we thought, oh, you can look for anything and no one will know. And then all of a sudden, you know, the cops are coming to your house because you've been searching for something terrifying that, when it adds up, seems like you're plotting something. So nothing's as private as people think it is. And then there's always something new to correct for that: incognito mode, vanishing messages, whatever. But there's always something; nothing's private. If anyone thinks that whatever they're doing right now is private, they need to take a look and say, actually, it's probably not. It's probably not to begin with, even just the paper on your desk, if you live with others or give anyone else access to your home, right? In which case, then you've got something else coming to you.

Julie Groves: So, Jack, as a faculty member, can you speak to the faculty who think that, you know, AI represents the end times for the Academy and their classes, because life as we know it is not going to be the same?

Jack Bernard: Sure. There are definitely a lot of faculty who are feeling that way, who are feeling as if what they know, what's tried and true, is challenged by this technology, that it will undermine their ability to work with their students and for students to learn. I think they rightly anticipate that students will use these technologies, because they're labor-saving. And if you could spend less time doing your work, wouldn't you want to do that? And I think there's a high percentage of students who will, and who are already availing themselves of these technologies to help them do their work. For some students, this creates new learning opportunities; for other students, it just gets the work done. And I think faculty are deeply concerned about this. So, it's going to require some work to adjust, in the same way, call it a weak analogy, but it's an analogy nonetheless, that we had to respond to the omnipresence of calculators. When I took standardized tests to go to college back in the Dark Ages, they weren't allowing us to bring calculators in. When my children took standardized tests, they were encouraged to bring in a calculator to take the test. So, there was an adjustment in expectations over time.

The pain point is the transition: how we're going to transition and re-envision how it is that we help our students to learn. I think for many institutions, but certainly not all, we take students who are on a certain trajectory, and they go through our institutions without getting all that much from the institution. Yes, they get to put the name of their institution on the diploma that they hang in their office, and that name carries a tremendous amount, or even just that they have the diploma. But really, they were already on that trajectory. I think the era of generative AI is going to inspire us to add more, so that even if they never hung up their diploma, they'd have been better for having gone to college, better than they were before. And hopefully, ten years from now, we'll be using these technologies to sharpen the kind of work that we're doing with our students. But that transition time is going to be very hard. Faculty members have worked tremendously hard to create a rubric for education, and we're going to have to revisit that. I think this will be something that's uncomfortable for lots of people.

Julie Groves: And so, Andrea, you kind of touched on the whole privacy thing and how, you know, privacy is a huge concern, but as you've rightly pointed out, there really isn't a lot that's private anymore. Do you think there are any other types of risks in higher ed that are sort of overblown, that aren't really that big of a deal? Sort of like you mentioned with privacy: everybody's concerned about it, but when you think about it, there's not a lot that's really private.

Andrea Stagg: I mean, sometimes I liken this, especially in the student space and academic integrity, to when we have a smoking ban, right? OK, there's no smoking, and we talk about smoke and healthy air, whatever it is. Suddenly we have e-cigarettes and vaping, and everyone's like, well, we need to have a new policy against e-cigarettes. Well, I think it counts as smoking. Maybe you just need to amend your smoke-free policy and say it includes e-cigarettes, it includes vaping, which a lot of people did. But people were asking, how are we going to square this with our smoke-free campus, because there's no smoke? We don't need to panic. It's already there, right? So, I think the same thing about academic integrity, right? We're already here. We already have privacy concerns. What do we say? Dance like no one's watching; write an e-mail like everyone's gonna read it, right? So, we already have this concern, and it just continues. I think that we're going to find incredible efficiency, and maybe I'm just being optimistic, but I think about what people do now when they want to write a new policy. They say, well, who are our peer schools? And they look around and say, OK, these are our peer schools. Do they have something like that, and what does that look like, and which one do I like the most? And then I kind of make a Frankenpolicy of all the pieces that I like the most, that make sense with our values, in our template. What if you could just ask AI to write it for you in the style of your policies, which are all on your website, right? It's already read them.

It just is so fast, and then you review it, and you fix it, and you tweak it. It's just so fast. So I'm really hoping that people can find helpful uses. But what we can't have them do is pour in, you know, personal and confidential information, FERPA-protected information, HIPAA-protected information, and employment information that's protected from disclosure, to create work. So, I think there will need to be guidelines around this. You know, it's not new rules; it's just guidelines on how to use these tools within the rules that already exist, like the data privacy policy you already have. What does that mean, and how do we use these tools? I think people won't understand, you know, how these new tools can store data, and I don't expect people to be reading all the fine print in the terms of service. So it's always helpful, whenever a new tool is rolled out to your community, whether generally or, you know, specifically by the institution, to let them know what it's for and what it's not, what to include and what not to, and how to make it useful for them. I can imagine, you know, sessions that institutions will offer, like: how can AI make your job easier? Let's figure that out. It's very tailored, a tailored solution for a particular person's job description and expectations. So, you know, I think that we need to provide that support to people, the same way we would if we switched, you know, platforms and were rolling out a new, you know, a new portal. We want to make sure everyone knows how to use it.

Julie Groves: I think Frankenpolicy is my takeaway word from this podcast. I love that. So, Joe, let me ask you, what should people who are listening or URMIA members be thinking about as far as the intersection of AI and students or employees with disabilities?

Joe Storch: Yeah, this is something we've thought about quite a bit. Andrea mentioned earlier that there is a reaction among some to say, well, let's go back to blue books, let's lock things down. And two things happen when you lock things down. One is you show students that you don't trust them, and some people may listen to that and say, well-earned distrust, and other people may say, let me pause as to whether we should distrust them. But the second thing is that we have folks with both apparent disabilities and hidden disabilities who have really benefited from some of these technological changes that we've had over the last half decade, the last decade. There are students who are able to participate in education and in student programming who would not have been able to participate were it not for some of these educational changes, and frankly, were it not for some of the additional flexibility that we've seen during COVID, where things that were unthinkable flexibility-wise became de rigueur flexibility-wise. So, to the extent that we have a reaction to generative AI which is to lock them down, lock them out, and make sure that we bring people in and everybody has to take tests on paper, you know, with pencil and in blue books, and we secure it,

we are going to be pushing some folks outside of education who could have achieved and succeeded really, really well with just a little bit of flexibility. Some of those folks will come forward to the appropriate office or the faculty member and say, I'd like to disclose a disability, I'd like to seek accommodations; we engage in the interactive process, we do all the things. And others won't; they would just exit. They don't want to talk about it, they don't have an official diagnosis, they can't afford some of the medical steps for an official diagnosis, or, for any other reason, including that it's just awkward for some people to talk about this with a faculty member or with the office. And so, we need to be thoughtful about how we keep these students in and succeeding, and make sure we don't lock things down so much that we push some folks outside of the gates.

Julie Groves: Yeah, I think that's very helpful. You know, URMIA has five strategic goals, and our fifth strategic goal is to connect URMIA with the future to ensure sustainability for the association. And so, as we navigate through all this new technology and work to provide our members with information about AI, do you all have any resources you could recommend that we share with our members? We could potentially link some of these in our show notes. Or, Jack, are there some good sites or good articles that people could read about this? If someone wanted to dip their toe in the AI pool, is there a website that's kind of tame that they could visit to see how it works? You know, because I suspect we have a lot of people out there who, to Andrea's earlier point, have used AI although they say they haven't. So is there some way we could direct our members?

Jack Bernard: Yeah, I don't know if there's one-stop shopping out there. I mean, maybe Andrea or Joe know of really good resources, but I find the resources are changing a lot as we come to understand how these technologies work. And I do think it's going to take a little while before there is a convenient single location to go to. Things are definitely changing quickly. What I would say, though, along the lines of embracing the future, is: just because it's new, don't be afraid of it. Take small bites, introduce yourself to the subject, play around with the technologies, get a sense of the ambit of its responses when you pose questions, enjoy it, frolic in it a little bit. Just imagine being a child in a pool and have a good time in there. Get a sense of what's possible. And that will give URMIA members a chance to think about where institutions might have opportunities to mitigate risk. And as URMIA members know, there's always a balance. You can't mitigate risk by locking things down so much that people can't participate equitably and aren't able to engage in the pragmatic experiences of being at institutions. So there's always a balance, and I think that's how it's going to be in the context of AI and generative AI. Coming to know it, not making it a distant other thing, but trying to experience it yourself, is a great strategy.

Andrea Stagg: We had a lot of trouble making a list of links to articles for a talk we did a few months ago about generative AI, because every day there was a new article. And it didn't make the older articles, you know, less helpful; it's just that there was always something. So, we ended up with pages of links. And so, the best resource is the newspaper: every day, every weekend, you will see in the newspaper a great article about AI. I saw a great one a few weeks ago about a company that is terrified of AI. They're an AI company, and they're the ones who are the most doomsday. And so, they're using that fear, and their knowledge, because they understand it so well, to try to create boundaries and parameters that are enforceable, and basically training AI to train other AI to be better, right? To sort of avoid some of these doomsday scenarios and these big risks. So, every day there's something new, and I think reading the newspaper is really the best way to stay current. And the journalists are doing a great job, I think, talking to all different experts, comparing the tools, and, you know, talking about the fears that even the people who create these tools and work on them are having, and even some regrets that some of the people who've created these tools are having. So, it's really interesting to see, and every single week there will be a good article.

Joe Storch: And for me, the best resource is just jumping in. I would go to ChatGPT or one of the other ones, although ChatGPT is sort of the McDonald's of generative AI at the moment. And I would start with things that are whimsical. Andrea writes nursery rhymes for her kids. I've used it to do other sort of whimsical, funny things, you know: explain enterprise risk management to the tune of "Complicated" by Avril Lavigne, right? These are things that are not helpful, not harmful, a little bit whimsical, and they get you to see how the technology works. It is great at writing poetry, writing stories. Jack uses it to write new additional chapters of books in certain styles. And I think there is no better way to learn it than jumping in there with low-risk, low-reward, whimsical things. That's the best way to learn and understand this. It is critical, I would say, for risk managers to really understand this, because there was a lot of talk about it last academic year, there's going to be a lot of talk about it this year, and there are going to be some faculty who are saying, how do we shut this down? And there are gonna be others who say, what are the risks? And there are going to be questions from the cabinet. And so, I would say risk managers who can bring a really thoughtful approach to it, having built an understanding from firsthand experience with the tools themselves, will be really well served in having these conversations, which is why I'm glad we had this podcast.

Julie Groves: Well, before we started recording today, I asked you all if we would be hopeful at the end of this podcast, or if we would be worried at the end of the podcast. I do feel that we are hopeful. I think you've done a great job, you know, kind of helping us understand what some of the pitfalls may be, but also that there is going to be a lot of benefit to using AI. And I guess it's just like anything, you know: used well, there are benefits. So, before we wrap up, do you all have any final thoughts to add on the topic? I mean, we could talk about this for probably two or three more hours, but any quick thoughts to add while we wrap up?

Jack Bernard: OK, I'll never refuse an opportunity to say more. I think as you read a lot of articles, you will get a breadth of opinions about whether AI is helpful, how it's helpful, how we can use it, and what you should be concerned about. And it's good to be a broad consumer. Don't just believe the first thing you read; explore lots of opportunities to think deeply about this subject. I think, like lots of other tools, these are going to be tools that many of us use on a regular basis throughout the rest of our lives, and thinking about how best to use that tool will require some investment on our part to learn about it. I have never used generative AI as part of my professional work; it's something I have to understand, but it's not something I've used to do any writing. But there will come a day, I'm imagining, when I need to produce something, maybe an image for a slide or something like that, and I will go to it. So far, I haven't done it. I guess I'm old-fashioned in my proclivities, but I know the day will come, and I will note it. I will reach out to Joe and Andrea and tell them today was the day that I succumbed and started, you know, using the indoor plumbing rather than going out to the outhouse. And I expect that day will come sooner than later.

Julie Groves: I think that day will probably come for all of us sooner than later. So, well, thank you all so much for being on the podcast. Like I said, we could keep talking about this, and as this subject continues to evolve, we may have you back for another discussion, but this has been very, very helpful. So, thank you again for being here today, and this wraps another edition of URMIA Matters.

Narrator: You've been listening to URMIA Matters. You can find more information about URMIA at www.urmia.org. For more information about this episode, check out the show notes, available for URMIA members in the URMIA Network library.