T-Squared: a Teaching and Technology Podcast

Season 3, Episode 2: Critical Ethics and AI, with Eric Covey

Join the conversation!

This episode transcript has been enhanced with social annotation tools from Hypothesis. By creating a free account you can highlight text in this transcript and add your own comments or questions. Other users will be able to see your annotations and respond in kind.

For more information, please visit the Hypothesis Quick Start guide.

Episode Transcript

Matthew Roberts

Hi, Jacob.

Jacob Fortman

Hey. Nice to be here.

Matthew Roberts

Yeah, good to be back. So, you at home don't know this, but we've had a little bit of a lapse since the last time we got together. There's this little thing called life that, you know, makes things busy. But we are here, and I'm happy to announce that we have a guest today. I'd like to introduce Dr. Eric Covey from Grand Valley State University's Department of History. I could, you know, rattle on about stuff, but, Eric, how about you just say a little bit about your background and then we can start talking about the subject du jour? I'm interested in you as a person, too.

Eric Covey

Oh, okay. Well, hello, Matt and Jacob. Thank you for inviting me to the podcast today. I'm really excited to be here, because I know there have already been, you know, a lot of discussions around generative AI out on campus that the three of us have been present at. So, to introduce myself.

I'm a visiting faculty member, as you said, in the Department of History. This is my third year at Grand Valley. Before this, I was a visiting faculty member at Texas A&M in Corpus Christi, where I taught in the humanities. Before that, I spent a year as a Fulbright scholar in Nigeria, teaching at a federal university there in the capital, Abuja, and doing research for a new book project that's currently in the works. Before that, I taught at Miami University in Ohio in Global and Intercultural Studies. And before that, I was a graduate student at the University of Texas at Austin in American Studies, although now most of my work is in kind of global histories of the present.

Matthew Roberts

Excellent. Yeah. Eric and I first really started talking about this subject when we were both at the Lilly Conference on College and University Teaching up in Traverse City back in October of 2023. Traverse City is always a lovely place to be in the fall. We sort of showed up at the same sessions related to artificial intelligence and started talking about this, and I said, "Hey, let's do this little podcast thing." And on that list of academic background, I didn't see artificial intelligence. So could you say a little bit, Eric, just about how you started getting interested in the topic of AI and generative AI?

Eric Covey

Yeah, I kind of feel like an unusual choice for the podcast, since I don't really have any kind of background even in the history of technology. Although, back in the late nineties, I spent four years working at a company called EarthLink, an Internet service provider, doing technical support, you know, for PCs and Macintoshes, back in the dark days of dial-up. So I think I probably always had an inherent kind of interest in technology and, you know, maybe like some of the stuff we talk about today. Kind of an interesting ending to that career was that my department's, and in fact almost the entire company's, support services were outsourced to Hyderabad in India. So I hope we get to talk a little bit about outsourcing in general today.

But in terms of why I'm interested in generative AI, I think I'm just kind of interested in technology in the classroom in general. Here at GVSU, in my world history classes, students use the university's ArcGIS server to create story maps. So I'm always thinking about technology in the classroom. And, you know, then I couldn't help but notice when ChatGPT 3.5 burst onto the scene a little over a year ago. And I think it's because of my teaching style, you know; I'm someone who uses ungrading in the classroom, and I don't use that kind of traditional lecture model.

So right away I had a kind of reaction against the immediacy with which perhaps many other faculty members responded to this as just a kind of plagiarism tool. And in fact, in terms of what I do in the classroom with teaching students about generative AI, the history of AI, and the environmental and labor consequences for the global South, my initial kind of impetus for that is that I walked into a classroom back in Mackinac Hall one day in, oh, the beginning of January last year, and whoever was teaching in the classroom before me had scrawled in large red letters on the whiteboard, "ChatGPT equals automatic F." And I knew at that point that I had to, you know, introduce students to this and talk about it, I think, in a more critical way.

And, you know, I don't think there are any required qualifications to be a guest on our podcast. We're small enough that we welcome everyone. But I think the perspective that Jacob and I have been taking is really that what we need around AI is a broad conversation, one that hears from all sorts of voices.

Matthew Roberts

So the thing that I noticed immediately when we were talking previously was that you were more than willing to take up the question of the ethical issues that are involved. So many of the news stories, and this is still true today, even a year after ChatGPT sort of became public knowledge, are about how your life will be better, how this company is going to make so much more profit because of this. I think it's important that we bring into the conversation the fact that there are real questions that we should be asking, and there are real consequences of the choices that we're making about how to adopt generative AI in particular, but, I mean, any technology in general.

Eric Covey

Yeah. And I think, you know, immediately my kind of, you know, desire to introduce and talk about ChatGPT and AI large language models in the classroom went beyond just the ethics of using it, the idea that it's a plagiarism tool. I was, you know, more interested in what was seen at the time, and maybe still is, although less so, as this sudden technological revolution, right? And, you know, so I wanted to talk with students about all of those kind of different facets of this technology.

And I was also prompted... So in the winter, you know, January of 2023, my initial kind of workshop with students, just in my 100-level classes, was to have them read short articles that had come out at that time about ChatGPT. I broke them into four groups, and I think I actually gave them maybe ten articles from the past few months to select from, and we talked about it a little bit in the classroom. And I subsequently learned from them that really no other professors were talking about this in the classroom, beyond saying don't use it, don't use it.

And then at the beginning of fall semester, you know, just four or five, six months ago, I turned this into a full kind of week of workshops. Or, it was an abbreviated week, you know, we had a holiday, so I turned it into two full days of workshops. And I asked students, you know, what have your other professors been telling you about it? What do you know about it? And they knew nothing. I mean, headlines, essentially. Right? And of course, some of them had used it and played with it, you know. But I thought it was important to really kind of talk about it. And, you know, at that point, by fall, perhaps the kind of pervasive attitude was no longer that it's a plagiarism machine. But professors were saying, just don't use it, it won't work. And that was it. That was the extent of what they were learning about this technology in the educational setting.

So then I kind of came up with, you know, a version of the workshop that's pretty close to what I'm doing now, with a few articles changed around, that focused on the environmental ethics of this, right, the kind of carbon output and water consumption. And then also the labor, you know, especially the kind of labor from the global South that was being used to scrub these large language models of offensive content. And then also some of the kind of transparency and business issues around companies like OpenAI, but AI models in general.

Jacob Fortman

So, Eric, I really do want to dive in, perhaps later, more into, you know, the ethical stances you're taking in regards to the environmental impact of this technology and, you know, the impact it's having on outsourcing labor. But before I dive into those more fine-tuned questions about, you know, how you're thinking about all that and the relationship that higher education has to all those social problems,

I'm curious. You know, I'm really excited by the fact that you're not taking a purely utilitarian stance with AI. You're not just treating this with your students as a tool, like, here's how you use it or don't use it, here's how to use it and not cheat. You're really diving into the more AI-literacy aspect. And so I'm curious, how have your students received this? Has there been a real thirst for it? Have they received it positively?

Eric Covey

I think so, yes. So actually, this semester I'm teaching the workshop next week, so I'm curious how students will react this semester. But last semester, for the most part, they were excited to learn anything about it. They didn't know anything about it. I have them write several reflections, you know, one at the beginning of the week and then one at the end of the week: what do they know to begin with about "AI," in quotation marks, in general, large language models, ChatGPT? And then what did they learn over the course of the week? And their reaction was largely positive. They're happy just to kind of learn about this thing, because it's not being talked about, but it's obviously out there. You know, I don't know. Perhaps some of them also have anxieties about how this will affect their future career opportunities.

But for me, one of the things that I see most, and perhaps also, you know, what prompted me to focus especially on the environmental stuff, was that I think there's probably agreement among many college instructors that this generation of students, at least in terms of traditional students, you know, 18 to 22, is very concerned about their future prospects in terms of the environment. Right? You know, more than other generations, they're aware of and feel anxiety about global warming, about its effects on the future in general, but certainly on their lives.

So a lot of them were shocked and horrified, but of course, you know, thankful to be thinking about these technologies and how they affect the environment. So I think the response has largely been positive.

Jacob Fortman

Yeah, that's really interesting, because it makes me think about how, you know, climate change and the struggles that we're having with environmental policies right now are at the top of everybody's minds, and particularly at the top of mind for younger college students. And bringing this to the forefront in relation to AI is, I think, a really good way of connecting with students and making the curriculum feel relevant to them. I'm also curious about the specific handout material that you shared with us beforehand. It was called the Techno-Skeptic handout.

I think it's a great resource, with a few bullet points sharing ways of critically examining the technology. I'm curious, you know, how did students react to this handout? Why did you choose to adopt it?

Eric Covey

I think it's from Neil Patterson. Is that who it's adapted from?

Matthew Roberts

So I'll make sure that this is linked in the show notes. But Neil Postman is the one who's given credit at the bottom.

Eric Covey

You know, one of the things is that I didn't want to focus solely on just ChatGPT. Again, it starts with ChatGPT, but then working kind of backward, you're thinking about large language models, about kind of AI in general, both generative, you know, and kind of general intelligence. But then this is all also about technology, you know. And I don't want to come to the classroom and give students, you know, my perspective on technology, or a perspective on technology that says it's inherently bad, destructive, or damaging.

But I want them to think about, you know, being skeptical and relating critically to technologies in general. You know, this runs from ChatGPT, for instance, to something like DALL-E or whatever, but then also, you know, to kind of Blackboard, to, you know, the technologies on their phone. And I mean, I think, you know, students' response to the lesson plans, to the kind of workshops that week, and then perhaps even their kind of response to that handout on techno-skepticism, is also conditioned by the fact that so many of them think of technology in some ways as being destructive in their own lives.

Right. You know, I have students constantly who are telling me, you know, it's a problem that I'm in bed every night and I can't fall asleep and I'm on my phone. They're aware of this, and, you know, the kind of techno-skepticism handout deals with some of that stuff. Like, it's not the technology itself that's the problem. It's our kind of response to it, how we relate to it. There are ways for us to kind of mediate that, just like these large language models, their kind of labor and environmental impacts, can be mediated in ways. There are ways that we can, you know, actively, proactively respond to technologies rather than being reactive or simply being, you know, I don't want to use the word enslaved by, right, but essentially just kind of beholden to the technology companies that are giving us the technologies, and we're just stuck with it.

Matthew Roberts

So it's interesting that you mentioned students and their willingness to share that they do find they're bound to their technology in a way. One of the things that I do in my teaching, much to some of my students' consternation, is try to get them thinking about learning itself. We do lots of readings and some assignments related to just metacognition and what a lot of the research in learning science has shown. So we talk about things like sleep and exercise, and, you know, you get the response where students will admit, well, yeah, I don't get enough sleep, and I know that I probably shouldn't be, you know, surfing Instagram right before bed or whatever. And the fact is, yeah, the question is what then makes the jump from recognizing that to actually acting on it?

Because I'll admit personally, I've known about the importance of sleep, at least from a learning science perspective, for more than ten years, and I still don't get near enough sleep. And part of that is by conscious... conscious not-choice, shall we say? I still stay up later than I should.

Eric Covey

Yeah, I mean, this is something that psychologists and others have been interested in for so long: how do we change behavior? You know, and as individuals, how do we change our own behavior? So, you know, I mentioned that my courses are ungraded. And one of the things that I do as part of that, at the beginning of the semester (you know, we have things we're going to do, work that the students have to complete in the course), is have them set learning objectives for themselves, and I give them a large sample of previous learning objectives from students. And among the learning objectives, things I talk about in the class, are: one of your learning objectives might be getting enough sleep, or eating healthy this semester, or getting involved with kind of social events on campus. And, you know, to them, what they would think of traditionally as learning objectives would be, you know, getting an A in the course or doing well on a quiz. But I tell them, and I think, like you said, being meta about teaching and learning is really important in the classroom.

So, you know, with these learning objectives, I say, you know, what's the kind of behavior that you want to change? You know, do you want to get better at note-taking? Do you want to sleep at night? Do you want to eat kind of healthier? And why do you want to do that? You know, it's not just like, well, I shouldn't be on my phone at 3 a.m. Well, why shouldn't you be? Can you articulate that? And then finally, the most important part, and I think the thing that's most challenging for so many of them, is the actual behavior change. How do you do that? You know, and for students who've had trouble getting work done or kind of organizing their lives, you know, I often say you should have a planner. Maybe it's just your kind of Google calendar. I tell them Blackboard will integrate into your Google calendar or your iCalendar. You know, maybe you need a paper calendar. I'm always shocked by the number of people, you know, especially people who are, you know, 30 years younger than me, who get that kind of hand-held, you know, paper calendar, that physical thing, and they love it. It really actually helps them with life.

Now, it's harder to get off your phone at night. But I think one of the steps for a lot of students is, like, in my world history class, they have to read a book, and we have, you know, handouts on how hard reading is. The Reading Center comes over to talk about hacking reading. And one of the kind of best things to make you a better reader is to turn your phone upside down, or to hide it, and not look at your phone. And I tell them my own experience: you know, if I'm reading, if I'm giving feedback on student papers, it isn't just that I lose 10 seconds when I flip the phone over; it significantly slows me down for a long time after that. There's a lot of research on this, right? You know, I think being conscious of it and making some kind of plan to avoid that, the detrimental effects of technology, is a good first step, at least.

Jacob Fortman

You know, Matt, when you mentioned the need for sleep, and that it's hard to change behavior even when we know things like eating and sleep, or things we know about learning, are good for us, it's so hard to change that behavior. It reminded me of the fact that there are so many traditions and histories and so much material infrastructure within higher education dedicated to bad pedagogy and bad learning, and that can be difficult to fight against, to go against the grain in that way. So for instance, when I was an undergrad, my undergrad sponsored Monster Energy to come in for exam week, and they would do exam crams: Monster Energy would hand out free Monster Energy drinks starting at 9 p.m. for everybody the night before their exams. And it's just mind-blowing to me how bad of an idea that was for students and their learning. But, you know, it was a fun event. People came out and had, you know, Monster Energy drinks and hung out and chatted and... So on the surface, it may have been a great event for some people, but for pedagogy, and for helping us understand how and why we learn, it's just so against what the research says we should be doing.

Matthew Roberts

I think that example is proof of, or a reminder of, two things. One, academic institutions frequently have goals that are not necessarily the same as promoting learning. And the second thing, and I've thought this for a long time: based upon what we know now about how the brain operates and how learning happens, it's amazing we accomplish anything at all in education, doing so many things the opposite of how they're supposed to be done. Which really is to say, I mean, think about how much better we could be if we were actually doing things the right way.

I was driving into campus earlier today, and I was listening to the radio, and on the One Day program, this was a repeat from the past, they were interviewing an author whose name I do not remember at the moment, so I will not butcher it, though I will link it in the show notes. They were interviewing this author and coauthor on a book they've written about self-care, and there is a connection here. First of all, I realized, just listening to the conversation, that I should probably read the book and do a better job of self-care myself. I say that knowing full well how important self-care is, but not being prone to some of those harder choices.

But the point that I want to pull out of that is that one of the things the author was talking about, in the context of how you get people to really engage in authentic as opposed to inauthentic self-care, is you get them to think about their values. And part of what they were emphasizing was that coming to a position of true self-care is not about, you know, the little acts, like the lotion you use before bedtime that you consider a luxury, or, you know, a day away at the spa or something. It's about the changes to the decisions you make throughout your day. And the way to really do that is by encouraging people to think about their values, which a lot of people might not have thought about very much, and then encouraging them to make decisions and conscious choices that flow out of those values.

So I guess this is something that we can, you know, bring back into the classroom, I suppose. Because, you know, it sounds all touchy-feely to say that college is something about self-discovery, yadda yadda, but the fact is there is a truth to it, right? Because as students learn more about who they are as individuals, they're in a unique position to start making choices that are beneficial to them, their health, their learning process, maybe to the broader world, based upon what they consider to be important. And, you know, I do a similar assignment to what you were talking about here with my students, thinking about objectives and goals and success.

But, you know, maybe I need to start asking them, you know, what are your core values? What's important to you? One of the things that I started asking last semester, apart from what grade do you want to get in the class, is: how are you going to know you've been successful? And I tried to explain that being successful doesn't necessarily mean getting the A, right? The easiest example to throw out is, you know, well, for some of you, you don't want to be in this class. For some of you, you're worried you're going to be bored stiff or something, maybe. And you can tell me this. And I have had students be honest and say, like, you know, I just want to not die of boredom, right? Or, given everything that is on my plate this semester, I want to just survive.

And then what I did at the end of last semester, when I met with them over Zoom, I met each one and I'm like, well, you know, I asked you, how would you know that you've been successful? So look back now and tell me, were you successful? Because I figured out in my own mind recently that as academics, you know, we've generally benefited from the instructional system that we came through. Some of us are trying not to propagate the worst parts of it, but what we consider to be a success is not always what our students are really focusing on. So, hey, why not listen to that?

Eric Covey

And I think that's, you know... I said that it's challenging for them to come up with a plan. You know, one of the things the prompt for the learning objectives also does is ask them to develop, you know, objectives that are concrete and measurable. And they definitely, and, you know, even I, have problems with thinking about how you measure success in something without a grade, especially. Right? But how do you measure success in getting better at taking notes or, you know, participating in class? You know, maybe for that one, for some students, it's: I'm going to raise my hand and talk at least once per session. But other things, I think, are more difficult for them, I mean, for people in general, but certainly for them, to kind of measure.

Jacob Fortman

I really love the kind of meta dialog on learning, and how do we support learning and thinking about learning while we're in the college classroom. And I think that applies perfectly well to technology and artificial intelligence as well, because it allows us to think about, well, you know, how is the design of this AI chatbot, or how is the design of this LMS, facilitating my learning, knowing what I know about my goals and knowing what I know about the science of learning? To what extent does the design of this technology hinder or facilitate what I want to get out of this classroom?

I think you could also look at the way that technology is marketed. I see a lot of narratives in ed-tech companies' marketing about, you know, highly efficient learning processes and optimizing learning processes, as if the learning process is this linear thing that can always be made more efficient, increasingly, you know, fine-tuned and optimized.

Anyway, I love the dialog on this kind of meta perspective on learning and how we facilitate this kind of understanding in the classroom. I think that one of the tricky situations you find yourself in when you start to do this, particularly with AI and technology, is that on one hand, you know, educators and higher education institutions want to prepare students for the future. They want to prepare people to be competent with future-oriented tools and skills. And so there is a big push among admin to adopt AI so that we're better preparing our students. But at the same time, when we take this critical stance, we see a lot of detrimental effects of these AI tools. As you've acknowledged, it has issues with environmental impact because of the energy-intensive resources it's drawing on. It's outsourcing labor to the global South. And so there are a lot of ethical and social issues that come with adopting these technologies.

From your perspective, Eric, what's the best way to hold both of these in the same hand? How do we best navigate this situation where we want to prepare students to use AI and we also want to prepare them to be critically engaged with it?

Eric Covey

Yeah, absolutely. That's a good question, and I think a challenging one to answer in some ways. So maybe there are two things at work right away. First of all, I think students have to be able to identify their values vis-a-vis the university's values. Right? So the university, you know, for at least 50 years now (and this kind of trend begins in West Michigan in some ways), the university often presents itself as a gateway to job opportunities, and that was not always the case for universities. So I think students should be aware of that, even if they've come here in pursuit of a credential, a degree that they believe will give them, you know, increased opportunities, a better life. I think, yes, of course, college still pays, right?

But at the same time, you know, I always want to encourage students to think about the university as more than that. Right? As a kind of space to continue developing as an individual, as a space to actually learn things. And part of that, as you're both obviously getting at here, is the idea of figuring out who you are as an individual. What are your individual values? How are they different from the university's values? And certainly, you know, today, for instance, there will be large protests on campus that will demonstrate that there are a lot of students here who call into question the university's values around certain social issues.

So then the second thing, beyond the kind of, you know, tensions between individual values and the university's values, is the relationship, I think the larger relationship, between, you know, the individual, the student here at the university, and people in the world at large. Right? So they're certainly concerned about kind of environmental ethics. Some of the stuff that I have them reading this semester, so, they have four articles to read for next Wednesday in class, and that's split among four groups. Several of the articles are focused on environmental ethics, and one of those hits at differential access to water in different parts of the world, water consumption. You know, why are some people paying so much for water? Here in the U.S., we tend to think of water as, you know, fairly cheap or easily accessible, to some degree a public commodity. But it's privatized in a lot of other places, and that's related to these large language models.

And then I have, you know, two other kind of readings beyond the two on the environment. I have one that's focused on Kenyan, Ugandan, and Indian laborers, but mostly on Kenyan laborers, who were being paid, you know, $2 an hour to scrub really objectionable, violent, sexually violent content from these AI models. So some students should be thinking about the relationship with people who are doing these jobs elsewhere. And then the final kind of piece is, you know, essentially about larger transparency and who's creating these business models. Again, is AI a public good, a private good, kind of semipublic? Is open-source AI actually a public good? And also the kind of relationship of these students, these individuals, to policymakers, to corporate executives, to investors, and so forth.

Jacob Fortman

Yeah, you know, I just think that within this conversation of AI and how we adopt it, I mean, we see so much about, like, what are we here in higher education to do? I mean, if we're purely here for job-centered training, you know, then I think it's difficult to have these types of critical conversations we were talking about: environmental ethics, or the outsourcing of labor, or what it means to have a public versus private commodity and open AI, and what that means. A lot of these conversations, I feel, get devalorized within the neoliberal institution, where we are only focused on preparing people to be competent Python programmers or data scientists.

And so I really appreciate that perspective where, you know, you can continue to facilitate these conversations. And as you said earlier, I mean, students are thirsty for this and they appreciate it. And, you know, our environmental crisis is top of mind for a lot of people. And so it's a very humanizing experience to hear about ChatGPT and the way that it's contributing to the environmental crisis, and the ways that we can kind of help to prevent that to some extent.

Matthew Roberts

We've been talking around some of these ethical issues a little bit.

I want to come back, just for the listener at home in Podlandia who hasn't read about this as much as others, and help them understand some of the basics. But I want to bounce off of this point about job training here for a moment, because one of the things that is so unique to me about this generative AI moment is the way in which, inside of the academy, so many people have so quickly leapt. And I suppose there are two versions of how we could label this position; maybe they're separate, but I think they end up being the same one. The pessimistic take on it is: we need to give up, there's no fighting it. And you can see that message everywhere. And, I mean, sometimes it's labeled slightly less pessimistically, you know, as in, it's not going away.

There's a more constructive, or perhaps a more self-deceptive, version of that argument, which is: we need to train our students to be able to use it. I think that's, you know, the direction that you were talking about a moment ago, Jacob. And it gets into the deep questions about what the goals of our classrooms, what the goals of the learning process, are.

But what is so unique to me is how quickly that perspective has spread. I don't know if we should say it's dominant, but it's so common in education, more so than with any other technology I can think of. Right? I mean, there are still lots of people trying to bring augmented reality in to change education. It hasn't happened; not everybody is jumping on board. There are so many people who have been trying to move us towards a system focused more on micro-credentials, which is not just a technological issue, but it's still not taking over. Dare we remember, ten-plus years ago, MOOCs were going to change everything? And guess what? No, not really.

We're still largely where we were. Obviously, I mean, and I think we're all of the right age to think about this, the next most relevant thing is the whole rise of the Internet in general, and then things like Google Search or Wikipedia as sort of the after-shadow of that: the question of how was that going to change what educational practice is like. And I'm wondering why the response that we're seeing now is not as it was before. I mean, obviously, we're all Googling, we're all using electronic resources. In some ways, we could say it has been victorious, right? We're not looking in card catalogs anymore, or that sort of thing. But why is there a change now?

And I don't know. Here are a couple of possibilities; I don't know what you think. One is: education, both as an institution and as individuals. Have many of us just stopped and said, we fought too much before, we need to be more proactive now? Is it a greater emphasis on the role of job preparation than there was, you know, 20 or 25 years ago? Is it the fact that many of us who are in the academy now, kind of as that middle generation, were ones who were there at, you know, the previous turn with the rise of the Internet, and we're just responding differently? I don't know. I'm curious, because, you know, there are other times where, in theory, lots of people could make the argument that this X (oh, I shouldn't use X, because that actually means something in the world of technology now), that this thing is going to be so influential, everybody needs to know about it for their job. But it hasn't taken over, right?

Why is there suddenly... you know, I can imagine some disciplines where it makes perfect sense for this conversation to happen. Right? Like, you know, in computer science, I mean, it makes perfect sense. But even across the academy, people are talking about: we don't necessarily know what job you're going to have, but darn it, AI is going to be part of your life, so we better train you now. What's making the difference? What is it about this moment?

Eric Covey

I think it's a combination of all those things you listed. Right. And, you know, like Jacob mentioned a few minutes ago, the neoliberal university, right? I mean, the neoliberal university has come into being over 50 years. And, you know, I can't speak for people in the humanities or social sciences at large, but I think there's a survival instinct that's kicked in for a lot of people. Right? With the declining numbers of tenure-track faculty, with the elimination of entire humanities departments, majors, disciplines, the idea that there might be this kind of world-transforming technology that you are kind of hands-off with, skeptical of, or refuse to adopt might further, you know, put a kind of target on you. Right? You may look more like a dinosaur, and the university might make you go extinct. So perhaps that, and all those kind of factors you listed, I think, are coming together.

And then, I mean, we've talked about the kind of environmental crisis, right, that students are so responsive to. But those students are also aware, as we all are, of a larger kind of moment of social and political crisis that we seem to be living through, managing through, here in the United States right now. Right.

Jacob Fortman

Yeah. Eric, I really like that point. You know, a condition of the neoliberal university is the increasing precariousness that we're all faced with, right? I mean, the increasing number of adjuncts and lecturers on temporary contracts, you know, the outsourcing of teaching to MOOCs or what have you. We're finding ourselves in situations where, if we don't go with the flow and if we don't adopt and conform to what's being sold to us, there is a higher risk that, yeah, I mean, you could be perceived, as you said, as a dinosaur, or as not staying up to date with the latest technologies and trends. And that puts us into situations where, yeah, there's a stronger incentive in the neoliberal university to adapt to what we see as the hottest new thing, so we can have at least the aura or the appearance of being innovative. Whether or not it's actually innovative, perhaps, you know, remains to be seen.

Eric Covey

I mean, those same kind of forces are also driving the university at large to do certain things, to adopt those technologies. Right? The kind of demographic cliff in terms of enrollment that we're facing, declining college enrollments in general, in the midst of political and social crises. Right? So the university is doing things like adopting AI technology as the cutting edge of education, right, that kind of teleological line that, well, students can just get these things with technology. But then also, you know, thinking about the kind of environmental aspect, universities are also attempting to present themselves as carbon-neutral or environmentally friendly in lots of ways. So this is another kind of selling point for the neoliberal university: we have this kind of environmental awareness that, you know, prospective students might be interested in, and that might cause them to choose us rather than another university that's kind of struggling to survive and maintain its enrollments. So those forces are acting on, you know, us as individuals in the university, and then also the university at large, in that kind of ecosystem it exists in.

Matthew Roberts

And I hate to say it, but this moment definitely shows that those of us who are thrown in together in how we respond to the situation sometimes have interests that are going to be working at cross-purposes. So back in October... hmm, was it October? Was it November? I forget. Maybe it was November. I gave a talk on campus with the College of Education and Community Innovation, their BigByte series, talking about some issues related to AI. And I tried to sort of, you know, think specifically about ones related to the academy itself. And I guess, without really intending to, to start with, I set up a certain kind of very skeptical, hesitant kind of tone about, as educators, adopting things like generative AI,

and, to be honest, about many of the ways in which these products are being sold to us as educators. The essence of the argument I was trying to make was that this is such a double-edged sword. The institutions we are part of right now have been subject to all the pressures that we just talked about, pressures that have shown up in things like the loss of academic programs and departments, and the loss of full-time faculty in favor of contingent faculty, which offers all sorts of benefits to the institution as an entity.

And so the more that we as individual educators adopt and accept that generative artificial intelligence can do much of what we have traditionally seen as part of our purview, part of the job of being an educator, the riskier it is. Because from the outside, then, if so much of that can be done by AI, why are we expecting it of faculty in general? I mean, why is there not a business model out there for an institution that boils down the things a human faculty member is actually needed for to the smallest set possible, and outsources the rest, in a sense, to generative AI? I mean, we have already had, over the past ten years, companies that are willing to help institutions out by designing, and to some extent administering, online programs, to the extent that faculty are largely instructional proctors of a sort, responsible for overseeing that students do the work and then providing some sort of feedback and certification.

If AI can come up with a lesson plan, if AI can choose a nice, soothing, pleasing, human-sounding voice (and now, since we have generative AI that can generate video, even create a nice, pleasing video avatar to do the lectures), if AI can create the rubrics for the assignments, if AI can do so many of these things, how long till we see the business model where, you know, we have AIU? Which is, let's be honest now, not only an opportunity for profit for somebody, but it's going to be sold on positive terms. It's going to be sold as: this is so much less expensive than a traditional four-year residential liberal arts program. And, you know, I try not to be Chicken Little, but it just seems like some of these things are going to pop up; we're going to see them. It's such a double-edged sword for us as educators. How do we draw the line between admitting what we need to, I hate to say it, give in to, and where we should resist?

Eric Covey

I think in the last podcast, I don't recall which of you it was, but one of you said, well, AI doesn't have a CV, right? So AI doesn't... I'm skeptical of AI's ability to do well all of those kind of things you just talked about. I think that kind of technology, large language models, could probably replicate many things about bad and mediocre teaching, you know what I mean? Blackboard was here on campus recently, and they were, you know, kind of selling one of these upcoming features of Blackboard where, you know, if a student is performing poorly or is not spending enough time in the Blackboard class, it's going to do all these things to make you and the student aware of that. But I don't think Blackboard or ChatGPT or another large language model is going to hand that student, you know, a tissue in my office when they're crying about, you know, their father being locked up or not having enough money for food on campus. So I think, in addition to just good teaching in general outpacing technology and AI and large language models by leaps and bounds, there's also just a human component that these things appear to me to be far away from. You know what I mean?

One of the things that's come up, you know, certainly last summer and into the fall, is that most people believe, you know, thinking about ChatGPT, that these large language models are going to collapse, that the models aren't sustainable. You know, they've already shown kind of decreasing abilities to identify prime numbers, for instance. They don't even work as well as a calculator. So I have a hard time believing that they're going to get much better. They may be more integrated into everything we're already using, and maybe more invisible and seamless in our lives. But I don't know. It's just like the thing with MOOCs, right? That was surely going to replace good on-campus, you know, place-centered teaching. But it hasn't. You know, so many kind of technologies have fallen by the wayside.
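[A note for readers: one way to see the calculator contrast Eric draws here is that primality is a deterministic computation. An ordinary program returns the same correct answer on every run, while a language model only predicts plausible text and can drift on exactly this kind of question. The sketch below, in Python, is offered as an illustration and is not something shown in the episode.]

    def is_prime(n: int) -> bool:
        # Deterministic trial-division primality test: unlike an LLM's
        # token-by-token guess, this computes the same answer on every run.
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    print(is_prime(7919))  # True: 7919 is prime
    print(is_prime(7917))  # False: 7917 = 3 * 7 * 13 * 29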

Matthew Roberts

So, yeah, no, and I want to be clear, I want to be on the record: I don't think AIU will be a good place to learn. I don't. Two points coming out of that, though, and I wrote them down so I wouldn't forget them. One is, I think part of the problem here is that the AI moment we are in is replete with so much confusing terminology. And that's why in another episode we're going to talk about, you know, what AI is and how it's different from generative AI, because I think there's an important conversation that's not going on. It comes down to, you know, what level of technological literacy does the average person need to understand these things?

But, like, you mentioned Blackboard, you know. Is it AI to have simple analytics that show, you know, this student hasn't been in the class enough? In some ways, it might embody behind the scenes some principles of artificial intelligence or machine learning. But the fact is, that is so different as an application of the technology from the generative AI, in how that is being pitched. And I think I said it on the first episode this season: I'm not against AI. I think AI is an excellent concept. But the fact is, with most AI previously, the concept has been developing a model that's based upon specific existing knowledge, trying to capture what it is about an expert's knowledge in an area and distilling that in a way that's repeatable, in a sense. Which, ironically enough, is kind of what the learning process is about, in a way. But the fact is, you know, just because your computer system can say, well, if somebody doesn't show up to their classes and then gets, you know, D's on assignments, maybe you should be reaching out to them, that's a far different application of intelligence than the ways in which we're talking about generative artificial intelligence.
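[To make Matt's distinction concrete: the "simple analytics" described here amount to a plain conditional rule over activity data, with nothing generative about it. A minimal hypothetical sketch in Python follows; the field names and thresholds are invented for illustration and are not Blackboard's actual logic.]

    from dataclasses import dataclass

    @dataclass
    class StudentActivity:
        name: str
        logins_last_week: int
        avg_score: float  # average assignment score, 0-100

    def needs_outreach(s: StudentActivity, min_logins: int = 2,
                       min_score: float = 65.0) -> bool:
        # A fixed-threshold rule: no model, no text generation, just a check
        # that flags low engagement or low performance for human follow-up.
        return s.logins_last_week < min_logins or s.avg_score < min_score

    roster = [StudentActivity("A. Student", 0, 58.0),
              StudentActivity("B. Student", 5, 91.0)]
    for s in roster:
        if needs_outreach(s):
            print(f"Consider reaching out to {s.name}")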

But then I think another thing that's important here, and this comes to a point that we were all talking about before we started recording today: I think what this moment of artificial intelligence is showing is not so much just about technology, but about us as humans, and this concept of "good enough." Where are we willing to place that good-enough flag or pin on the list of all the things in our lives we have to worry about? Because, you know, and I know, and I'm going to presume, Jacob, you know this too, that, you know, probably AI is not the place we want to go learn, or where we want our loved ones to go learn. Right? Can you commit to that?

Jacob Fortman

No. Yeah. Yeah. I'll give you a thumbs up on that one. Yeah.

Matthew Roberts

But the issue is not that we know it. The issue is that it will be good enough for a lot of the population. It will be good enough for the student who doesn't realize why they need the human connection. It'll be good enough. And some of this comes back to the whole helping students learn more about learning, and, you know, getting them to act on it beyond that. Because, I mean, one of the things that comes out of research is that student... student... I'm lacking the term. Not retention. There's a... Persistence? Yeah, persistence. One of the things related to students' self-reported happiness about their education, and their persistence through their time in education, comes down to the extent that they create actual personal relationships with staff and faculty. AIU is not going to give that to you. But, I mean, that's also not knowledge that most people have.

So, I mean, I guess part of it, and this is kind of, I think, why we're doing this podcast, and part of what Jacob and I, if I can speak for Jacob here as well, are coming to believe, is that we need to be having this active conversation, because there are meaningful, meaningful consequences to the choices that we make here. And... yeah.

I don't know. AIU would be the next, you know, phase of the whole for-profit education institution, right? Which, I mean, we won't name the big ones, and we won't slander anyone here. But I mean, that will be the next stage, right? Because for-profit institutions were generally seen, are generally seen, askance by most in traditional education, but still they persist.

Jacob Fortman

Yeah. So on the topic of AIU and whether, you know, AIU will be "good enough" for a large population of people: I think most people would probably pretty quickly realize that it isn't good enough, for a lot of reasons that we've already discussed. You know, AIU is going to be outsourcing an extreme amount of labor to the global South and, you know, exploiting a lot of vulnerable populations by doing that. AIU is going to be detrimental to our environment. AIU is going to be dehumanizing, in that we're not going to have as many opportunities to connect with faculty and staff. And so for all those reasons, not only do I think an AIU is not good enough, I don't see how it would be profitable for anybody that wants to try it at this point in time. I do think that people will try it, probably in the next 5 to 10 years; I think there will be an attempt to make something like that happen. But I think most people, I hope, will determine that it doesn't meet the line of good enough for them.

Matthew Roberts

Okay. So first of all, I'd like to apologize for laying out the phrase AIU, because it's simply far too hard to say easily. I don't know if U of AI or something is easier. But I'm going to take the moment, and maybe we can use this as a segue, to play a game that I enjoy greatly, which is devil's advocate. Because, I don't know, I'm feeling pessimistic enough on this day that I'm not certain that AIU would falter as quickly as you think.

Now, we're not going to point out that, you know, we both come from U of M's, right? So, I mean, you matriculated at the University of Michigan; I come out of the University of Minnesota, at least for my Ph.D. Both fine institutions, right? So we hope that, you know, there would be a nice difference between those and AIU. But I still think that good-enough impulse is a very strong one. And sure enough, the first round might falter, right? But Milton Hershey went bankrupt how many times? Eventually he got the chocolate formula.

Right. But so, to segue here: to the extent that some of those problems that we've been talking about are things that could doom the success of AIU, are they all uniquely persistent challenges? So you mentioned two in particular, Jacob, which we've talked around here. One is the issue of the use of labor in the global South related to the way in which AI models are trained, although I think even that has been imprecise. It's not about the training of the models; it's about telling the models to shut up when they're saying things they shouldn't. And beyond that, the question of the environmental impact. So, I mean, I personally feel like there's a difference between those two kinds of issues. I don't, at least at this moment, see that the need for human labor is going away. And let's be clear, the number of aspects in which this labor outsourcing is problematic goes beyond just issues of standard of living and such.

This conversation shares most of the issues that come with social media moderation. There still is no good system that avoids the need for humans in identifying material that is truly objectionable, whether it's from the point of view of violence, sexual exploitation, you name it. And there are plenty of stories out there about how the social media industry has been dealing with that in terms of content moderation. At the moment, I don't see that there's an easy solution to that. But if I'm going to put on my hat as a "believer in the march of progress," I mean, I could claim that the environmental impacts of artificial intelligence are a temporary problem, right? To the extent that the data centers, and the computers in those data centers, that both develop the models and then serve the models to customers need access to electricity, well, we find ways to make them run cheaper, right?

The cell phone in my pocket uses a lot less energy than the one that would have been in my pocket ten years ago. We find cleaner sources of energy. I read in a story somewhere recently that Microsoft is thinking about getting into the nuclear energy business, or at least trying to encourage the development of more nuclear plants, so that they have better energy for their data centers. And to be clear, that's not a problem driven just by AI. I mean, I think Apple, when they built some of their first data centers in North Carolina, made a point of, you know, planting a nice solar farm nearby, because, well, you know, if we're going to need power, let's make it clean power. Fine. But that's a technical problem with a possible technical solution. If great amounts of water are required to cool those computers, well, what do we do? We find ways to make it more efficient. And there are stories out there about the development of systems that use less water.

Microsoft has experimented with taking data centers, basically putting them in containers, and dropping them in the ocean. Why? Because then you get the ambient cooling effect that comes from the seawater around them. I think there was a story about a data center in Finland where, to cool things, they pump in cold seawater and use that for cooling. I mean, I suppose we can even take steps beyond, you know, water itself. Those are technical problems with technical solutions.

Yeah, who knows? Maybe generative AI will come up with some of those answers for us. Right? Because the stories I've been seeing lately are about how great AI is at materials science. Right? There is some secret magical material that's going to make this all possible, like transparent aluminum... any fans of Star Trek...

Eric Covey

Excellent Star Trek reference. Developed in San Francisco, which is, incidentally, the place where Google wanted to use the San Francisco Bay to cool its data centers, and the NIMBY movement in San Francisco said, not in my backyard. Right? We're not going to let you warm the bay up. So they went to Finland. I'm surprised Finland let... or is it Microsoft? No, right, it's Google in Finland, right, who's, you know, using the kind of water there, which certainly has to be environmentally detrimental, I mean, you know, on the kind of scale they're doing this, and then sinking this into the water.

You know, I read, too, that Microsoft has something it calls adiabatic cooling. So it's, you know, basically using computer models to cool its facilities, which I think could save electricity, and which sounds a lot like what my Nest does; it doesn't sound kind of revolutionary. But these large language models, like generative AI in general, are often being presented as the solution to the problem itself, right? Like, well, generative AI will get so good, or perhaps even become a kind of general intelligence, and will be able to give us the solution to all the technological problems that we created by creating it.

And, you know, maybe. I mean, if we're thinking about crypto as one of the previous environmental technological disasters, you know, Ethereum, I think, changed how they verify transactions and saved something like 90% of the electricity. I don't know if Bitcoin's done that at this point, but Bitcoin, I guess, is, you know, still consuming tremendous amounts of electricity.

And the question, too, becomes, you know: there are environmental consequences, whether they're solved in the long term or not. Where are those environmental consequences most likely to take place? Right. They're obviously most likely to take place in places with weak enforcement mechanisms for environmental regulations. You know, there needs to be a kind of supranational thought given to this. One of the examples given in one of these articles, right, is that it's much cheaper for a data center to be cooled in, like, Microsoft's Washington data centers, for instance. Right? It's much, much cheaper, or more environmentally responsible, to cool those than if Microsoft builds data centers in Southeast Asia, for instance, which might offer tremendous financial incentives and, in fact, might even benefit the economies, you know, of Southeast Asian nations. But the environmental consequences are like five times worse. It takes, like, you know, all of this additional electricity and water to cool those facilities.

So, I mean, this comes back to the question of the environmental crisis in general. Is this something that can be solved? I don't know. It definitely can't be solved by universities. Could it be solved by individuals? No. It can't even be solved by nation-states. So, like, what's the larger movement it takes to, you know, incorporate technology in an environmentally and, you know, ethically responsible way?

Matthew Roberts

If you've been looking at your podcast player, dear listener, you've noticed that this episode has been jam-packed, and I guarantee you there is even more conversation to come. But in the interest of keeping you on the edge of your digital seats, we're going to pause for the time being, and we're going to bring Eric back for another episode to continue this conversation. So for the time being, thank you for listening. In your podcast app, be sure to hit subscribe, like, follow, all those things that people on social media say to do. We're going to sign off for the moment. So thank you, Eric. Thanks, Jacob. It was good to spend some time with both of you.

Eric Covey

Thank you, Jacob and Matt, of course, for having this conversation that's been such a long time coming. I'm really excited to have the opportunity to continue it.

Jacob Fortman

It's been a lot of fun.

Matthew Roberts

All right. See you.

Siri

T-Squared, a teaching and technology podcast, is produced at Grand Valley State University.

Our theme music is from Bill Ryan and the Grand Valley State University New Music Ensemble.

Do not allow children to play in the dishwasher.

This production has not been approved, endorsed or authorized by the Federal Bureau of Investigation.

Offer valid only at participating locations.
