T-Squared: a Teaching and Technology Podcast

Season 3, Episode 1: Welcome to the Season of AI

Join the conversation!

This episode transcript has been enhanced with social annotation tools from Hypothesis. By creating a free account you can highlight text in this transcript and add your own comments or questions. Other users will be able to see your annotations and respond in kind.

For more information, please visit the Hypothesis Quick Start guide.

Episode Transcript

Matthew Roberts

So, hey, Jacob, do you have a favorite reboot, like a TV series or a movie franchise that, you know, has come back from the dead and is seeing new life?

Jacob Fortman

Yeah, that's a good question. You know, I haven't gotten around to watching it yet, but I'm really excited by the Futurama reboot. I'm hearing good things about it. And I just really loved that show as a kid. So yeah, I would say Futurama for right now, but I'm going to hold my official opinion until I've had time to watch it myself.

Matthew Roberts

Reserve your judgment. Yeah. No, I don't know. We were just--dear listener at home--we were just talking about this beforehand. And you know, what actually counts as a reboot? Because there's a lot of things--my wife and I have been watching the original Night Court. It's not something that I watched at the time, she did, but we were prompted to do that because there's a new Night Court with some of the original cast, but not most of them. John Larroquette, I think, is the main returning person. But does that count as a reboot? I mean, Star Trek--I don't even know how many shows they have anymore. There's a million of them. But they did reboot the movies at one point with J.J. Abrams at the helm. I had something else in mind. What was I just thinking about? Oh, maybe one of the classic self-referential nods to a reboot has been with the Spider-Man movies, because the Spider-Man franchise has been restarted like three times by different directors and studios with different Spider-Men. And it was one of the recent Spider-Man movies--maybe the most recent live action one, I think--where they actually had all the different Spider-Men played by all the different actors, and it was kind of this moment where it was just like, "Yep, we know this is weird, but we're going to tie it all together through the concept of the multiverse." That's how you explain everything in this day and age. I don't think we necessarily need the multiverse to make sense of ourselves, but the reason we're talking about reboots is because I guess we want to welcome you to a reboot.

This is T-Squared, a teaching and technology podcast. And if you're a new listener, hey, that's great, because it's been a while. We started this podcast--and I was actually wrong when I was thinking about it. I was like, it's three years now. It was actually more than four years ago. And along with many things in life, the universe, and everything, the podcast kind of came to a halt when COVID happened. I think I put together a very quick, like, eight-minute episode that was just the end of it. Here at Grand Valley State University, where we record this podcast, lots of things have happened since then. Recently I sat down for a while and was thinking maybe it was time to come back, so we've decided to do it.

I'm Matt Roberts. I'm a senior instructional design specialist here at Grand Valley State University. I work in the E-learning Technologies Team, which is part of Information Technology, and I'm joined by Jacob Fortman. Jacob, introduce yourself.

Jacob Fortman

Yeah. Hi, I'm Jacob Fortman. I work on the Innovation Research team here at GV; my title is emerging technology research analyst. So I'm really thinking carefully about the research that goes into emerging technology and how it benefits or doesn't benefit learners, specifically here on the Grand Valley State University campus. My projects range from informal evaluation projects--seeing if technology works or not: Does it work as advertised? Does it make sense within our GV ecosystem?--to more formal research projects: mostly qualitative research, looking at learners' experiences learning with different types of technology and trying to understand why it works, who it works for, and when it works for them.

Matthew Roberts

Yeah, and this is just one of the examples of the changes that have happened in four years. Prior to the pandemic, there was no component like that here at Grand Valley. One of the things that has happened is we've actually added a lot of new staff, some different directors, and some reorganization in IT. And now we have this new unit focused on innovation. If you are one of the two people who've ever listened to this podcast before, you might remember that Eric Kunnen was at the other microphone here. Eric--not that we're purposefully cutting him out, but Eric is a busy guy. Eric is now actually--I forget what his exact title is--but let's just say he's the director of the whole team related to innovation and research. Kind of a dream of his, I think, to get that team off and running. And because of that, I am happy that Jacob is part of the institution. We were also, just before we started recording, talking about the recent Tech Week here in Grand Rapids. Can you say maybe a little bit about what that's about? Because you're in a much better position to explain it than I am.

Jacob Fortman

Yeah. So at Grand Valley, we were part of this Tech Week in Grand Rapids, and we had a few different roles to play. We highlighted a variety of different technology projects at our Innovation Design Center downtown. We got to see a bunch of different student, staff, and faculty projects where they're working with augmented reality apps, they're working with smartphone apps, they're working with 3D printing. So that was just kind of your smorgasbord of everything technology at Grand Valley. But then we also had a great event at the Blue Bridge, where we got to visualize the future of the Grand Rapids campus--the Grand Rapids downtown campus--using augmented and virtual reality. So we got to see what the future Eberhard building will look like using smartphone applications. And we got to see what the future soccer stadium will look like by wearing a virtual reality headset--it kind of flew you over to the soccer stadium to see what that new building will look like.

Matthew Roberts

We're getting a new soccer stadium?

Jacob Fortman

Yeah, apparently. Yeah.

Matthew Roberts

I knew nothing about that. Well, that's interesting. There's a lot going on here at Grand Valley and in Grand Rapids, Michigan, where we're located. But we're happy to welcome you into this conversation no matter where you're at.

In talking to Jacob about restarting the podcast, there was something on my mind that's been, I think, on the mind of a lot of people in technology as well as in education. And I can't really even say "lately," because at the point when we're recording, it's been the better part of ten months or so. I'm referring to the rise of ChatGPT. I'm almost at the point where you're seeing mentions of ChatGPT or other kinds of AI tools pop up everywhere. It seems like the thing you have to do right now is tick the checkboxes: you have to make sure that you ran whatever you're writing or doing through Grammarly, and then you have to make sure you have your obligatory references to artificial intelligence.

But despite that, I personally am really kind of interested in the topic. Jacob is too. And what we've sort of been talking about is, first of all, let's reboot this T-Squared podcast, and then maybe let's try to have a thematic focus. I know there are podcasts out there where every season kind of has a subject, right? True crime podcasts are like that, where they go through one murder or one whatever per season. I think we're hoping to avoid death and mayhem in this one. But there's a lot to be said about artificial intelligence. I'm just going to say, for my purposes--I think I've already said this to Jacob--a lot of the conversation about AI has started to get kind of repetitive from my perspective. And I suppose you could be cynical and say it's because most of the articles about this are being written by AI itself, so the same thing keeps coming out. But you can almost predict, when you start reading an article on AI, what it's going to say, right?

Jacob Fortman

Yeah, yeah. I do think the discourse is a bit repetitive and at times feels a little bit shallow.

Matthew Roberts

So here's what we're going to do--and this is the plan; we don't have it all planned out, we're going to let it evolve. We'd like to take this podcast over the next season and look at the question of artificial intelligence, especially in education, because this is T-Squared, teaching and technology. It's also because where we work, and where we have sort of planted ourselves life-wise, is in the academy. But we'd like to take the conversation maybe beyond the surface, right? So I don't think we're going to have an episode titled Ten Ways to Have Your Students Use ChatGPT in the Classroom. I'm not interested in that, are you?

Jacob Fortman

I hope we don't do something like that. No, no.

Matthew Roberts

At the same time, I don't want an episode that would come down to ten reasons why you should fear the existence of artificial intelligence and how to ban your students from using it.

Within those parameters, I think there's a lot that we can talk about. I have some particular interests and thoughts about artificial intelligence when it comes to what we're doing in the academy and how that relates to what we should be doing. What is it that we do when we educate, and what are the ways that it works best to do it? And how can artificial intelligence play into that? What about you, Jacob? What are your interests?

Jacob Fortman

It's a fascinating subject, because I think that when you look at AI within the broader discourse of education technology, it plays into a lot of the themes that you see in education technology historically. So I'm always interested in, when we're thinking about innovating education, improving education--innovating towards what, and improving towards what ends? AI brings up these really fundamental questions, as you're saying, about why we educate. What is the purpose of education in society? Some of the fragments of ideas buzzing through my mind have been about the emphasis on efficient learning. Does learning always need to be a process that is totally efficient? And if so, does that contradict notions of humanizing education or slowly learning together? Does AI always need to be seen as a tool for preparing learners for the job market? And if so, is it the role of education to be a job training center, versus a liberal arts education dedicated to helping people be creative and collaborative and democratic citizens? I think AI is implicated in all these types of questions about why we educate, why we learn, and what the purpose of this institution is. So I'm really excited to dive in more.

Matthew Roberts

So my background, in part, is as an undergrad philosophy major, and my time in grad school had a heavy theory component to it. So, yeah, I always kind of try to think about the implications, or think underneath the surface. And I think there's just a lot of room in this conversation about artificial intelligence to start asking those questions, because a lot of the conversation that I'm seeing--I've listened to a bunch of podcasts about AI now, I've got a growing list of readings that I've bookmarked away--a lot of it just glosses along at a very surface level. And I think there are some really important questions that we need to be talking about.

And I'll show my cards here. Personally speaking, I feel like I'm a little bit of an unusual case sometimes, because technology has always been a part of my life. I'm old enough that the computers we used in elementary school were much older than anything you see now. I mean, I'm from the era of the Apple IIc and the Apple IIe, if that's even meaningful to people who are listening at home. And I remember a time when we had, like, a couple of computers in the whole school, or maybe one in the classroom. Definitely not the one-to-one programs that we have now. I had some very advanced thinkers among my elementary school teachers who had us using computers to the extent that that was really possible at the time. I started programming in fourth grade, which is more impressive if you figure that this was in the middle of the 1980s. Technology has always been there with me.

But at the same time, I'm often skeptical. I'm skeptical of miracle claims of what technology can do. I'm skeptical of how big a change something is going to cause. And I think we're definitely at the moment now where a little bit of skepticism and a little bit of thinking things through is hopefully interesting to other people besides just me and Jacob.

Jacob Fortman

Yeah, I think for me it comes down to this point--I bring this up a lot--but technological innovation doesn't always translate into educational innovation. And so I think that remaining skeptical, and remaining a little bit hesitant to always be the first to adopt new technologies, is a healthy thing, particularly for us in the academy. Because the university and the academy have a very long history, dating back to the founding of the United States--and even beyond that; if you look internationally, the academy and the universities have a much longer history. But within the United States, you look at the Harvards and the Yales--these are very old institutions with very long histories to them. And I think we need to think really critically about the technology that we're adopting, why we're adopting it, and how it relates to the larger history of universities and what we're here to do.

Matthew Roberts

And I'm sure we'll come back to this at some point, but I've harbored a suspicion for a long time that one of the things that might be important for us in the academy is to be very conscious of how we always have one foot in the future, but one foot in the past. Because when it comes to having that foot in the past, the knowledge that we've built as communities, as societies, is something that we treat with respect--but also something that we offer on towards the future, while at the same time not knowing what the future is going to bring. There aren't a lot of other places within our society where we need to both be ready to move forward and really have an ethical responsibility to think about how we're moving forward.

Jacob Fortman

Yeah. Yeah. Well said. Absolutely.

Matthew Roberts

What are we looking forward to over the course of this season? Well, I think the very first thing that we need to do--and Jacob and I were wondering, should we just talk about it now? But no, we're going to let you digest this not-too-long intro episode. When we come back, the first thing we need to talk about is just: what is artificial intelligence? I was thinking about this the other day. I think in general, I am a fan of artificial intelligence, broadly considered. But here's the thing about a lot of the conversation that's happening right now. You don't need to be a computer scientist--I'm by no means one; my Ph.D. is in political science, and all my computer science knowledge is kind of self-developed. You don't need to be a computer scientist, but it's important, I think, to understand that a lot of the hype that we're seeing right now--and I mean hype a little pejoratively, but not purely--a lot of what's coming out is really a fervor for a particular flavor of artificial intelligence. And the important thing is, it's a different kind from a lot of the artificial intelligence that, to be honest, researchers have been working on for decades.

Artificial intelligence is not a new concept, but this idea of generative artificial intelligence is meaningfully different. And I think it's meaningfully different in a way that can be problematic based upon what people are expecting it to do. So we need to talk about that. One thing that I do want to say is that Jacob and I want to make this a broader conversation. So we're hoping to bring in some guests who have deeper knowledge than we do in some of these areas, because we're not the only ones with interesting things to say. But especially when it comes to understanding what AI and generative AI are, and then that term which you're also hearing--large language models, LLMs--we need to get those terms out on the table, because I think once we have a good understanding of what those are, that's when we can really start digging in and thinking about the implications of some of these things.

Because I think there's certainly one conversation which is about AI in education, but the conversation about generative AI in education is almost a little bit different, because a lot of people are expecting things to be possible, and I want to take some time to think about whether they really are, or should be.

Jacob Fortman

I have a computer science minor from undergrad, but most of my technical understanding of AI, gen AI, or machine learning is very much self-taught, self-developed through YouTube videos or scattered online readings. But I'm really fascinated by this idea, because it makes me wonder: what does a basic AI literacy look like? What are the terms and things that define a basic AI literacy that the broad layperson should at least know about? Does that include understanding the process of labeling? Does it include differentiating between supervised and unsupervised machine learning models? Do we need to understand the distinction between discriminative versus generative deep learning models? I genuinely don't know. And to me it seems like we're kind of in the Wild West when it comes to defining what it is that a layperson needs to know when we're thinking about what generative AI is and how it's different from other types of AI. I mean, look at computer science. I think there's a much better defined sense there of what a layperson should know. In a computer science program, freshman-level classes are teaching Python, the basics of object-oriented programming. And I think that's kind of like a home base for many people. But I don't know if we have that same kind of home base when it comes to understanding machine learning and large language models and generative AI.

So I think that's one really interesting puzzle I'm working around. When I was in grad school, I was in a seminar called The Politics of Code.

Matthew Roberts

Ooh.

Jacob Fortman

And in the very first seminar, there was this question--because I was with humanities majors, with English PhDs and communications PhDs, and none of them knew how to code, and we were in a class called The Politics of Code. There was this large conversation around: do I need to understand algorithms and coding in order to make serious claims about the political and ethical implications of coding and algorithms in our daily lives? And somebody made the claim, "I don't need to understand how an engine works to understand the impact that cars have had on American society and on our daily lives." And while I somewhat resonate with that, I would also say that most people do have a lay understanding of how engines work. We've established the basic foundational knowledge that most people should have about engines: they take gas, they have cylinders, they combust the gas in the cylinders, and that propels the pistons. So people might not think they understand engines, but what a basic understanding of engines looks like is so well defined that we may not feel like we have that technical understanding even when we do. And that's kind of where I'm at with generative AI now: what is the equivalent there, the basic understanding that we can all just accept, so that we can then have the larger philosophical or ethical conversations around it? I'm really excited to dive into that more. And I will not pretend to be the technical expert here on any of this, so hopefully we can lean on some guests to fill us in there.

Matthew Roberts

While you were saying that, two things came to mind. One is--I'm not even sure where it's from. I want to say maybe it was from an episode of Parks and Rec, maybe it wasn't. But you're talking about the level of understanding people have of internal combustion engines, and in this particular bit--whether it was a TV show or a movie--they were talking about electric cars. One of the common perspectives on alternative fuel vehicles, right, is that they're better for the planet because there isn't internal combustion. But the point this bit was making was that the average person doesn't think about where the electricity that powers the car comes from. We don't have magic trees that grow electricity like jellybeans or something, right? The electricity still has to come from somewhere. So if you really do want to understand the impact, you've got to be able to think about that. We need to bring that information in to sort of broaden the conversation.

The second thing I was thinking--and I really want the syllabus from that Politics of Code class, because that's like me: I said my degree's in political science, but I'm somehow in this world of technology. I think there are a lot of parallels between these issues about how we relate to and what we ask of generative AI and a lot of other things. The moment in our communal history just before this one that's relevant is the rise of somewhat less capable artificial assistants like Siri or Alexa. I see my own kids--if they have a question, they'll ask it of the Amazon Echo we have in the room. There's an information literacy bit there, right? Just because Alexa says this is the answer to something, on what basis do we accept that? Librarians in higher education--and in K-12, too--have for years been trying to make sure that we have information literacy as part of our curriculum, because just because you click on the first Google result doesn't mean you necessarily have the right or correct answer to something. We first faced that when search engines came into play, because you no longer had to go to specialized things like search databases--you had this idea of generalized search. Then we got smart assistants, and now we have tools that potentially are even better able to answer our questions and give us information.

Side note, side tangent--because, you know, I'm a political scientist, I think a lot about democracy. And a lot of these important decisions come down to: who are we trusting to do what for us?

Alexa

Something went wrong. Please try again.

Matthew Roberts

I've got an Amazon Fire tablet on my desk, and apparently Alexa was trying to figure out what to do with that. That's great. I couldn't have scripted it better. But, you know, in democracy, we've made the decision through hundreds of years of political thought that the way we should run societies is by having decisions made in a collective sense, through the consent of the governed. Now, in reality, that doesn't happen, right? This is not going to be a political commentary, but we all know that the average person would prefer to do other things with their time than analyze legislation, and there are all sorts of systemic problems with getting the voice of the public into the halls of government. But I think there's a broader issue here that we deal with as a society, which is: who do we trust? Why do we trust them? And on what basis do we trust them?

Ooh, big questions. Let's see, other things we've started thinking about. Jacob mentioned the idea of efficiency in teaching and learning. And you see this on both sides, right? I don't think students should be spending extra time on things that don't make sense, and faculty feel the same way. Here at Grand Valley, we use the Blackboard Learn Ultra learning management system, and they've started adding AI features. Just the other day I used one that helped me choose an image to connect to a learning module in my course. That was great. It saved me a little bit of time, because I was used to just pulling up the Unsplash website by itself, but it did it for me. But they're rumored to be adding features that will help save faculty time by doing things like writing learning modules or writing quizzes. I have lots of questions. Those of us who teach have, in many instances, put lots of dedicated effort into getting into the position where we get to teach. The knowledge and skills that we've developed are kind of our credentials for the job. I haven't seen where most AI systems have gone to grad school. They don't have CVs. So yeah, lots of things we could talk about there. We like efficiency, but when is efficiency a problem?

Jacob Fortman

You know, for me it's the question of efficiency, and at what point it betrays the relationship building between students and instructors that is so important, or between students and students in the classroom. So, for instance, when you mentioned AI that's developing course material for instructors as a way of making their jobs more efficient--that, to me, seems like it's eliminating some of the very foundational human work of building a relationship with your students, where you develop questions that are responsive to the difficulties or questions you've seen in the discussion before. It is perhaps taking away this opportunity to connect with your students on a human level. I'm totally in favor of efficiency, or of eliminating tasks that are less important or less necessary in the teaching and learning environment. But at a certain point--I don't think efficiency is an end in itself. It's something we're always going to gravitate towards, but also something that we should be critically examining, thinking about what is being foreclosed when we emphasize efficient learning processes, and whether the thing we're foreclosing is an important part of our learning environment.

Matthew Roberts

And it's not even just in education. Just prior to our getting together here on microphone, I was part of a different conversation about AI, and someone mentioned that there are starting to be tools that doctors, for example, can use in preparing those after-visit summaries--taking the visit notes and putting them together into some sort of form. And medicine is another field where relationship, in theory, is part of what's going on. But my joke was: well, if the doctor is halving their work because now the AI is writing up all this stuff, does that mean we get a discount on the bill? I meant that frivolously, but there's a certain seriousness to it, too, right? Because we pay for doctors' expertise, and we're told that the expertise of doctors, the scientific expertise of the research and development process for medications--these are all things that explain, at least in the American context, why prices for health care and prescription medications are so high. In theory, if AI has some role to play--and I say "if," because I think Jacob and I are trying to lay out that maybe we shouldn't just hand over all these opportunities right away; we should think about them--in theory, there should be a benefit that accrues not just to shareholder bottom lines, but also to everybody else who's a part of the system. That just occurred to me.

Jacob Fortman

Yeah, absolutely.

Matthew Roberts

So, other things. There are lots of potential legal issues that could pop up related to AI. We're already starting to see well-known authors who are trying to find ways to bring legal actions against the companies that are developing large language models, often under the idea that because these models are being trained on large volumes of text--which may include even the entire corpus of someone's published novels--there might be a potential kind of copyright infringement involved there. And I do teach a class that's about the law, and I teach it to people who have no experience in the law, and one of the first things they're often amazed to discover is that sometimes the answer to whether something is legal or not doesn't exist until a court actually starts thinking about it. So, is the training of a large language model on copyrighted text a copyright infringement? Well, we can talk about it, but right now it's unclear. The first time this kind of thing hits the legal system, we'll start to get some sort of answer on that.

But here in Michigan, we have a court case right now involving students who--I'm trying to remember all the schools; I think they're suing Central Michigan, Northern Michigan, and maybe it was Eastern Michigan. If you're a representative for any of those institutions and I just suggested you're being sued and you're not, I apologize; I did not look this up beforehand. But the heart of the matter is that the students are claiming that the move to online instruction at the start of the COVID pandemic represented a kind of breach of contract: the students were expecting, and were supposed to receive, an in-person delivery model for their education, and they were forced to go online. I don't know what's going to happen. Personally speaking, I think it might end up being a hard case to make, but it's laying out this idea that, in theory, how students do or do not receive what they're expecting in their education could be a legal matter. And if that's the case, it probably won't be terribly long before we see the first case where a student did or did not use generative AI in completing coursework and was failed, suspended, or expelled because of it. There have been situations like that in the past related to general academic dishonesty, and I presume the same thing will happen here, because then it's a question of what can be proved and what is appropriate. It's going to happen.

And at some point we'll likely see a case where, you know, what happens when a student discovers that their faculty member generated module three and their entire final exam with artificial intelligence? Regardless of whether it's a good idea or not, from a professional point of view, there's room for problems there.

Jacob Fortman

Yeah, I think you've already seen some of the legal difficulties there. There have been some cases already with Tesla and automated driving cars: when they crash into something or somebody dies, who's culpable in these instances? So the question of culpability is really interesting to me. Is it the AI programmer's fault? Is it the fault of the instructor who implemented the AI but then failed to carefully track it for mistakes? Yeah, lots of really interesting legal implications there.

Matthew Roberts

Yeah. And without getting too deep into the weeds, and acknowledging that I don't fully understand how Tesla or any of the automakers are developing their fully automated driving--to my knowledge, those are systems with a heavier involvement of human, let's call them trainers, right? In terms of making sure that the system understands what it's doing--that's the supervised learning that is sometimes talked about with machine learning. Large language models don't have that. I mean, aside from a little bit on the back end, where apparently there are categories of people who give prompts to ChatGPT, and if it says something that's too offensive or too dangerous, they have a way to say, no, don't give that answer. The important difference there is that that's not in the training. And I'm just thinking about this--there's a lot of conversation about bias in artificial intelligence, for example, when it comes to facial detection algorithms. If you have a machine learning model which is basically training itself by absorbing all this data and then coming up with some sort of internal representation of it, and we have people after the fact limiting what responses it can give, we're not actually removing bias from the model. In a sense, what we're doing is saying, "Model, we know you're biased, just be quiet about it." I'm not sure that sits well with me. But legal issues, issues of bias, efficiency... I'm also constantly interested in the topic of expertise and knowledge. I think we hinted at this already. It's something that is always lurking under the surface in education.

And if you're part of conversations about pedagogy in higher education, we've been talking about concepts like whether educators should be the sage on the stage or the guide on the side, you know? To what extent do we emphasize their expertise in some areas, and to what extent do we want to step back? I think those are all complicated conversations, but they get back towards something we've already talked about, which is: what does it mean to learn, and what does it mean to help someone learn? And then, if we're going to welcome artificial intelligence in as part of our learning community, how do we do that in a way that's appropriate? These are some of the topics we're thinking about. We don't have, like, a 1-800 number that you can call in to, but we would love to talk about things that you're interested in as well. I'll try to make sure that as the season goes on, we get information out on the socials, as they say. But, Jacob, I'm looking forward to these conversations.

Jacob Fortman

Yeah, absolutely. I think there are a lot of really interesting and complex conversations about AI and its relationship to education that we can dive into. So it's gonna be great.

Matthew Roberts

Yeah. And I hope it's timely, too. One of the things that I said to Jacob when I was proposing this was: we're at the point where we're seeing all this conversation--like I said, some of it's becoming formulaic--but we're getting to the point now where a lot of people are realizing we need to have serious conversations. Here at Grand Valley, where we work, there hasn't been any official decision about an AI policy or anything. But there are lots of people in lots of places who are talking to each other and hoping that, even if there's not an official policy, there's at least some guidance, because people are curious, people are uncertain. Some of that is largely driven by the fear side of things, right? But I also think there's a lot of possibility for this moment of dealing with the AI reality to be positively transforming for education. There are a lot of old practices, a lot of things that have been sort of standardized as part of education. There's another podcast out there--you know, maybe if we shout out to them, they'll shout out to us. I believe it's called Dead Ideas--either Dead Ideas in Teaching and Learning or Dead Ideas in Higher Education. And the title of their most recent episode, at least, led me to believe it was going to be a conversation about how some things have hung on in pedagogy--and maybe there's an opportunity now, with the challenge that AI is presenting, to rethink some of what we've done. A lot of people don't change the way they teach easily. If something like the fear of AI can help lead to change--which, from my perspective, means positive change focused on learning--I think I might be okay with that.

At any rate, please tune in. You know what? "Tune in." I mean, can you tune in on a podcast? You tune in on a radio. But I don't know. Terminology.

Jacob Fortman

Yeah. What does one do with a with a podcast? Yeah.

Matthew Roberts

You don't say "dial in," because you don't have a dial. Well, be sure to open your podcast app. Hit the subscribe button. Like. Leave a review. All those things that people say at the end of videos. We look forward to having you as part of our conversation over the course of the season. And...yeah.

Jacob Fortman

Awesome, yeah. Looking forward to it.

Matthew Roberts

Have a good day, Jacob.

Jacob Fortman

You too.

Matthew Roberts

Alexa, end the podcast.

Alexa

I'm sorry, but I don't have that skill.

Matthew Roberts

Oh, Alexa. You're too modest.

Alexa

T-Squared, a teaching and technology podcast, is a production of Grand Valley State University's E-learning and Emerging Technologies Team. Our theme music is from Bill Ryan and the Grand Valley State University New Music Ensemble.

No animals were harmed in the creation of this podcast.

This episode was transmitted on 100% recycled bits.

Caution--hot coffee is hot.
