T-Squared: a Teaching and Technology Podcast

Season 3, Episode 3: A Dose of Technoskepticism, with Eric Covey

Episode Summary

In the second part of their conversation, Jacob and Matt continue their discussion of AI ethics with Dr. Eric Covey from Grand Valley State University's Department of History.

In search of a truly international solution to climate change, they enlist the help of ChatGPT itself. Reading through its suggestions leads to a conversation about whether large language models actually contain knowledge or merely something that resembles it. That discussion of knowledge leads to the question of how creativity relates to randomness, and whether LLMs can truly be creative.

The conversation turns a bit more technical as they consider information compression as a way to understand how large language models operate. The analogy helps illuminate why two different types of bias are present in LLMs.
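
As a rough, hypothetical illustration of that compression analogy (this sketch is ours, not code from the episode), the short Python example below "compresses" a tiny made-up corpus by keeping only its most frequent word pairs; the rare phrasing is the first thing to disappear, which is one way a compressed model can end up echoing the majority voice of its training data.

from collections import Counter

# Toy sketch of the "LLMs as lossy compression" analogy.
# We "compress" a tiny corpus by keeping only its most
# common word pairs and watch what gets lost.
corpus = [
    "the model answers in english",
    "the model answers in english",
    "the model answers in english",
    "the model answers in french",  # the rare case
]

pair_counts = Counter()
for line in corpus:
    words = line.split()
    pair_counts.update(zip(words, words[1:]))

# Keep only the four most frequent pairs: our lossy "compressed" model.
compressed = dict(pair_counts.most_common(4))
print(compressed)

# The rare pair ("in", "french") is dropped: lossy compression preserves
# the majority pattern and quietly discards minority cases, one of the
# two faces of bias discussed in the episode.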

The episode includes a discussion of the concept of "openness" and the financial costs these systems involve, raising the question of how "open" generative artificial intelligence models really are, and whether they can ever be truly open.

In the final moments of the episode, Eric relates a story about one of his students who used ChatGPT and didn't notice the problems in the generated text. The group then returns to some of the episode's broad themes, including the problem of what counts as "good enough", the need for technoskepticism, and the importance of digital literacy in the higher education curriculum.

Topics Discussed

Finding international solutions to climate change

Asking ChatGPT how to generate international cooperation

Is knowledge an emergent property of LLMs?

LLMs as a form of information compression

LLMs, creativity, and pseudo-creativity

The opacity of training data and the problems it contains

The two faces of bias in training data

Apple TV+'s Foundation series and the idea of forced forgetting

The accuracy of content on Wikipedia

"Good enough" and credibility

Openwashing (a term often credited to Michelle Thorne)

Can large neural networks ever be truly open or fully understood?

Can AI ever be separated from the need for lots of cash?

A Terminator reference

Students using LLMs without being transparent about it

ChatGPT-4 switching to French?

Why don't students read their AI-generated output?

Why do we settle for mediocrity and "good enough"?

Technoskepticism: being aware of what a tool does and the implications of using it

Digital natives: a myth

The layers of digital literacy