37 min

A Dose of Technoskepticism, with Eric Covey
T-Squared: a Teaching and Technology Podcast

    • Education

In the second part of their conversation, Jacob and Matt continue their discussion of AI ethics with Dr. Eric Covey from Grand Valley State University’s Department of History.

To help find a truly international solution to climate change, they enlist the help of ChatGPT itself. Reading through its ideas leads to a conversation about whether large language models actually contain knowledge or merely something that resembles it. Discussing knowledge raises the question of how creativity relates to randomness, and whether LLMs can truly be creative.

The conversation turns a bit more technical as they consider information compression as a way of understanding how large language models operate. The analogy helps illuminate why two different types of bias are present in LLMs.

The episode includes a discussion of the concept of "openness" and the financial costs involved, raising the question of how "open" generative artificial intelligence models really are, and whether they can ever be truly open.

In the final moments of the episode, Eric relates a story about one of his students who used ChatGPT and failed to notice problems in the generated text. The group returns to some of the episode's broad themes, including the problem of what counts as "good enough," the need for technoskepticism, and the importance of digital literacy in the higher ed curriculum.

