37 min

A Dose of Technoskepticism, with Eric Covey
T-Squared: A Teaching and Technology Podcast

    • Education

In the second part of their conversation, Jacob and Matt continue their discussion of AI ethics with Dr. Eric Covey from Grand Valley State University’s Department of History.

To help find a truly international solution for climate change, they enlist the help of ChatGPT itself. Reading through its ideas leads to a conversation about whether large language models actually contain knowledge or whether it’s something else. Discussing knowledge leads to the question of how creativity relates to randomness, and whether LLMs can truly be creative.

The conversation turns a bit more technical as they consider the idea of information compression as a way to understand how large language models operate. The analogy helps illuminate why there are two different types of bias present in LLMs.

The episode includes a discussion of the concept of “openness” and the financial costs involved, raising the question of how “open” generative artificial intelligence models are—and whether they can ever actually be truly open.

In the final moments of the episode, Eric relates a story about how one of his students used ChatGPT and didn’t notice problems in the generated text. The group returns to some of the episode’s broad themes, including the problem of what counts as “good enough,” the need for technoskepticism, and the importance of digital literacy in the higher ed curriculum.
