37 min

A Dose of Technoskepticism, with Eric Covey
T-Squared: A Teaching and Technology Podcast

    • Education

In the second part of their conversation, Jacob and Matt continue their discussion of AI ethics with Dr. Eric Covey from Grand Valley State University’s Department of History.

To help find a truly international solution to climate change, they enlist the help of ChatGPT itself. Reading through its ideas leads to a conversation about whether large language models actually contain knowledge or something else entirely. That discussion raises the question of how creativity relates to randomness, and whether LLMs can truly be creative.

The conversation turns a bit more technical as they consider the idea of information compression as a way to understand how large language models operate. The analogy helps illuminate why there are two different types of bias present in LLMs.

The episode includes a discussion of the concept of “openness” and the financial costs involved, raising the question of how “open” generative artificial intelligence models are—and whether they can ever actually be truly open.

In the final moments of the episode, Eric relates a story about a student who used ChatGPT and didn’t notice the problems in the generated text. The group returns to some of the broad themes discussed, including the problem of what counts as “good enough,” the need for technoskepticism, and the importance of digital literacy in the higher ed curriculum.
