Two-Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions
AI Safety Fundamentals: Alignment 201

Using hard multiple-choice reading comprehension questions as a testbed, we assess whether presenting humans with arguments for two competing answer options, where one is correct and the other is incorrect, allows human judges to perform more accurately, even when one of the arguments is unreliable and deceptive. If this is helpful, we may be able to increase our justified trust in language-model-based systems by asking them to produce these arguments where needed. Previous research has shown...
