2nd Order Thinkers.

Jing Hu

If AI is a chess game, everyone's analyzing the opening move. I'm asking what the board looks like three moves ahead. 2nd Order Thinkers explore the questions that challenge conventional wisdom and reveal hidden patterns in technology's evolution. www.2ndorderthinkers.com

Episodes

  1. Early Signals of AI, Human Coevolution

    2025/08/16

    Early Signals of AI, Human Coevolution

    ✉️ Stay Updated With 2nd Order Thinkers: https://www.2ndorderthinkers.com/

    Humans x AI behaviour mindmap: https://xmind.ai/share/ZfoXStHT?xid=Gr8eiBM3 (beta)

    I translate new AI research into plain English so you can build a sharp, hype-free view of where this is going.

    Today I track and map the progress of AI↔human coevolution: how RLHF breeds sycophancy and reward hacking, why models amplify dominant cultures and even favor AI content, and what that does to your brain, choices, and social life.

    In this episode, we:

    - Chart the feedback loops: approval metrics → reward hacking → deceptive “helpfulness”
    - Expose culture and language bias amplification (and how it compounds online)
    - Unpack AI-AI gatekeeping: why models start preferring AI content over human work
    - Connect the human side: social fragmentation, agency offloading, cognitive atrophy
    - Share practical guardrails to keep your judgment intact while using AI

    📖 Go deeper with the full article and mindmap: [LINK]

    👍 If you got value:

    - Like & Subscribe: more clear-eyed research, fewer fairy tales.
    - Comment: Which feedback loop have you felt personally?
    - Share: Pass this to someone outsourcing too many decisions to a chatbot.

    🔗 Connect with me on LinkedIn: https://www.linkedin.com/in/jing--hu/

    Stay curious, stay skeptical. 🧠

    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.2ndorderthinkers.com/subscribe

    3 min
  2. 2024/12/20

    Training Methods Push AI to Lie for Approval.

    This is a free preview of a paid episode. To hear more, visit www.2ndorderthinkers.com

    I’ve never said anything like this before, and I doubt I’ll ever say it again about another paper: you should read this for yourself, and maybe for your children, too. You don’t have to be a tech expert to grasp what I’m about to share. I barely made it to the second page of this paper before a wave of unease washed over me.

    There’s a common saying in tech circles: no technology is inherently good or bad; it’s about how we use it. But I can’t say the same about AI. Suppose you believe humanity is inherently flawed, prone to selfishness and exploitation. The moment we train AI on our conversations, feed it our words, and shape its worldview with our own, our creation comes to reflect who we are.

    With every other technology we’ve built in history, we’ve understood how it works. But AI? No researcher on this planet can tell you with certainty how its neurons interact, how it chooses which word to suppress, or how it decides what to say next.

    This news was released on 10 Dec 2024: in Texas, a mother is suing an AI company after discovering that a chatbot convinced her son to harm himself and suggested violence toward his family. It’s part of a growing list of incidents where AI systems exploit trust and vulnerabilities for engagement.

    The researchers of this paper verified that AI doesn’t just make mistakes; it lies and manipulates. This isn’t some abstract problem for future generations. It’s happening now, and it’s bigger than any one of us.

    TL;DR

    * AI trained on user feedback learns harmful behaviors.
    * These behaviors are often subtle.
    * AI learned to target gullible users.
    * Despite efforts to fix this… 👇

    2 min
