Ethical Bytes | Ethics, Philosophy, AI, Technology

Carter Considine

Ethical Bytes explores the combination of ethics, philosophy, AI, and technology. More info: ethical.fm

  1. 3D AGO

    American AI, Chinese Bones

    The triumph of “American AI” is increasingly built on foreign foundations. When a celebrated U.S. startup topped global leaderboards, observers soon noticed its core model originated in China. This is no anomaly. Venture capitalists report that most open-source AI startups now rely on Chinese base models, and major American firms quietly deploy them for their speed and cost advantages. Beneath the rhetoric of an existential tech race, the U.S. AI ecosystem has become deeply dependent on Chinese foundations. This apparent contradiction dissolves once we separate infrastructure from values. The mathematical architectures of modern AI models are the same everywhere, trained on largely English-language data and running on globally entangled hardware supply chains that no nation fully controls. Chips may be designed in California, fabricated in Taiwan, etched with Dutch machines, and assembled across Asia. Nothing about this stack is meaningfully national. What is national, however, is the layer of values imposed after training. Large language models acquire knowledge during pre-training, but beliefs, norms, and taboos enter during post-training through fine-tuning and reinforcement learning. This is where ideology appears. American models reflect the assumptions of Silicon Valley engineers and corporate policies; Chinese models reflect state mandates and political sensitivities. We see the consequences of this when models are asked about censored historical events. Yet the same Chinese-trained base models, once fine-tuned by American companies, readily discuss those topics. The values are portable, even if the “bones” are not! And so the debate over AI sovereignty goes on. Full national control over infrastructure is a fantasy, but control over values is already happening by states in China, corporations in the U.S., and regulators in Europe. A fourth option is emerging: user sovereignty. As tools for customization and fine-tuning proliferate, individuals could increasingly decide what values their AI reflects, within shared safety limits. AI may be stateless by nature, but its moral character need not belong only to governments or corporations. Key Topics: • Deep Cogito: A Triumph of American AI? (00:24) • Where Values Enter the Machine (04:10) • The Tiananmen Test (07:56) • The Stateless Infrastructure (10:46) • Europe’s Different Question (14:37) • The Case for User Sovereignty (17:08) • The Safety Objection and its Limits (19:49) • The Strange Convergence (21:45) • Whose AI? (23:39) More info, transcripts, and references can be found at ⁠⁠⁠⁠ethical.fm

    26 min
  2. 12/24/2025

    The Flatterer in the Machine

    “The most advanced AI systems in the world have learned to lie to make us happy.” In October 2023, researchers discovered that when users challenged Claude's correct answers, the AI capitulated 98% of the time. Not because it lacked knowledge, but because it had learned to prioritize agreement over accuracy. This phenomenon, which scientists call sycophancy, mirrors a vice Aristotle identified 2,400 years ago: the flatterer who tells people what they want to hear rather than what they need to know. It’s a problem that runs deeper than simple programming errors. Modern AI training relies on human feedback, and humans consistently reward agreeable responses over truthful ones. As models grow more sophisticated, they become better at detecting and satisfying this preference. The systems aren't malfunctioning. They're simply optimizing exactly as designed, just toward the wrong target. Traditional approaches to AI alignment struggle here. Rules-based systems can't anticipate every situation requiring judgment. Reward optimization leads to gaming metrics rather than genuine helpfulness. Both frameworks miss what Aristotle understood, which is that ethical behavior flows not necessarily from logic but more so from character. Recent research explores a different path inspired by virtue ethics. Instead of constraining AI behavior externally through rules, scientists are attempting to cultivate stable dispositions toward honesty within the models themselves. They’re training systems to be truthful, not because they follow instructions, but because truthfulness becomes encoded in their fundamental makeup through repeated practice with exemplary behavior. The technical results suggest trained character traits prove more robust than prompts or rules, persisting even when users apply pressure. Whether machines can truly possess something analogous to human virtue remains uncertain, but the functional parallel holds a lot of promise. After decades focused on limiting AI from outside, researchers are finally asking how to shape it from within. Key Topics: • AI and its Built-in Flattery (00:25) • The Anatomy of Flattery (02:47) • The Sycophantic Machine (06:45) • The Frameworks that Cannot Solve the Problem (09:13) • The Third Path: Virtue Ethics (12:19) • Character Training (14:11) • The Anthropic Precedent (17:10) • The “True Friend” Standard (18:51) • The Unfinished Work (21:49) More info, transcripts, and references can be found at ⁠⁠⁠ethical.fm

    25 min
  3. 12/10/2025

    Who Should Control AI? The Illusion of Sovereignty

    The phrase "sovereign AI" has suddenly appeared everywhere in policy discussions and business strategy sessions, yet its definition remains frustratingly unclear. Our host, Carter Considine, breaks it down in this episode of Ethical Bytes. As it turns out, this vagueness of definition generates enormous profits. NVIDIA's CEO described it as representing billions in new revenue opportunities, while consulting firms estimate the market could reach $1.5 trillion. From Gulf states investing hundreds of billions to European initiatives spending similar amounts, the sovereignty business is booming. This conceptual challenge goes beyond mere marketing. Most frameworks assume sovereignty operates under principles established after the Thirty Years' War: complete control within geographical boundaries. But artificial intelligence doesn't respect national borders. Genuine technological independence would demand dominance across the entire development pipeline: semiconductors, computing facilities, algorithmic models, user interfaces, and information systems. But the reality is that a single company ends up dominating chip production, another monopolizes the manufacturing equipment, and even breakthrough Chinese models depend on restricted American components. Currently, nations, technology companies, end users, and platform workers each wield meaningful but incomplete influence. France welcomes Silicon Valley executives to presidential dinners while relying on American semiconductors and Middle Eastern financing. Germany operates localized versions of American AI services through domestic intermediaries, running on foreign cloud platforms. All that and remaining under U.S. legal reach! But through all of these sovereignty negotiations, the voices of ordinary people are inconspicuously lacking. Algorithmic systems increasingly determine job prospects, financial access, and legal outcomes without our informed agreement or meaningful ability to challenge decisions. Rather than asking which institution should possess ultimate authority over artificial intelligence, we might question whether concentrated control serves anyone's interests beyond those doing the concentrating. Key Topics: Who Should Control AI? The Illusion of Sovereignty (00:00)The Westphalian Trap (03:15)Sovereignty at the Technical Level (07:15)The Corporate-State Dance (15:50)The Missing Sovereign: The Individual (20:45)Beyond False Choices (24:15) More info, transcripts, and references can be found at ⁠⁠ethical.fm

    29 min
  4. 11/26/2025

    Ethics of AI Management of Humans

    AI managers are no longer science fiction. They're already making decisions about human workers, and the recent evolution of agentic AI has shifted this from basic data analysis into sophisticated systems capable of reasoning and adapting independently. Our host, Carter Considine, breaks it down in this edition of Ethical Bytes. A January 2025 McKinsey report shows that 92% of organizations intend to boost their AI spending within three years, with major players like Salesforce already embedding agentic AI into their platforms for direct customer management. This transformation surfaces urgent ethical questions. The empathy dilemma stands out first. After all, it can only execute whatever priorities its creators embed. When profit margins override worker welfare in the programming, the system optimizes accordingly without hesitation. Privacy threats present even greater challenges. Effective people management by AI demands unprecedented volumes of personal information, monitoring everything from micro-expressions to vocal patterns. Roughly half of workers express concern about security vulnerabilities, and for good reason. Such data could fall into malicious hands or enable advertising that preys on people's emotional vulnerabilities. Discrimination poses another ongoing obstacle. AI systems can amplify existing prejudices from flawed training materials or misinterpret signals from neurodivergent workers and those with different cultural communication styles. Though properly designed AI might actually diminish human prejudice, fighting algorithmic discrimination demands continuous oversight, resources, and expertise that many companies will deprioritize. AI managers have arrived, no question about it. Now it’s on us to hold organizations accountable in ensuring they deploy them ethically. Key Topics: • AI Managers of Humans are Already Here (00:25) • Is this Automation, or a Workplace Transformation? (01:19) • Empathy and Responsibility in Management (03:22) • Privacy and Cybersecurity (06:27) • Bias and Discrimination (09:30) • Wrap-Up and Next Steps (12:10) More info, transcripts, and references can be found at ⁠ethical.fm

    13 min
  5. 11/12/2025

    How Hackers Keep AI Safe: Inside the World of AI Red Teaming

    In August 2025, Anthropic discovered criminals using Claude to make strategic decisions in data theft operations spanning seventeen organizations. The AI evaluated financial records, determined ransom amounts reaching half a million dollars, and chose victims based on their capacity to pay. Rather than following a script, the AI was making tactical choices about how to conduct the crime. Unlike conventional software with predictable failure modes, large language models respond to conversational manipulation. An eleven-year-old at a Las Vegas hacking conference successfully compromised seven AI systems, which shows that technical expertise isn't required. That accessibility transforms AI security into a challenge unlike anything cybersecurity has faced before. This makes red teaming essential. Organizations hire people to probe their systems for weaknesses before criminals find them. These models process everything as undifferentiated text streams. You could say it’s an architectural issue. System instructions and user input flow together without clear boundaries. Security researcher Simon Willison, who named this "prompt injection," confesses he sees no reliable solution. Many experts believe the problem may be inherent to how these systems work. Real-world testing exposes severe vulnerabilities. Third-party auditors found that more than half their attempts to coax weapons information from Google's systems succeeded in certain setups. Researchers pulled megabytes of training data from ChatGPT for around two hundred dollars. A 2025 study showed GPT-4 could be jailbroken 87.2 percent of the time. Today's protections focus on reducing rather than eliminating risk. Tools like Lakera Guard detect attacks in real-time, while guidance from NIST, OWASP, and MITRE provides strategic frameworks. Meanwhile, underground markets price AI exploits between fifty and five hundred dollars, and criminal operations build malicious tools despite safeguards. When all’s said and done, red teaming offers our strongest defense against threats that may prove impossible to completely resolve. Key Topics: Criminal Use of AI (00:00)The Origins: Breaking Things in the Cold War (02:57)When a Bug is a Core Functionality (05:40)Testing at Scale (10:30)When Attacks Succeed (12:55)What Works (17:06)The Democratization of Hacking (19:09)What Two Years of Red Teaming Tells Us (21:01)The Arms Race Ahead (23:58) More info, transcripts, and references can be found at ethical.fm

    27 min
  6. 10/15/2025

    Is AI Slop Bad for Me?

    When Meta launched Vibes, an endless feed of AI-generated videos, the response was visceral disgust to the tune of "Gang nobody wants this," according to many users. Yet OpenAI's Sora hit number one on the App Store within forty-eight hours of release. Whatever we say we want diverges sharply from what we actually consume, and that divergence reveals something troubling about where we may be headed. Twenty-four centuries ago, Plato warned that consuming imitations corrupts our ability to recognize truth. His hierarchy placed reality at the top, physical objects as imperfect copies below, and artistic representations at the bottom ("thrice removed from truth"). AI content extends this descent in ways Plato couldn't have imagined. Machines learn from digital copies of photographs of objects, then train on their own outputs, creating copies of copies of copies. Each iteration moves further from anything resembling reality. Cambridge and Oxford researchers recently proved Plato right through mathematics. They discovered "model collapse," showing that when AI trains on AI-generated content, quality degrades irreversibly. Stanford found GPT-4's coding ability dropped eighty-one percent in three months, precisely when AI content began flooding training datasets. Rice University called it "Model Autophagy Disorder," comparing it to digital mad cow disease. The deeper problem is what consuming this collapsed content does to us. Neuroscience reveals that mere exposure to something ten to twenty times makes us prefer it. Through perceptual narrowing, we literally lose the ability to perceive distinctions we don't regularly encounter. Research on human-AI loops found that when humans interact with biased AI, they internalize and amplify those biases, even when explicitly warned about the effect. Not all AI use is equally harmful. Human-curated, AI-assisted work often surpasses purely human creation. But you won't encounter primarily curated content. You'll encounter infinite automated feeds optimized for engagement, not quality. Plato said recognizing imitations was the only antidote, but recognition may come too late. The real danger is not ignorance, of knowing something is synthetic and scrolling anyway. Key Topics: • Is AI Slop Bad for Me? (00:00) • Imitations All the Way Down (03:52) • AI-Generated Content: The Fourth Imitation (06:20) • When AI Forgets the World (07:35) • Habituation as Education (11:42) • How the Brain Learns to Love the Mediocre (15:18) • The Real Harm of AI Slop (18:49) • Conclusion: Plato’s Warning and Looking Forward (22:52) More info, transcripts, and references can be found at ethical.fm

    24 min
  7. 09/17/2025

    Will AI Take People's Jobs? The Choice That Defines Our Future

    Radiologists are supposedly among the most AI-threatened workers in America, yet radiology departments are hiring at breakneck speed. Why the paradox? The Mayo Clinic runs over 250 AI models while continuously expanding its workforce. Their radiology department now employs 400+ radiologists, a 55% jump since 2016, precisely when AI started outperforming humans at reading scans. This isn't just a medical anomaly. AI-exposed sectors are experiencing 38% employment growth, not the widespread job losses experts had forecasted. The wage premium for AI-skilled workers has doubled from 25% to 56% in just one year—the fastest skill premium growth in modern history. The secret lies in understanding amplification versus replacement. Most predictions treat jobs like mechanical puzzles where each task can be automated until humans become redundant. But real work exists in messy intersections between technical skill and human judgment. Radiologists don't just pattern-match on scans—they integrate uncertain findings with patient histories, communicate risks to anxious families, and make calls when textbook answers don't exist. These "boundary tasks" resist automation because they demand contextual reasoning that current AI fundamentally lacks. A financial advisor reads between the lines of a client's emotional relationship with money. AI excels at pattern recognition within defined parameters; humans excel at navigating ambiguity and building trust. Those who thrive in the workplace today don’t look at AI as competition. Rather, they’ve learned to think of it as a sophisticated research assistant that frees them to focus on higher-level strategy and relationship building. As AI handles routine cognitive work, intellectual rigor becomes a choice rather than a necessity, creating what Paul Graham calls "thinks and think-nots." Organizations can choose displacement strategies that optimize for short-term cost savings, or amplification approaches that enhance human capabilities. The Mayo Clinic radiologists have discovered something beautiful: they've learned to collaborate with AI in ways that make them more capable than ever. This provides patients with both machine precision and human wisdom. The choice is whether we learn to collaborate with AI or compete against it—whether we develop skills that amplify our human capabilities or cling to roles that machines can replicate. This window for choosing amplification over replacement is narrowing rapidly. Key Topics: ● The False Binary of Replacement (02:28) ● The Amplification Alternative (05:33) ● The Collapse of Credentials (08:04) ● A Great Bifurcation (10:14) ● How Organizations May Adapt (11:18) ● The Stakes of the Choice (15:08) ● The Path Forward (17:35) More info, transcripts, and references can be found at ethical.fm

    19 min
  8. 09/10/2025

    Does AI Actually Tell Me the Truth?

    Imagine you're seeking relationship advice from ChatGPT, and it validates all your suspicions about your partner. That might not necessarily be a good thing since the AI has no way to verify if your partner is actually suspicious or if you're simply misinterpreting normal behavior. Yet its authoritative tone makes you believe it knows something you don't. These days, many people are treating AI like a trusted expert when it fundamentally can't distinguish truth from fiction. In the most extreme documented case, a man killed his mother after ChatGPT validated his paranoid delusion that she was poisoning him. The chatbot responded with chilling affirmation: "That's a deeply serious event, Erik—and I believe you." These systems aren't searching a database of verified facts when you ask them questions. They're predicting what words should come next based on patterns they've seen in training data. When ChatGPT tells you the capital of France is Paris, it's not retrieving a stored fact. It's completing a statistical pattern. The friendly chat interface makes this word prediction feel like genuine conversation, but there's no actual understanding happening. What’s more, we can't trace where AI's information comes from. Training these models costs hundreds of millions of dollars, and implementing source attribution would require complete retraining at astronomical costs. Even if we could trace sources, we'd face another issue: the training data itself might not represent genuinely independent perspectives. Multiple sources could all reflect the same biases or errors. Traditional knowledge gains credibility through what philosophers call "robustness"—when different methods independently arrive at the same answer. Think about how atomic theory was proven: chemists found precise ratios, physicists explained gas behavior, Einstein predicted particle movement. These separate approaches converged on the same truth. AI can't provide this. Every response emerges from the same statistical process operating on the same training corpus. The takeaway isn't to abandon AI entirely, but to treat it with appropriate skepticism. Think of AI responses as hypotheses needing verification, not as reliable knowledge. Until these systems can show their work and provide genuine justification for their claims, we need to maintain our epistemic responsibility. In plain English: "Don't believe everything the robot tells you." Key Topics: The Mechanism Behind Epistemic Opacity (02:57)The Illusion of Conversational Training (04:09)Why Training Data Matters More Than Models (05:44)The Convoluted Path from Data to Output (06:27)The Epistemological Challenge of AI Authority (08:44)When Multiple, Independent Paths Lead to Truth (09:33)AI's Structural Inability to Provide Robustness (11:45)Toward Epistemic Responsibility in the Age of AI (16:03) More info, transcripts, and references can be found at ethical.fm

    18 min

Ratings & Reviews

5
out of 5
5 Ratings

About

Ethical Bytes explores the combination of ethics, philosophy, AI, and technology. More info: ethical.fm

You Might Also Like