The French Philosopher

Stephanie Lehuger

Join The French Philosopher and consider me your philosophy BFF! 🤗 If you’re wondering about the meaning of life, your impact on the world, or who you truly are, you’re in the right place. Picture us chatting over a latte, exploring life’s big questions with wisdom from ancient and modern philosophers. I’m a Brooklyn-based French philosopher, speaker, and author, and as an expert in AI ethics for the European Commission, I also dive into ethics and critical thinking around AI and tech.

  1. 31/12/2025

    61. How To Outsmart Your Resolutions With Strategy So That They Stick

    Here is the scene. You swear you will go running tomorrow. Tomorrow arrives. So does rain and a croissant. Suddenly, the morning jog turns into a coffee date with yourself, starring a pastry that insists it is “self-care” on a plate.

    Acrasia: When Knowing Is Not Doing

    The Greeks had a word for this pattern: acrasia, doing the opposite of what you know is good for you. Plato thought that if you truly understood the good, you would automatically follow it, while Aristotle pointed out that desire can simply overrule reason. You see it in everyday life: people genuinely want to stick to their health habits, sleep routines, or exercise plans, yet they quietly slide back into old patterns. It is not stupidity. It is the constant pull between quick comfort and long-term meaning.

    Why Resolutions Are Bad Strategy

    Management thinker Richard Rumelt, in his book “Good Strategy / Bad Strategy,” explains that many “strategies” fail because they are just slogans pretending to be plans. Most New Year’s resolutions behave exactly like that: “Be healthier” or “get my life together” sound powerful but say nothing about what you will do on Tuesday at 7 p.m. There is no diagnosis, no honest “this is where I usually fall apart.” Everything rests on willpower, which tends to disappear right around the time the fridge light comes on. Acrasia loves this setup. It lives in the gap between big promises and zero structure.

    Turn Resolutions Into Strategy

    Let us take one example and keep it: “I want to exercise regularly.” Here is how strategy changes it.

    Diagnose the real problem. Instead of “I am lazy,” try “I plan evening workouts, but by 7 p.m. I am exhausted, hungry, and my sofa is closer than the gym.” Now the issue is timing and energy, not your moral worth.

    Create a guiding policy for that problem. Based on that diagnosis, a guiding policy could be: “Exercise early, before the day tires me out, and make it as easy as possible to start.” This gives a direction for every future choice about movement.

    Design tiny, coherent actions that match the policy. From that policy, you might decide to lay out workout clothes by the bed every night; do a ten-minute walk or short routine right after coffee, not after work; and on very bad mornings, at least stretch for three minutes so the habit does not break. All of these actions serve the same strategy: make morning movement simple and inevitable.

    Anchor that same example in your values. Finally, you tie this exercise habit to something that matters: “I move in the morning because I want to hit middle age with the energy and body of someone who still gets mistaken for the intern.” It becomes about future freedom, not punishment for past croissants.

    A Kinder Way To Fail

    Acrasia is not proof that you are broken. It is proof that you are human, stuck between intention and temptation. Writer Susan Sontag once put it this way: “Kindness, kindness, kindness. I want to make a New Year’s prayer, not a resolution. I’m praying for courage.” This January, instead of promising a completely new you, you could try something closer to that: a little more kindness for yourself, and just enough courage to take the next small step.

    So this year, if you want to outsmart your resolutions with strategy, do not make them louder. Make them smarter: one clear diagnosis, one simple guiding policy, a few tiny actions you might actually do. And if the croissant wins sometimes, let it, as long as it does not win every round.

    6 min
  2. 18/11/2025

    60. When LinkedIn gratitude feels like emotional spam

    Wow, Thanksgiving hits LinkedIn hard in the US: “I’m grateful for my boss”; “I’m grateful for my dog”; “I’m grateful for my favorite stapler.” I’m from Paris, and gratitude isn’t something I grew up with. Parisians are so grumpy we’d probably roll our eyes if you smiled at us. We save our gratitude for true miracles, like getting through a family dinner without someone bringing up immigration while carving the turkey. See, in France, we don’t just say “merci.” No, we write books about it. There’s this French anthropologist, Marcel Mauss, who explains that kindness isn’t really kindness: it’s a debt. It’s what he called the “counter-gift.” You don’t do someone a favor, you open a tab. You think you’re just lending some sugar to your neighbor, and the next thing you know you’re hosting their dog’s birthday, watering their plants, and pretending to care about their homemade kombucha. The Japanese agree that not every “thank you” moment is pleasant. They actually invented a phrase for when gratitude feels like emotional spam: “arigata meiwaku.” It’s that uncomfortable vibe when somebody insists on “helping” and you end up having to perform gratitude you didn’t sign up for. It’s like being forced into a gratitude hostage situation. But hey, tossing out a sincere “thank you” is free, it doesn’t add calories, and sometimes it actually pleases people. So go on, throw some thank-yous out there when you really mean it. Just remember: real gratitude doesn’t need a TED Talk or a LinkedIn post. Sometimes it’s just a nod, a laugh, and moving on before things get weird. And if your “gratitude” ends up sounding more like sarcasm? That’s fine too. At least in Paris, they’ll respect you for it.

    2 min
  3. 01/05/2025

    59. Finding Meaning in Work

    Are you part of the 45% of high-skilled professionals who would trade some salary for more meaning at work? We’re all searching for that “why” behind what we do. Is it impact, growth, or just not dreading Mondays? If you’re picking a job just for the bragging rights, philosophy is here to call you out and nudge you toward what actually lights you up. 💡 If you’ve ever found yourself staring at your laptop and thinking, “Why am I really doing this?” you’re in good company. I recently sat down with Victoria Feldman for a conversation about how philosophy can help us find meaning at work, and how AI fits into the picture. 📺 Click on the image below to watch the video of the interview. 📺 Let’s start with the classics. Epicurus and the Stoics were obsessed with what makes a good life. Epicurus would say, stop chasing glitter and focus on what truly matters, like friendships. The Stoics? They’d tell you to channel your energy into what you can actually influence, not the endless swirl of things you can’t. Instead of trying to find happiness (which, let’s be honest, is a pretty daunting goal), they suggested we focus on removing pain as much as possible (much more doable, right?). It’s a bit like swapping out your bucket list for a “things I won’t tolerate” list. When it comes to technology, it’s a mixed bag. Take healthcare: I met a nurse who now uses voice memos and AI to write her reports. What used to take her two hours at the end of every shift is now automated, freeing her up for what really matters: caring for patients. On the flip side, doctors often spend more time typing into computers (mine uses only two fingers 😑) than actually looking patients in the eye. So, AI can either give us back our time for meaningful work or take us away from human relationships. I guess it’s all about how we use it. So, what’s my takeaway for you? Be clear about your values. Don’t get lost chasing every shiny title or the endless checklist of what a “perfect” job should look like. Focus on the few things that genuinely nourish you. Choose work that aligns with what matters most to you, and try to contribute to something bigger than yourself, something you can be proud of. And remember, questioning everything is not just allowed, it’s encouraged (that’s what philosophy is all about). #AI #Ethics #AIEthics #Philosophy #Technology #PhilosophyBFF #TheFrenchPhilosopher #FrenchPhilosopher #meaningfulwork #career #workculture #selfreflection #wellbeing #ancientwisdom #stoicism #epicurus #mindset

    38 min
  4. 17/03/2025

    58. Does AI make better decisions than humans?

    Imagine a machine deciding who gets life-saving surgery in a split second, armed with endless data and razor-sharp logic. No hesitation, no bias, no emotional baggage. Sounds like a dream... or does it? What do you think: does AI make better decisions than humans? Well, it’s true that there are no existential crises or coffee breaks for our robot friends. They’re brilliant at optimizing outcomes by crunching numbers, without getting tired, distracted, or irrational. Some chatbots even give good moral advice (one could say better than some philosophers? 😅). Have a look if you’re curious: petersinger.ai. But here’s the kicker: machines don’t actually “understand” morality. Why is that? Because they don’t feel empathy or anguish when making tough calls. They don’t lose sleep over the weight of their decisions. They don’t consider the messy, lived experiences of the people affected by them. Take existentialists like Simone de Beauvoir (yes, we’re name-dropping). They’d argue that morality is rooted in freedom and authenticity: every decision we make defines who we are and carries the weight of our responsibility to others. Machines? They don’t have freedom, they’re programmed. They don’t have authenticity, they’re mimicking patterns. They’re not moral agents, they’re tools. But here’s where things get spicy. AI can actually push us to think deeper about our own ethical frameworks. By exposing our biases and presenting alternative perspectives, it can sharpen our reasoning and force us to confront uncomfortable truths. For instance, Amazon’s AI recruiting tool ten years ago was a fiasco, but it helped everyone realize how deep recruiting biases run, and that was definitely a win: it made us aware that we had to fight against them. So maybe the question isn’t whether AI is “better” at morality but whether it challenges us to be better moral thinkers ourselves? Should we trust AI with big decisions? Maybe as collaborators, not captains of the ship. Machines might help us see clearer, but the messy beauty of morality, its empathy, its anguish, its humanity, is something only we can bring to the table. Or at least that’s my take… what’s yours?

    4 min
  5. 10/03/2025

    56. Schrödinger’s Cat Just Got An Upgrade!

    Word on the street is that Microsoft’s latest quantum breakthrough (see the Nature article link below) might finally let us crack open the box and see what’s really going on. But here’s the kicker: quantum computing isn’t just about faster tech or breaking encryption. It’s a philosophical mic drop. What if reality isn’t just yes or no? What if it’s yes AND no… or maybe even something else entirely? See, quantum computers don’t follow the same rules as our everyday classical computers. They thrive in the chaos, living in that weird, paradoxical space where things can be two things at once. It’s like the universe is giving us a hint that we’ve been thinking way too small all along. Human-level thinking will probably always be too small to understand it all. That doesn’t stop us from craving more anyway! While engineers are out here solving problems we didn’t even think had solutions, philosophers might want to buckle up for a world where zero and one can coexist. Where truth isn’t fixed but fluid? Where the impossible suddenly feels like it’s just around the corner? Strap in, this isn’t just science anymore. It’s a whole new way of seeing reality! How will our world react to such a strange new way of seeing things? When we see human beings kill each other for failing to see the world the same way, I’m not overly optimistic about humankind’s capacity to fully apprehend quantum physics. But maybe it’s fine not to understand how quantum computing works if we can benefit from it. Or is it?

    2 min
  6. 10/03/2025

    55. Where Does My Freedom End and Yours Begin?

    Freedom sounds simple—do what you want, right? But John Stuart Mill had a different take (he’s a 19th-century philosopher who spent a lot of time thinking about this, so pretty legit). He believed that liberty comes with one big condition: you’re free to do whatever you like, as long as you don’t harm others. Sounds fair enough, doesn’t it? But when you really think about it, this idea of “don’t harm others” gets complicated fast. For Mill, freedom wasn’t just about doing your own thing—it was about understanding how your actions affect the people around you. Liberty, he thought, isn’t something we keep to ourselves; it’s something we share.

    Now, let’s bring this into today’s world

    Think about all the big issues on the global stage—peace talks, climate change policies, trade negotiations. These are all about the same question Mill asked: where does my freedom end and yours begin? Can one country pursue its own goals without stepping on another’s toes?

    Take peace talks as an example

    One nation might feel justified in defending its borders or expanding its influence, while another sees those actions as threats to their sovereignty or safety. Mill would argue that true freedom doesn’t mean ignoring these tensions—it means recognizing how actions ripple outward and finding ways to address those ripples responsibly. His principle of “non-nuisance” isn’t just a moral idea—it’s a practical guide for resolving conflicts and building trust.

    And then there’s climate agreements

    One country might say, “We need more factories to grow our economy,” while another says, “Your growth is destroying our environment.” Again, Mill would remind us that freedom isn’t just about personal or national gain—it’s about understanding how interconnected we all are and making choices that respect those connections.

    And what about compromise?

    Mill believed that freedom works best when it’s built on conversation. The best solutions don’t come from one side winning and the other losing—they come from honest dialogue where both sides figure out how to move forward together. It’s not easy, but it’s how progress happens.

    Are we living up to Mill’s vision of freedom today?

    Are we using our liberties to build bridges or just digging deeper trenches? Every negotiation—whether it’s between nations or neighbors—is a chance to show whether we can balance our rights with our responsibilities to each other. Mill would remind us that freedom isn’t just about doing whatever we want—it’s about finding ways to live together without harming each other. That’s where real liberty begins. What do you think? I’d love to hear your thoughts on how Mill’s ideas apply today.

    4 min
  7. 10/03/2025

    54. Ethical AI’s Dirty Secret

    Every “trustworthy” AI system quietly betrays at least one sacred principle. Ethical AI forces brutal trade-offs: prioritizing any one aspect among fairness, accuracy, and transparency compromises the others. It's a messy game of Jenga: pull one block (like fairness), and accuracy wobbles; stabilize transparency, and performance tumbles. But why can’t you be fair, accurate, AND transparent? And is there a solution?

    The Trilemma in Action

    Imagine you try to create ethical hiring algorithms. Prioritize diversity and you might ghost the best candidates. Obsess over qualifications and historical biases sneak in like uninvited guests. Same with chatbots. Force explanations and they’ll robot-splain every comma. Let them “think” freely? You’ll get confident lies about Elvis running a B&B on a Mars colony.

    Why Regulators Won’t Save Us

    Should we set up laws that dictate universal error thresholds or fairness metrics? Regulators wisely steer clear of rigid one-size-fits-all rules. Smart move. They acknowledge AI’s messy reality, where a 3% mistake margin might be catastrophic for autonomous surgery bots but trivial for movie recommendation engines.

    The Path Forward?

    Some companies now use “ethical debt” trackers, logging trade-offs as rigorously as technical debt. They document their compromises openly, like a chef publishing rejected recipe variations alongside their final dish. Truth is: the real AI dilemma is that no AI system maximizes fairness, accuracy, and transparency simultaneously. So, what could we imagine? Letting users pick their poison with trade-off menus: “Click here for maximum fairness (slower, dumber AI)” or “Turbo mode (minor discrimination included)”? Or how about launching bias bounties: pay hackers to hunt unfairness and turn ethics into an extreme sport? Obviously, it’s complicated.

    The Bullet-Proof System

    Sorry, there’s no bullet-proof system, since value conflicts will always demand context-specific sacrifices. After all, ethics isn’t about avoiding hard choices, it’s about admitting we’re all balancing on a tightrope—and inviting everyone to see the safety net we’ve woven below.

    Should We Hold Machines to Higher Standards Than Humans?

    Trustworthy AI isn’t achieved through perfect systems, but through processes that make our compromises legible, contestable, and revisable. After all, humans aren’t fair, accurate, and transparent either.
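
    For readers who prefer to see the trilemma in numbers rather than metaphors, here is a minimal, purely illustrative Python sketch. Everything in it is invented for the example (the synthetic candidate pool, the 0.50 and 0.42 cutoffs, the assumption about which group had less prior access to training); it is not the episode’s data or any real hiring system. It only demonstrates the mechanical point made above: once you equalize selection rates between two groups, you usually have to move away from the most accurate screening threshold.

```python
# Illustrative sketch of the fairness/accuracy trade-off (invented data).
import random

random.seed(0)

# Toy candidate pool: (group, truly_qualified, screening_score).
# In this made-up data, group "B" has had less access to training,
# so fewer of its candidates are qualified; the score itself is unbiased.
candidates = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    qualified = random.random() < (0.6 if group == "A" else 0.4)
    score = random.gauss(0.65 if qualified else 0.35, 0.10)
    candidates.append((group, qualified, score))

def evaluate(thresholds):
    """Return (overall accuracy, gap in selection rates between groups)."""
    correct = 0
    hired = {"A": 0, "B": 0}
    total = {"A": 0, "B": 0}
    for group, qualified, score in candidates:
        hire = score >= thresholds[group]
        correct += (hire == qualified)
        hired[group] += hire
        total[group] += 1
    rates = {g: hired[g] / total[g] for g in total}
    return correct / len(candidates), abs(rates["A"] - rates["B"])

# One shared cutoff: most accurate, but group selection rates differ a lot.
print("shared threshold    ->", evaluate({"A": 0.50, "B": 0.50}))
# Lowering group B's cutoff narrows the gap, at the cost of some accuracy.
print("adjusted thresholds ->", evaluate({"A": 0.50, "B": 0.42}))
```

    On this toy data, the first line shows higher accuracy with a wide gap in selection rates, and the second shows the gap shrinking while accuracy drops: the Jenga move described above, in miniature.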

    4 min
