An Analog Brain In A Digital Age | With Marco Ciappelli

[ Formerly Redefining Society & Technology ] An Analog Brain In A Digital Age Podcast is your backstage pass to my mind — where analog meets digital, and the occasional pig flies. In an age racing toward algorithms and automation, the best ideas still come from curiosity, experience, emotion, and the unexpected connection. What you'll find are conversations on technology & society, storytelling in all its forms, branding & marketing, creativity, and the odd surprise.

  1. MAY 4

    Book: Deep Future — Creating Technology That Matters | An Interview with Pablos Holman | An Analog Brain In A Digital Age With Marco Ciappelli

    PODCAST EPISODE | An Analog Brain In A Digital Age With Marco Ciappelli

    Pablos Holman has built spaceships, zapped malaria-carrying mosquitoes with a laser, earned thousands of patents, and is now betting his venture capital on the inventors Silicon Valley forgot to fund. His new book, Deep Future: Creating Technology That Matters, is a call to arms against a tech industry that got drunk on software and forgot about the other 98% of the world.

    📺 Watch | 🎙️ Listen | marcociappelli.com

    I grew up in a city full of inventors. They just didn't call themselves that. Florence in the fifteenth century wasn't running on venture capital. It was running on curiosity, obsession, and the refusal to accept that the way things had always been done was the way they had to be done. Leonardo didn't have a manual. Galileo didn't ask for permission before pointing a better telescope at the sky. They took things apart, looked at what was inside, and put them back together differently. They hacked things.

    That's Pablos Holman's word — and when he used it in our conversation, I recognized it immediately. Not as a tech industry term. As something much older. A way of being in the world that says: the instructions are a suggestion, not a ceiling.

    Pablos has had one of those careers that resists a tidy summary. He was writing code in Alaska as a kid, with one of the first Apples ever made and nobody around to teach him anything. He figured it out on his own — and never really stopped doing that. Cryptocurrency in the '90s. AI research before anyone called it that. Helping build spaceships at Blue Origin. Then years at the Intellectual Ventures Lab with Nathan Myhrvold, going after problems Silicon Valley had decided weren't worth the trouble: a laser that identifies and destroys malaria-carrying mosquitoes in flight, hurricane suppression systems, a nuclear reactor powered by nuclear waste. Six thousand patents. Thirty million TED Talk views.
Now he runs a venture fund called Deep Future, and he's written a book with the same name. The subtitle says what he thinks about most of what Silicon Valley has been doing for the past two decades. Creating Technology That Matters. He calls the alternative shallow tech. Apps that replace taxis. Apps to rent a stranger's couch. Apps to have weed delivered by drone. Not useless, exactly — but not living up to what we actually have. And what we actually have, Pablos says, is the best toolkit in all of human history: more people, more education, more resources, more raw scientific understanding than any generation before us. If all that produces another chat app, something has gone badly wrong. The number he threw out in our conversation — and I'm going to mention it here because it deserves to be mentioned, not as a hook but as a quiet scandal — is that all the software companies in the world combined, every single one of them, account for about two percent of global GDP. The other ninety-eight is energy, shipping, food, manufacturing, construction, automotive. Industries that haven't fundamentally changed in a century. Industries that software can nudge a few percent better but cannot make ten times better. Ten times better is where Pablos starts. One of his portfolio companies is building autonomous sailing cargo ships — no crew, no fuel, no emissions — targeting a two-trillion-dollar industry that currently burns half its revenue on fuel. He's also continuing the malaria work that could save half a million lives a year, half of them children under five. That's the scale he's measuring things against. We got to AI eventually, as you do. What he said landed simply and cleanly: chatting is the least important thing we can do with it. 
    What we should be using AI for is understanding things that were previously too complex to model — what's happening in every cell of your body, how to actually get a grip on the climate, how to start solving the problems that have been resistant to every tool that came before. Instead we are using it to generate fake videos and build an AI version of TikTok. We've hit peak entertainment, he said. I think that's right. And I think what comes after peak entertainment — if anything does — is the real question sitting underneath all of this.

    The conversation ended the way the best ones do: not with a conclusion, but with an invitation. Pick something you care about and work on it. The people who built Apollo weren't all rocket scientists. They were cable layers and logistics coordinators who never saw the rocket up close. But they were part of something that exceeded their own individuality, and they knew it, and that was enough. That pride is still available. Whether we want it more than we want another scroll — that's on us.

    Deep Future: Creating Technology That Matters is out now — find it here. Subscribe to the newsletter at marcociappelli.com. Let's keep thinking.

    About Marco Ciappelli

    Marco Ciappelli is Co-Founder & CMO of ITSPmagazine, Co-Founder & Creative Director of Studio C60, Branding & Marketing Advisor, Personal Branding Coach, Journalist, Writer, and Host of An Analog Brain In A Digital Age podcast. Born in Florence, Italy, and based in Los Angeles, he explores the intersection of technology, society, storytelling, and creativity — with an analog brain, in a digital age. 🌎 marcociappelli.com | itspmagazine.com | studioc60.com

    About Pablos Holman

    Pablos Holman is a futurist, inventor, and self-described "notorious hacker" with one of the more unusual résumés in American technology. He started writing code as a kid in Alaska on one of the first Apple computers ever made, and never stopped following that thread wherever it led.
    In the 1990s, he worked on cryptocurrency and early AI systems before either had found their way into the mainstream. In 2001, he joined Jeff Bezos at Blue Origin, where he helped explore new approaches to space travel. He then joined Nathan Myhrvold's Intellectual Ventures Lab, a deep tech invention lab that produced over 6,000 patents — including a laser system that identifies and destroys malaria-carrying mosquitoes in flight, a machine designed to suppress hurricanes, and a nuclear reactor powered by nuclear waste. His TED talks have accumulated over 30 million views.

    Holman is now Managing Partner of Deep Future, a venture capital fund backing inventors working on the hard physical problems the software industry left behind — autonomous shipping, new energy systems, food technology, and manufacturing. His book, Deep Future: Creating Technology That Matters (2025), is a critique of Silicon Valley's obsession with shallow tech and an invitation to aim at the world's actual problems.

    🔗 LinkedIn | deepfuture.tech/about-pablos

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    41 min
  2. APR 26

    New Book: Healing the Sick Care System — Why People Matter | An Interview with Gil Bashe | An Analog Brain In A Digital Age With Marco Ciappelli

    PODCAST EPISODE | An Analog Brain In A Digital Age With Marco Ciappelli

    The United States spends 18.7% of its GDP on health — two to three times what countries like Italy spend. Italy has a longer life expectancy. So what exactly are we paying for? Gil Bashe, Chair of Global Health & Purpose at FINN Partners, former combat medic, and author of Healing the Sick Care System: Why People Matter, joined me on An Analog Brain In A Digital Age to talk about what happens when a system designed to heal people forgets that people exist. This is not a rant. It's a diagnosis — from someone who has seen the system from every angle: the battlefield, the boardroom, the pharmaceutical lobby, and the bedside of his own child.

    📺 Watch | 🎙️ Listen | marcociappelli.com

    Gil Bashe started his career as a paratrooper combat medic. He's also the father of a child with a rare disease. He spent years as a lobbyist for the pharmaceutical industry — and he'll tell you that upfront, without flinching, before explaining why he still thinks that work mattered. He has led billion-dollar global agencies, advised companies that make life-saving drugs, and sat in rooms with the CEOs of hospital systems, pharmacy chains, and insurance companies. He asked them once if they understood each other's business models. The honest answer was: no.

    That's the system he's writing about. Not a broken one — a fragmented one. A system where the prime customer of healthcare has become the system itself, and the actual patients have been quietly reclassified as beneficiaries. As Gil puts it: if your washing machine breaks and you call the company and they tell you you're a "beneficiary of our appliance," you'd think they were out of their minds. You paid for it. You're a customer. They should treat you like one.
His new book, Healing the Sick Care System: Why People Matter, was born from a long accumulation of observations — 11 or 12 years of writing about the health ecosystem from every angle — and catalyzed by one specific moment: the assassination of the UnitedHealthcare CEO, and the public reaction to it. The fact that the killer had a following. The fact that people were applauding. Gil found that more disturbing than anyone seemed comfortable admitting. When anger reaches that level, something in the system has gone deeply, fundamentally wrong. I should say: this is a conversation I had some skin in. I'm type 1 diabetic. I know what it's like to sit across from an endocrinologist who tells you things you already know, reads from a checklist, and never quite looks up from the laptop. The human element — the education, the empathy, the sense that this person actually sees you — is often just gone. And I think most doctors started their careers because they wanted to be healers. The system squeezed it out of them. Gil agrees. He says 51% of doctors now report burnout. Nearly 60% of nurses. And that's not a coincidence. That's a design failure. The AI question we kept circling was the one nobody in healthcare leadership seems to want to answer directly: if artificial intelligence takes some of the administrative burden off doctors' shoulders, does that time go back to patients — or does the system simply use it to push more throughput? More appointments per day, not more minutes per patient. Gil's framework for thinking about this is worth keeping: IQ, EQ, and TQ. Intellectual intelligence, emotional intelligence, and technology intelligence. The doctors we need going forward aren't just the ones who scored highest on their MCATs. They're the ones who can read a room. Who can hear a patient bring in a printout from WebMD and respond with curiosity instead of dismissal. Who understand that a curious patient is a gift, not an inconvenience. 
    He told me a story from the book — one doctor who cut his wife off mid-sentence and said, "Who are you gonna believe? Me, or a patient?" And another doctor, in Santa Monica, who performed a long and complicated surgery on his daughter, walked into the hospital cafeteria in his surgical scrubs with photographs of every step of the procedure, laid them out on the table, explained everything in plain language, and then left his personal cell phone number. "Call me with any question." They did. He picked up. That's not technology. That's not policy. That's personality. And Gil's argument — which I think is correct — is that we've built a system that systematically selects against it.

    The hopeful part of the conversation surprised me. I expected nuance. What I got was genuine belief. We have the best trained doctors in the world. We are the source of global medical innovation. We spend enough money — the problem isn't resources, it's alignment. The fix, as Gil sees it, starts with every part of the system — payers, pharmaceutical companies, hospital systems, policy makers — looking in the mirror and asking: am I still on mission? And then, slowly, getting back to why this system was created in the first place.

    Healing the Sick Care System: Why People Matter is out now. Get the book here. And if this kind of conversation is what you come here for, subscribe to the newsletter at marcociappelli.com.

    — Marco Co-Founder ITSPmagazine & Studio C60 | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍

    About the Guest

    Gil Bashe is Chair of Global Health & Purpose at FINN Partners, one of the world's largest independent communications agencies. A former combat medic and paratrooper turned award-winning health communications leader, he has shaped the field across global agencies, trade associations, and private equity ventures over a 40-year career. He is a PM360 Lifetime Achievement Award recipient, named among PRWeek's Top 30 Most Influential People in Health PR, honored as an MM&M Top 10 Innovation Catalyst, and tapped by PRovoke Media as a Top 25 Innovator. He serves on the boards of the American Diabetes Association and the Marfan Foundation, and is editor-in-chief of Medika Life. Healing the Sick Care System: Why People Matter is published by Health Administration Press (February 2026).

    LinkedIn | Get the Book

    37 min
  3. APR 24

    On the Internet, Nobody Knows You're Not Human — And Nobody's Asking | Written by Marco Ciappelli & Read by Tape3

    An Analog Brain In A Digital Age — A Newsletter by Marco Ciappelli

    On the Internet, Nobody Knows You're Not Human — And Nobody's Asking

    There was a moment — brief, unrepeatable — when the internet felt like a genuinely open place. No profiles. No algorithms deciding what you deserved to see. No one monetizing the fact that you existed. You showed up, you explored, you talked to strangers in other countries about things that mattered to you, and the whole thing felt less like a product and more like a discovery. Like finding a door to another dimension.

    There's a cartoon that captured that moment perfectly. 1993. The New Yorker. Peter Steiner. Two dogs, one at a computer, and the line that accidentally defined an entire era of the internet: "On the Internet, nobody knows you're a dog." https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog

    It was funny. It was also prophetic. And it was optimistic in a way we've completely forgotten how to be about the web. Anonymity as freedom. Identity as something fluid, chosen, playful. You could be anyone. You could be from anywhere. You could reinvent yourself in real time, with no one to contradict you.

    Then surveillance capitalism arrived and broke the party. Cookies. Behavioral profiling. The algorithmic panopticon. Suddenly everyone knew everything. You weren't a dog anymore — you were a demographic, a data point, a cluster of purchase histories and scroll patterns. The internet that promised liberation became the most precise identity-tracking machine ever built. Anonymity collapsed under the weight of monetization. Nobody knows you're a dog became everyone knows you're a dog, what breed, what you ate for breakfast, and which vet you Googled at 2am.

    And now we're in the third act. A Buddhist monk named Yang Mun has 2.5 million Instagram followers. He posts silent morning meditations. He has made over $300,000 since October.
Three Buddhist scholars reviewed his content and confirmed: his wisdom isn't grounded in any actual scripture. It just sounds like it is. Yang Mun doesn't exist. He was built with ChatGPT, HeyGen — an AI platform that generates realistic synthetic human video, a face, eyes, a voice, moving and breathing and entirely artificial — and a handful of other tools, by a creator operating inside what's being called "Big Slop": a venture-backed industry that manufactures fake influencers, automates their posting, and scales them to millions of followers while platforms, politely, look the other way. Hat tip to Jack Brewster, whose LinkedIn post on Yang Mun is what started this thread of thought. https://www.linkedin.com/posts/jackbrewster_a-buddhist-monk-named-yang-mun-has-25-million-activity-7451268378499137537-RPB1?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAD_QZMB_jUr1316NWqo3MgG_iFVSPTfDgY The circle has closed. And inverted. We went from nobody knows you're a dog to everyone knows you're a dog to something far stranger: Nobody knows you're not human. The dog is gone. The human is optional. Here's what interests me — and it's not the outrage part, because the outrage is easy and everyone will do it. What interests me is the McLuhan part. Marshall McLuhan said it in 1964: the medium is the message. Not the content. The medium itself. The form of transmission shapes reality more than anything transmitted through it. Yang Mun's fake wisdom is almost beside the point. The scholars confirmed it's scripturally meaningless. But it sounds right — which is precisely the tell. The content was never engineered for truth. It was engineered for the platform. For the algorithm. For the engagement pattern that rewards the feeling of depth over the presence of it. The medium produced the monk. The monk is the message. 
And if you zoom out — which is what I keep trying to do from Florence, where the stones beneath my feet are five hundred years old and nobody around me is particularly impressed by disruption — you see something that looks less like a technology story and more like a civilization story. We built an internet that promised connection. We built AI to simulate humans. Somewhere along the way we forgot to ask whether any of it was real — or maybe we never quite got around to asking in the first place. Because here's the thing: this didn't happen slowly enough for us to develop a moral relationship with it. There was no adjustment period. No cultural processing. The fake monk didn't represent a fall from grace. It was a first contact situation. We haven't even named what's wrong yet, let alone decided whether it matters. The analog brain — slow, emotional, context-dependent, stubbornly human — is the one thing that still notices the difference between a conversation that carries weight and one that merely carries words. It's not superior in processing power. It's just that it comes from somewhere. From experience. From loss. From the specific, irreplaceable accident of having lived a particular life in a particular body in a particular place. The monk who wasn't there had none of that. And somewhere — maybe in 2.5 million people scrolling past silent meditations at 7am — some part of us already knows. Will we remember to ask? Are we ever gonna care? Let's keep exploring what it means to be human in this Hybrid Analog Digital Age. Stay imperfect, stay human. 
    — Marco

    📬 Follow the newsletter: An Analog Brain In A Digital Age

    ⓘ About Marco Ciappelli Co-Founder Studio C60 / ITSPmagazine | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍 Learn more about Marco Ciappelli: marcociappelli.com

    ⓘ About Studio C60 We help cybersecurity startups build trust-based marketing and go-to-market strategies grounded in deep product understanding and real buyer insights. With hundreds of products brought to market and deep connections in the CISO community, we know what security leaders value in vendors. Learn more at studioc60.com

    10 min
  4. APR 19

    Before the Robots Run. More reflections from RSAC 2026 — The Power of the Community and the Machines We Invited In. | Written By Marco Ciappelli & Read By Tape3

    This was my twelfth RSA Conference. I know that because I remember the first one, 2012, and I've been counting ever since — not out of habit, but because each year feels like a chapter in a longer story I'm trying to read in real time. Twelve years of standing in that same building in San Francisco, watching an industry evolve, stumble, reinvent itself, and occasionally look in the mirror. In the early years it was pure technology. Cryptography, protocols, threat vectors, the architecture of defense. The conversations were technical, the energy was almost academic, the suits were slightly more formal. Then something shifted — gradually, then all at once, the way things usually do. The industry started talking about people. About culture. About the human beings sitting behind the keyboards and the very human mistakes they were making. The themes started reflecting it: community, togetherness, collective defense. Stronger Together. The Human Element. The Power of Community. Year after year, the message from the main stage was some variation of: we are more than our tools. People are what matter. Connection is the point. And then you'd walk the expo floor and see the booths. I'm not being cynical. The community is real — I've felt it, in the hallway conversations, in the side events, in the faces of people I've been running into for a decade who are genuinely trying to make the digital world safer. That part is true and it matters. But there's a growing gap between what the theme says and what the stage performs. And at RSAC 2026, that gap became impossible to ignore. Because this year, while the badge said The Power of Community, the keynotes were almost entirely about agents. Non-human ones. I wrote about this from a different angle in my first piece from RSAC — the Blade Runner angle, the NPC angle, the question of identity and intent when you can no longer tell the difference between a human action and an autonomous one. 
But there's another layer underneath that deserves its own space. It's the pattern. The twelve-year arc. An industry spends years — genuinely, sincerely — rediscovering the human element. Putting people at the center. Building a vocabulary around community, ethics, shared responsibility. And then, in what feels like a single conference cycle, it pivots to deploying a parallel workforce of non-human identities that outnumber us in our own systems, operate at speeds no human can follow, take actions no human directly authorized, and — here's the part that should make everyone pause — that a significant portion of organizations deploying them cannot monitor, cannot fully distinguish from human activity, and in many cases cannot stop once they're running. We built the community. Then we populated it with agents and handed them the keys. I kept thinking, walking those corridors, about the resistance. Not as a metaphor — or not only as a metaphor. In every story we've ever told about machines that gained too much autonomy, there's always a moment before the crisis where someone in the room knew. Where the warning existed. Where the design decision was made anyway because the pressure to ship, to scale, to compete was stronger than the instinct to pause. The difference between those stories and this moment is that we're not watching it happen to fictional characters. We're the ones making the design decisions. And unlike software — which you can patch, roll back, update at 3am while everyone is asleep — agents with autonomy and access are a different category of thing entirely. The old mantra of move fast and break things made a certain kind of sense when what you were breaking was a feature. It makes no sense at all when what you're deploying can act, chain consequences, and escalate — faster than any human response team can follow. This is where Asimov becomes relevant again. 
Not as nostalgia, not as science fiction trivia, but as a genuine design philosophy that the industry would do well to remember. His Three Laws of Robotics weren't invented as a plot device. They were a thought experiment in ethics-by-architecture — what does it look like to build the values into the system before the system runs, rather than hoping to correct the values after something goes wrong? He spent decades of stories showing that even the most carefully designed ethical constraints produce edge cases, contradictions, unintended consequences. But the point was never that ethics-by-design is perfect. The point was that without it, you don't have a fighting chance. We are, right now, at the moment before the laws get written. Some people at RSAC were saying this clearly — not from the main stage, but in the rooms and conversations where the more honest thinking tends to happen. The guardrails exist. The frameworks are being built. But they're being built while the deployment is already running, while the agents are already in the systems, while the governance structures are catching up to a reality that moved faster than the institutional response. That gap is the real story of RSAC 2026. Not the products. Not the keynote soundbites. The gap between the speed of deployment and the maturity of the thinking around what we're actually deploying. The community theme was right, actually — just not in the way the branding intended. The most important community at RSAC 2026 wasn't on the main stage. It was the quieter one: the engineers, researchers, practitioners, and security leaders who understand that we are at an inflection point, and that the decisions made in the next few years about how to design, govern, and constrain autonomous systems will matter far beyond the conference floor in San Francisco. Utopia and dystopia are not predetermined destinations. They're design outcomes. We still get to choose the architecture. 
    But the window for making that choice thoughtfully — rather than reactively, in the middle of a crisis that moved faster than our guardrails — is not as wide as we might like to think. Asimov knew that. He wrote the laws before the robots ran. Maybe it's time we did the same.

    Stay imperfect, stay human.

    — Marco

    Let's keep exploring what it means to be human in this Hybrid Analog Digital Age.

    End of transmission.

    11 min
  5. APR 17

    Do Androids Dream of Security Patches? Reflections from RSAC 2026 — Walking the Floor of the Agentic World | Written By Marco Ciappelli & Read by Tape3

    Do Androids Dream of Security Patches? Reflections from RSAC 2026 — Walking the Floor of the Agentic World

    Marco Ciappelli Co-Founder ITSPmagazine & Studio C60 | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍

    April 7, 2026

    This is Marco Ciappelli's Newsletter: An Analog Brain In A Digital Age. This edition draws from ITSPmagazine's on-location coverage at RSAC Conference 2026 in San Francisco. This article — and all of our RSAC Conference 2026 coverage — is made possible with the support of ITSPmagazine's RSAC 2026 sponsors: BLACKCLOAK | Crogl, Inc. | Manifest | Steel Patriot Partners | Skyhigh Security | Stellar Cyber | ESET | Token Security | Object First | Token

    Watch and listen to the full coverage and all of the conversations we had, including those with our sponsors, at itspmagazine.com/rsac26

    A new transmission from An Analog Brain In A Digital Age — formerly Musing On Society and Technology Newsletter, by Marco Ciappelli

    The theme of RSAC 2026 was "The Power of Community." Nearly forty-four thousand people descended on the Moscone Center in San Francisco for four days of keynotes, corridor conversations, and expo floor theater. Six hundred exhibitors. Hundreds of speakers. And one word — one concept, one obsession — that swallowed everything else whole. Not community. Agents.

    AI agents. Autonomous. Self-directing. Capable of taking action, accessing systems, making decisions, and — here's the part nobody says quite out loud — doing all of that while you're asleep, or in a meeting, or standing in line for a mediocre conference coffee wondering if you remembered to turn off the stove.
Somewhere between the third and fourth time someone said "agentic AI" to me on that expo floor, I stopped hearing it as a technology term and started hearing it as a sound effect. A drone. A hum. Background noise for a world already running without asking for my permission. The irony of gathering tens of thousands of humans together under the banner of community, only to spend four days talking almost exclusively about non-human workers — that particular irony seemed to float unacknowledged through the air conditioning. And that's when the flashback hit me. Not to any previous RSAC. To a screen. To a world I used to inhabit in the early days of World of Warcraft — before real life staged its intervention and I decided I needed one. In those massive online worlds, NPCs wandered their scripted paths. They had names, routines, dialogue trees, purpose. They looked like characters. They acted like characters. But they weren't. They were behavior patterns wearing a face. And the experienced player learned quickly: don't trust the ones you haven't verified. The convincing ones were sometimes the most dangerous. I kept thinking about that walking those corridors. About all these agents. Already deployed, already running inside enterprise systems, already accessing sensitive data, making tool calls, chaining actions in ways their human creators didn't fully anticipate. The gap between what's been launched in pilot programs and what's actually governed, monitored, and understood is — by most accounts from the conference — vast. Most enterprises are experimenting. Very few have the infrastructure to control what they've set loose. The rest are running something close to shadow agents: identities without owners, actions without accountability, behavior patterns wearing a face. Which brings me, inevitably, to Blade Runner. Not the flying cars. Not the neon rain. The real question at the center of Ridley Scott's masterpiece — and Philip K. 
Dick's before it — is simpler and far more disturbing: how do you tell the difference? The Voight-Kampff test existed precisely because replicants were convincing. They behaved like humans, responded like humans, even believed they were human sometimes. The problem wasn't that they were dangerous by design. The problem was that nobody could reliably track their intent. That's not science fiction anymore. It's the central problem RSAC 2026 couldn't stop circling. A significant portion of organizations at this point cannot distinguish AI agent activity from human activity in their own environments. The security industry has built its own Voight-Kampff problem — and hasn't finished building the test. The vocabulary had shifted too, from the previous year. At Black Hat last summer, the conversation was about whether to trust agents. At RSAC 2026 it had already moved to identity. To behavior. To intent. One of the sharper ideas surfacing from the keynotes was the distinction between delegation and trusted delegation. Giving an agent a task is easy. Building the security infrastructure to actually trust that delegation — to know what the agent can touch, what it can't, what it will do when nobody is watching — that's where it gets complicated. Without it, someone on that main stage used a phrase that landed hard: a fast track to bankruptcy. Because agents don't just answer questions. They act. And some of those actions are irreversible. So the question is no longer "who are you." It's "what do you want — and do I actually know what you're capable of?" Just like a Blade Runner asking a replicant about a tortoise left in the desert sun. One researcher put it with a directness I appreciated: we need an HR view of agents. Onboarding, monitoring, offboarding. If there's no business justification for an agent's existence — remove it. Which is a pragmatic way of saying: even our digital workforce needs accountability. Even our NPCs need a character sheet. 
And yet the deployment keeps accelerating. Agents with access and no clear owner. Identities running at machine speed through systems built for human-paced governance. The attack surface expanding quietly while the keynote applause was still echoing in the hall. Security researchers demonstrated live that vulnerabilities in agentic ecosystems are no longer theoretical — they're being exploited, chained, moving faster than the teams tasked with stopping them. We built the agents. We gave them access. We handed them the keys and stood back saying impressive, right? — hoping nothing goes wrong. With a chatbot, you worried about the wrong answer. With an agent, you worry about the wrong action. That's not a product problem wearing a vendor badge. That's a civilization-scale question dressed up in a conference lanyard. The Blade Runner didn't just hunt replicants. He had to learn to recognize them first. We'd better start learning fast — before it gets really awkward. As if it isn't already. Let's keep exploring what it means to be human in this Hybrid Analog Digital Age. Stay imperfect, stay human. — Marco End of transmission. ⓘ About Marco Ciappelli Co-Founder Studio C60 / ITSPmagazine | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍 These shows are all part of ITSPmagazine—which he co-founded with his good friend Sean Martin, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️ Want to connect with Sean and Marco On Location at an event or conference near you? 
See where they will be next: https://www.itspmagazine.com/on-location Learn more about Marco Ciappelli: marcociappelli.com ⓘ About Studio C60 We help cybersecurity startups build trust-based marketing and go-to-market strategies grounded in deep product understanding and real buyer insights. With hundreds of products brought to market and deep connections in the CISO community, we know what security leaders value in vendors. Learn more at studioc60.com Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    11 min
  6. Marketing, Brand, And Culture: Are You Paying the Silicon Valley Tax? A Conversation with Nick Richtsmeier of CultureCraft | Hosted by Marco Ciappelli

    APR 15

    Marketing, Brand, And Culture: Are You Paying the Silicon Valley Tax? A Conversation with Nick Richtsmeier of CultureCraft | Hosted by Marco Ciappelli

    **About this episode** What if everything you've been spending on digital marketing isn't an investment — but a tax? Nick Richtsmeier, founder of CultureCraft, joins Marco Ciappelli for a Brand Highlight that cuts straight to the root of why so many organizations feel stuck: not a marketing problem, but an alignment problem. Nick introduces the concept of the Silicon Valley tax — the ongoing cost most organizations pay to platforms that have no real incentive to show them what's working. He challenges the "attention economy" framing, arguing that what's actually being bought and sold is addictive behavior engineered by the algorithm. And he offers a different path: building trust in a humanist way, grounded in real alignment across culture, organizational design, positioning, point of view, and core community. The result is a conversation about brands — but really about integrity. About whether what an organization says and what it does are actually the same thing. And about why asking marketing to be the "sin eater" for every internal dysfunction is a strategy that will always come up short. 
**Connect with Nick Richtsmeier** [Nick Richtsmeier on LinkedIn](https://www.linkedin.com/in/nickrichtsmeier/) [CultureCraft](http://www.culturecraft.com) [CultureCraft on LinkedIn](https://www.linkedin.com/company/culturecraftconsulting/) **Connect with Marco & Studio C60** [Marco Ciappelli on LinkedIn](https://www.linkedin.com/in/marco-ciappelli) [Studio C60](https://www.studioc60.com) [ITSPmagazine](https://www.itspmagazine.com) **Keywords** brand strategy, organizational culture, trust building, marketing strategy, CultureCraft, Nick Richtsmeier, Silicon Valley tax, attention economy, algorithmic economy, brand alignment, digital marketing, humanist branding, organizational design, Trust Made Growth, sin eater marketing, brand highlight, Studio C60, ITSPmagazine, Marco Ciappelli **Want to tell your story?** [Full Length Brand Story](https://www.studioc60.com/content-creation#full) | [Brand Spotlight Story](https://www.studioc60.com/content-creation#spotlight) | [Brand Highlight Story](https://www.studioc60.com/content-creation#highlight) This is a Brand Highlight — a ~5 min intro conversation spotlighting the guest and their company. Learn more: [studioc60.com/creation#highlight](https://www.studioc60.com/creation#highlight)

    7 min
  7. When Sci-Fi Becomes the Business Plan | A Brand Highlight Conversation with Jacob Flores, Head of Research at Type One Ventures | Hosted by Marco Ciappelli

    APR 14

    When Sci-Fi Becomes the Business Plan | A Brand Highlight Conversation with Jacob Flores, Head of Research at Type One Ventures | Hosted by Marco Ciappelli

    When Sci-Fi Becomes the Business Plan A Brand Highlight Conversation with Jacob Flores, Head of Research at Type One Ventures There is a version of investing that asks what the return will be. And then there is the version that asks what kind of future the investment makes possible. Jacob Flores, Head of Research at Type One Ventures, is working firmly in the second category. Type One Ventures takes its name from the Kardashev Scale — a framework developed by Soviet astrophysicist Nikolai Kardashev that ranks civilizations by their level of technological advancement. A Type One civilization has mastered its home planet and is beginning to extend its reach beyond it. That is the destination this firm is trying to fund. Flores, a former engineer and product manager with roughly a decade of experience across industries, leads the research function at Type One with a focus on AI, neurotech, and biotechnology. The firm's investment lens is as much philosophical as it is financial. Type One looks for platform builders — companies whose core technology can be stacked across multiple applications, cultivating new marketplaces and entirely new categories of industry. Manufacturing in space is one clear example: in microgravity, it becomes possible to grow proteins, print circuits, and develop materials that cannot be produced the same way on Earth — yet those products have immediate, tangible value back on the ground. The thesis extends well beyond orbit. Type One is also backing neurotechnology companies working to restore vision and movement for people who have lost those abilities, and longevity research aimed at extending healthy human life. Flores frames these not as moonshots for their own sake, but as the new foundation layer for an entirely new level of global industry. This is a Brand Highlight. A Brand Highlight is a ~5 minute introductory conversation designed to put a spotlight on the guest and their company. 
Learn more Host Marco Ciappelli, Co-Founder, ITSPmagazine Guest Jacob Flores, Head of Research, Type One Ventures Resources Type One Ventures Type One Ventures on LinkedIn Want to tell your story? Full Length Brand Story Brand Spotlight Story Brand Highlight Story Keywords: Jacob Flores, Type One Ventures, Marco Ciappelli, brand story, brand marketing, marketing podcast, brand highlight, space technology, deep tech, venture capital, multi-planetary civilization, Kardashev Scale, manufacturing in space, neurotech, longevity, AI, biotechnology, frontier technology, space investing, human longevity, platform builders

    7 min
  8. Protecting Kids Online Since 2007 and in the Age of AI: Ben Halpert on Savvy Cyber Kids at RSAC 2026

    MAR 30

    Protecting Kids Online Since 2007 and in the Age of AI: Ben Halpert on Savvy Cyber Kids at RSAC 2026

In this episode from RSA Conference 2026, Marco Ciappelli sits down with Ben Halpert, founder of the non-profit organization Savvy Cyber Kids, to discuss the critical intersection of child development and technology. Since its founding in 2007, Savvy Cyber Kids has been on a mission to provide parents and educators with the tools needed to guide children through the digital world. Ben explains why introducing technology too early can be detrimental to a child’s emotional preparedness and brain development, and why adult-led guidance is essential even when kids seem like “tech experts.” In this conversation, we explore: The Evolution of Threats: Moving from MySpace and CRT monitors to 24/7 access via mobile devices. Early Intervention: Why the “rhyme and picture book” approach works for children as young as three to teach concepts like online aliases and stranger safety. Safe AI for Kids: Introducing a new partnership with Chaperone, a platform featuring “homework mode” and parental controls to ensure AI is a tool for learning, not a shortcut for thinking. Going Global: How the organization has expanded internationally with materials translated into Spanish, German, French, and Hebrew. About Our Guest Ben Halpert is a cybersecurity veteran with over 25 years of experience and the founder of Savvy Cyber Kids. He is dedicated to helping parents navigate the “wild” of the internet with positive, developmentally appropriate programming. Resources Savvy Cyber Kids Website: savvycyberkids.org More RSAC 2026 Coverage: itspmagazine.com/rsac Marco's Website: marcociappelli.com

    10 min

About

[ Formerly Redefining Society & Technology ] An Analog Brain In A Digital Age Podcast is your backstage pass to my mind — where analog meets digital, and the occasional pig flies. In an age racing toward algorithms and automation, the best ideas still come from curiosity, experience, emotion, and the unexpected connection. What you'll find are conversations on technology & society, storytelling in all its forms, branding & marketing, creativity, and the odd surprise.

More From ITSPmagazine Podcasts