On Location With Sean Martin And Marco Ciappelli

Sean Martin, ITSPmagazine, Marco Ciappelli

Whether we are there or not, ITSPmagazine still gets the best stories. Plenty of conferences and events spark our curiosity and allow us to start conversations with some of the world's brightest minds. In person or virtually, Sean Martin and Marco Ciappelli go on location and sit down with them at the intersection of technology, cybersecurity, and society. Together, we discover what the synergy of these three elements means for the future of humanity.

  1. AUG 27

    From Broadcasting to AI Agents: Mark Smith on Technology's 100-Year Evolution at IBC 2025 Amsterdam | On Location Event Coverage Podcast With Sean Martin & Marco Ciappelli

    I had one of those conversations that reminded me why I'm so passionate about exploring the intersection of technology and society. Speaking with Mark Smith, a board member at IBC and co-lead of their accelerator program, I found myself transported back to my roots in communication and media studies, but with eyes wide open to what's coming next.

    Mark has spent over 30 years in media technology, including 23 years building Mobile World Congress in Barcelona. When someone with that depth of experience gets excited about what's happening now, you pay attention. And what's happening at IBC 2025 in Amsterdam this September is nothing short of a redefinition of how we create, distribute, and authenticate content.

    The numbers alone are staggering: 1,350 exhibitors across 14 halls, nearly 300 speakers, 45,000 visitors. But what struck me wasn't the scale—it was the philosophical shift happening in how we think about media production. We're witnessing television's centennial year, with the first demonstrations happening in 1925, and yet we're simultaneously seeing the birth of entirely new forms of creative expression.

    What fascinated me most was Mark's description of their Accelerator Media Innovation Program. Since 2019, they've run over 50 projects involving 350 organizations, creating what he calls "a safe environment" for collaboration. This isn't just about showcasing new gadgets—it's about solving real challenges that keep media professionals awake at night. In our Hybrid Analog Digital Society, the traditional boundaries between broadcaster and audience, between creator and consumer, are dissolving faster than ever.

    The AI revolution in media production particularly caught my attention. Mark spoke about "AI assistant agents" and "agentic AI" with the enthusiasm of someone who sees liberation rather than replacement. As he put it, "It's an opportunity to take out a lot of laborious processes."
    But more importantly, he emphasized that it's creating new jobs—who would have thought "AI prompter" would become a legitimate profession? This perspective challenges the dystopian narrative often surrounding AI adoption. Instead of fearing the technology, the media industry seems to be embracing it as a tool for enhanced creativity. Mark's excitement was infectious when describing how AI can remove the "boring" aspects of production, allowing creative minds to focus on what they do best—tell stories that matter.

    But here's where it gets really interesting from a sociological perspective: the other side of the screen. We talked about how streaming revolutionized content consumption, giving viewers unprecedented control over their experience. Yet Mark observed something I've noticed too—while the technology exists for viewers to be their own directors (choosing camera angles in sports, for instance), many prefer to trust the professional's vision. We're not necessarily seeking more control; we're seeking more relevance and authenticity.

    This brings us to one of the most critical challenges of our time: content provenance. In a world where anyone can create content that looks professional, how do we distinguish between authentic journalism and manufactured narratives? Mark highlighted their work on C2PA (the Coalition for Content Provenance and Authenticity), developing tools that can sign and verify media sources, tracking where content has been manipulated. This isn't just a technical challenge—it's a societal imperative. As Mark noted, YouTube is now the second most viewed platform in the UK. When user-generated content competes directly with traditional media, we need new frameworks for understanding truth and authenticity. The old editorial gatekeepers are gone; we need technological solutions that preserve trust while enabling creativity.

    What gives me hope is the approach I heard from Mark and his colleagues.
    They're not trying to control technology's impact on society—they're trying to shape it consciously. The IBC Accelerator Program represents something profound: an industry taking responsibility for its own transformation, creating spaces for collaboration rather than competition, focusing on solving real problems rather than just building cool technology.

    The Google Hackfest they're launching this year perfectly embodies this philosophy. Young broadcast engineers and software developers working together on real challenges, supported by established companies like Formula E. It's not about replacing human creativity with artificial intelligence—it's about augmenting human potential with technological tools.

    As I wrapped up our conversation, I found myself thinking about my own journey from studying sociology of communication in a pre-internet world to hosting podcasts about our digital transformation. Technology doesn't just change how we communicate—it changes who we are as communicators, as creators, as human beings sharing stories.

    IBC 2025 isn't just a trade show; it's a glimpse into how we're choosing to redefine our relationship with media technology. And that choice—that conscious decision to shape rather than simply react—gives me genuine optimism about our Hybrid Analog Digital Society.

    24 min
  2. AI Confusion, Privacy Pressures, and the Search for Real Value in Cybersecurity | A Black Hat USA 2025 Conversation with Evgeniy Kharam | On Location Coverage with Sean Martin and Marco Ciappelli

    AUG 24

    This year at Black Hat USA 2025, the conversation is impossible to escape: artificial intelligence. But while every vendor claims an AI-powered edge, the real question is how organizations can separate meaningful innovation from noise. In our discussion with Evgeniy Kharam, former Vice President of Cybersecurity Architecture at Herjavec Group, now Chief Strategy Officer (CSO) at Discern Security, and a long-time security leader and author, the theme of AI confusion takes center stage. Evgeniy notes that CISOs and security architects don’t have the time or resources to analyze what “AI” means in every product pitch. With over 4,000 vendors in the ecosystem, each layering its own flavor of AI, the burden falls on security leaders to distinguish hype from usable automation.

    From Gondola Pitches to AI Overload
    Evgeniy shares how his creative networking events—skiing, biking, and beyond—mirror the industry’s need for genuine connection and trust. Just as his “gondola pitch” builds authentic engagement, buyers want clarity and honesty from technology providers. The proliferation of AI labels, however, makes that trust harder to establish.

    Where AI Can Help
    Evgeniy highlights areas where AI can reduce friction, from vulnerability management and detection to policy writing and compliance. Yet, even here, issues such as hallucinations, privacy tradeoffs, and ethics cannot be ignored. When AI begins influencing employee monitoring or analyzing sensitive data, organizations face difficult questions about fairness, transparency, and control.

    The Unspoken Challenge: Surveillance and Trust
    As we discuss the balance between employee privacy and corporate protection, it becomes clear that AI introduces new layers of surveillance. In Europe, cultural and legal boundaries create clear separation between personal and professional lives. In North America, the lines blur, raising ethical debates that may ultimately be tested in courts. The takeaway?
    AI has the potential to unlock workflows that were previously too costly or complex. But without transparency, governance, and a commitment to responsible use, the “AI in everything” trend risks overwhelming the very leaders it is meant to help.
    ___________
    Guest: Evgeniy Kharam, Chief Strategy Officer (CSO), Discern Security | On LinkedIn: https://www.linkedin.com/in/ekharam/
    Hosts:
    Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
    Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com
    ___________
    Episode Sponsors
    ThreatLocker: https://itspm.ag/threatlocker-r974
    BlackCloak: https://itspm.ag/itspbcweb
    Akamai: https://itspm.ag/akamailbwc
    DropzoneAI: https://itspm.ag/dropzoneai-641
    Stellar Cyber: https://itspm.ag/stellar-9dj3
    ___________
    Resources
    Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25
    ITSPmagazine Webinar: What’s Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year’s Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference
    Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
    Want to tell your Brand Story Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf
    Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us
    ___________
    KEYWORDS
    sean martin, marco ciappelli, evgeniy kharam, black hat usa 2025, ai, privacy, surveillance, cybersecurity, automation, governance, event coverage, on location, conference

    16 min
  3. "We're Becoming Dumb and Numb": Why Black Hat 2025's AI Hype Is Killing Cybersecurity -- And Our Ability to Think | Random and Unscripted Weekly Update with Sean Martin and Marco Ciappelli

    AUG 20

    __________________
    Summary
    Sean and Marco dissect Black Hat USA 2025, where every vendor claimed to have "agentic AI" solutions. They expose how marketing buzzwords create noise that frustrates CISOs seeking real value. Marco references the Greek myth of Talos - an ancient AI robot that seemed invincible until one fatal flaw destroyed it - as a metaphor for today's overinflated AI promises. The discussion spirals into deeper concerns: are we becoming too dependent on AI decision-making? They warn about echo chambers, lowest common denominators, and losing our ability to think critically. The solution? Stop selling perfection, embrace product limitations, and keep humans in control.
    __________________
    10 Notable Quotes
    Sean:
    "It's hard for them to siphon the noise. Sift through the noise, I should say, and figure out what the heck is really going on."
    "If we completely just use it for the easy button, we'll stop thinking and we won't use it as a tool to make things better."
    "We'll stop thinking and we won't use it as a tool to make our minds better, to make our decisions better."
    "We are told then that this is the reality. This is what good looks like."
    "Maybe there's a different way to even look at things. So it's kind of become uniform... a very low common denominator that is just good enough for everybody."
    Marco:
    "Do you really wanna trust the weapon to just go and shoot everybody? At least you can tell it's a human factor and that's the people that ultimately decide."
    "If we don't make decision anymore, we're gonna turn out in a lot of those sci-fi stories, like the time machine where we become dumb."
    "We all perceive reality to be different from what it is, and then it creates a circular knowledge learning where we use AI to create the knowledge, then to ask the question, then to give the answers."
    "We're just becoming dumb and numb. More than dumb, but we become numb to everything else because we're just not thinking with our own head."
    "You're selling the illusion of security and that could be something that then you replicate in other industries."
    __________________

    Picture this: You walk into the world's largest cybersecurity conference, and every single vendor booth is screaming the same thing – "agentic AI." Different companies, different products, but somehow they all taste like the same marketing milkshake. That's exactly what Sean Martin and Marco Ciappelli witnessed at Black Hat USA 2025, and their latest Random and Unscripted with Sean and Marco episode pulls no punches in exposing what's really happening behind the buzzwords.

    "Marketing just took all the cool technology that each vendor had, put it in a blender and made a shake that just tastes the same," Marco reveals, describing how the conference floor felt like one giant echo chamber where innovation got lost in translation.

    But this isn't just another rant about marketing speak. The conversation takes a darker turn when Marco introduces the ancient Greek myth of Talos – a bronze giant powered by divine ichor who was tasked with autonomously defending Crete. Powerful, seemingly invincible, until one small vulnerability brought the entire system crashing down. Sound familiar? "Do you really wanna trust the weapon to just go and shoot everybody?" Marco asks, drawing parallels between ancient mythology and today's rush to hand over decision-making to AI systems we don't fully understand.

    Sean, meanwhile, talked to frustrated CISOs throughout the event who shared a common complaint: "It's hard for them to sift through the noise and figure out what the heck is really going on." When every vendor claims their AI is autonomous and perfect, how do you choose? How do you even know what you're buying?

    The real danger, they argue, isn't just bad purchasing decisions. It's what happens when we stop thinking altogether. "If we completely just use it for the easy button, we'll stop thinking and we won't use it as a tool to make our minds better," Sean warns. We risk settling for what he calls the "lowest common denominator" – a world where AI tells us what success looks like, and we never question whether we could do better.

    Marco goes even further, describing a "circular knowledge learning" trap where "we use AI to create the knowledge, then to ask the question, then to give the answers." The result? "We're just becoming dumb and numb. More than dumb, but we become numb to everything else because we're just not thinking with our own head."

    Their solution isn't to abandon AI – it's to get honest about what it can and can't do. "Stop looking for the easy button and stop selling the easy button," Marco urges vendors. "Your product is probably as good as it is." Sean adds: "Don't be afraid to share your blemishes, share your weaknesses. Share your gaps." Because here's the thing CISOs know that vendors often forget: "CISOs are not stupid. They talk to each other. The truth will come out."

    In an industry built on protecting against deception, maybe it's time to stop deceiving ourselves about what AI can actually deliver.
    ________________
    Keywords: cybersecurity, artificialintelligence, blackhat2025, agentic, ai, marketing, ciso, cybersec, infosec, technology, leadership, vendor, innovation, automation, security, tech, AI, machinelearning, enterprise, business
    ________________
    Hosts links:
    📌 Marco Ciappelli: https://www.marcociappelli.com
    📌 Sean Martin: https://www.seanmartin.com

    28 min
  4. The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes Optional | Reflections from Black Hat USA 2025 on the Marketing That Chose Fiction Over Facts

    AUG 19

    Podcast: Redefining Society and Technology | https://redefiningsocietyandtechnologypodcast.com
    _____________________________
    This Episode’s Sponsors
    BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
    BlackCloak: https://itspm.ag/itspbcweb
    _____________________________
    A Musing On Society & Technology Newsletter
    Written By Marco Ciappelli | Read by TAPE3
    August 18, 2025

    The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes Optional
    Reflections from Black Hat USA 2025 on Deception, Disinformation, and the Marketing That Chose Fiction Over Facts
    By Marco Ciappelli

    Sean Martin, CISSP, just published his analysis of Black Hat USA 2025, documenting what he calls the cybersecurity vendor "echo chamber." Reviewing over 60 vendor announcements, Sean found identical phrases echoing repeatedly: "AI-powered," "integrated," "reduce analyst burden." The sameness forces buyers to sift through near-identical claims to find genuine differentiation.

    This reveals more than a marketing problem—it suggests that different technologies are being fed into the same promotional blender, possibly a generative AI one, producing standardized output regardless of what went in. When an entire industry converges on identical language to describe supposedly different technologies, meaningful technical discourse breaks down.

    But Sean's most troubling observation wasn't about marketing copy—it was about competence. When CISOs probe vendor claims about AI capabilities, they encounter vendors who cannot adequately explain their own technologies. When conversations moved beyond marketing promises to technical specifics, answers became vague, filled with buzzwords about proprietary algorithms.
    Reading Sean's analysis while reflecting on my own Black Hat experience, I realized we had witnessed something unprecedented: an entire industry losing the ability to distinguish between authentic capability and generated narrative—precisely as that same industry was studying external "narrative attacks" as an emerging threat vector.

    The irony was impossible to ignore. Black Hat 2025 sessions warned about AI-generated deepfakes targeting executives, social engineering attacks using scraped LinkedIn profiles, and synthetic audio calls designed to trick financial institutions. Security researchers documented how adversaries craft sophisticated deceptions using publicly available content. Meanwhile, our own exhibition halls featured countless unverifiable claims about AI capabilities that even the vendors themselves couldn't adequately explain.

    But to understand what we witnessed, we need to examine the very concept that cybersecurity professionals were discussing as an external threat: narrative attacks. These represent a fundamental shift in how adversaries target human decision-making. Unlike traditional cyberattacks that exploit technical vulnerabilities, narrative attacks exploit psychological vulnerabilities in human cognition. Think of them as social engineering and propaganda supercharged by AI—personalized deception at scale that adapts faster than human defenders can respond. They flood information environments with false content designed to manipulate perception and erode trust, rendering rational decision-making impossible.

    What makes these attacks particularly dangerous in the AI era is scale and personalization. AI enables automated generation of targeted content tailored to individual psychological profiles. A single adversary can launch thousands of simultaneous campaigns, each crafted to exploit specific cognitive biases of particular groups or individuals.
    But here's what we may have missed during Black Hat 2025: the same technological forces enabling external narrative attacks have already compromised our internal capacity for truth evaluation. When vendors use AI-optimized language to describe AI capabilities, when marketing departments deploy algorithmic content generation to sell algorithmic solutions, when companies building detection systems can't detect the artificial nature of their own communications, we've entered a recursive information crisis.

    From a sociological perspective, we're witnessing the breakdown of social infrastructure required for collective knowledge production. Industries like cybersecurity have historically served as early warning systems for technological threats—canaries in the coal mine with enough technical sophistication to spot emerging dangers before they affect broader society. But when the canary becomes unable to distinguish between fresh air and poison gas, the entire mine is at risk.

    This brings us to something the literary world understood long before we built our first algorithm. Jorge Luis Borges, the Argentine writer, anticipated this crisis in his 1940s stories like "On Exactitude in Science" and "The Library of Babel"—tales about maps that become more real than the territories they represent and libraries containing infinite books, including false ones. In his fiction, simulations and descriptions eventually replace the reality they were meant to describe.

    We're living in a Borgesian nightmare where marketing descriptions of AI capabilities have become more influential than actual AI capabilities. When a vendor's promotional language about their AI becomes more convincing than a technical demonstration, when buyers make decisions based on algorithmic marketing copy rather than empirical evidence, we've entered that literary territory where the map has consumed the landscape. And we've lost the ability to distinguish between them.
    The historical precedent is Orson Welles's 1938 War of the Worlds broadcast, which created mass hysteria from fiction. But here's the crucial difference: Welles was human, the script was human-written, the performance required conscious participation, and the deception was traceable to human intent. Listeners had to actively choose to believe what they heard.

    Today's AI-generated narratives operate below the threshold of conscious recognition. They require no active participation—they work by seamlessly integrating into information environments in ways that make detection impossible even for experts. When algorithms generate technical claims that sound authentic to human evaluators, when the same systems create both legitimate documentation and marketing fiction, we face deception at a level Welles never imagined: the algorithmic manipulation of truth itself.

    The recursive nature of this problem reveals itself when you try to solve it. How do you fact-check AI-generated claims about AI using AI-powered tools? How do you verify technical documentation when the same systems create both authentic docs and marketing copy? When the tools generating problems and solving problems converge into identical technological artifacts, conventional verification approaches break down completely.

    My first Black Hat article explored how we risk losing human agency by delegating decision-making to artificial agents. But this goes deeper: we risk losing human agency in the construction of reality itself. When machines generate narratives about what machines can do, truth becomes algorithmically determined rather than empirically discovered.

    Marshall McLuhan famously said "We shape our tools, and thereafter they shape us." But he couldn't have imagined tools that reshape our perception of reality itself.
    We haven't just built machines that give us answers—we've built machines that decide what questions we should ask and how we should evaluate the answers.

    But the implications extend far beyond cybersecurity itself. If the sector responsible for detecting digital deception becomes the first victim of algorithmic narrative pollution, what hope do other industries have? Healthcare systems relying on AI diagnostics they can't explain. Financial institutions using algorithmic trading based on analyses they can't verify. Educational systems teaching AI-generated content whose origins remain opaque. When the industry that guards against deception loses the ability to distinguish authentic capability from algorithmic fiction, society loses its early warning system for the moment when machines take over truth construction itself.

    So where does this leave us? That moment may have already arrived. We just don't know it yet—and increasingly, we lack the cognitive infrastructure to find out. But here's what we can still do: We can start by acknowledging we've reached this threshold. We can demand transparency not just in AI algorithms, but in the human processes that evaluate and implement them. We can rebuild evaluation criteria that distinguish between technical capability and marketing narrative.

    And here's a direct challenge to the marketing and branding professionals reading this: it's time to stop relying on AI algorithms and data optimization to craft your messages. The cybersecurity industry's crisis should serve as a warning—when marketing becomes indistinguishable from algorithmic fiction, everyone loses. Social media has taught us that the most respected brands are those that choose honesty over hype, transparency over clever messaging. Brands that walk the walk and talk the talk, not those that let machines do the talking.
    The companies that will survive this epistemological crisis are those whose marketing teams become champions of truth rather than architects of confusion. When your audience can no longer distinguish between human insight and machine-generated claims, authentic communication becomes your competitive advantage.

    Most importantly, we can remember that the goal was never to build machines that think for us, but machines that help us think better.

    14 min
  5. When Artificial Intelligence Becomes the Baseline: Will We Even Know What Reality Is AInymore? | A Black Hat USA 2025 Recap | A Musing On the Future of Cybersecurity with Sean Martin and TAPE3 | Read by TAPE3

    AUG 15

    At Black Hat USA 2025, artificial intelligence wasn’t the shiny new thing — it was the baseline. Nearly every product launch, feature update, and hallway conversation had an “AI-powered” stamp on it. But when AI becomes the lowest common denominator for security, the questions shift.

    In this episode, I read my latest opinion piece exploring what happens when the tools we build to protect us are the same ones that can obscure reality — or rewrite it entirely. Drawing from the Lock Note discussion, Jennifer Granick’s keynote on threat modeling and constitutional law, my own CISO hallway conversations, and a deep review of 60+ vendor announcements, I examine the operational, legal, and governance risks that emerge when speed and scale take priority over transparency and accountability.

    We talk about model poisoning — not just in the technical sense, but in how our industry narrative can get corrupted by hype and shallow problem-solving. We look at the dangers of replacing entry-level security roles with black-box automation, where a single model misstep can cascade into thousands of bad calls at machine speed. And yes, we address the potential liability for CISOs and executives who let it happen without oversight.

    Using Mikko Hyppönen’s “Game of Tetris” metaphor, I explore how successes vanish quietly while failures pile up for all to see — and why in the AI era, that stack can build faster than ever.

    If AI is everywhere, what defines the premium layer above the baseline? How do we ensure we can still define success, measure it accurately, and prove it when challenged? Listen in, and then join the conversation: Can you trust the “reality” your systems present — and can you prove it?
    ________
    This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence. Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.
    Sincerely, Sean Martin and TAPE3
    ________
    ✦ Resources
    Article: When Artificial Intelligence Becomes the Baseline: Will We Even Know What Reality Is AInymore? | https://www.linkedin.com/pulse/when-artificial-intelligence-becomes-baseline-we-even-martin-cissp-4idqe/
    The Future of Cybersecurity Article: How Novel Is Novelty? Security Leaders Try To Cut Through the Cybersecurity Vendor Echo Chamber at Black Hat 2025: https://www.linkedin.com/pulse/how-novel-novelty-security-leaders-try-cut-through-sean-martin-cissp-xtune/
    Black Hat 2025 On Location Closing Recap Video with Sean Martin, CISSP and Marco Ciappelli: https://youtu.be/13xP-LEwtEA
    Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25
    Article: When Virtual Reality Is A Commodity, Will True Reality Come At A Premium? https://sean-martin.medium.com/when-virtual-reality-is-a-commodity-will-true-reality-come-at-a-premium-4a97bccb4d72
    Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
    ITSPmagazine Studio — A Brand & Marketing Advisory for Cybersecurity and Tech Companies: https://www.itspmagazine.studio/
    ITSPmagazine Webinar: What’s Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year’s Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference
    ________
    Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast.
These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️ Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location To learn more about Sean, visit his personal website.

    6 min
  6. From Fish Tanks to AI Agents: Why the Words “We’re Secure” Mean Nothing Without Proof | A Black Hat USA 2025 Conversation with Viktor Petersson | On Location Coverage with Sean Martin and Marco Ciappelli

    AUG 11

    When security becomes more than a checkbox, the conversation shifts from “how much” to “how well.” At Black Hat USA 2025, Sean Martin, CISSP, Co-Founder of ITSPmagazine, and Viktor Petersson, Founder of an SBOM artifact platform, unpack how regulatory forces, cultural change, and AI innovation are reshaping how organizations think about security.

    Viktor points to the growing role of Software Bill of Materials (SBOMs) as not just a best practice, but a likely requirement in future compliance frameworks. The shift, he notes, is driven largely by regulation—especially in Europe—where security is no longer a “nice to have” but a mandated operational function. Sean connects this to a market reality: companies increasingly see transparent security practices as a competitive differentiator, though the industry still struggles with the hollow claim of simply being “secure.”

    AI naturally dominates discussions, but the focus is nuanced. Rather than chasing hype, both stress the need for strong guardrails before scaling AI-driven development. Viktor envisions engineers supervising fleets of specialized AI agents—handling tasks from UX to code auditing—while Sean sees AI as a way to rethink entire operational models. Yet both caution that without foundational security practices, AI only amplifies existing risks.

    The conversation extends to IoT and supply chain security, where market failures allow insecure, end-of-life devices to persist in critical environments. The infamous “smart fish tank” hack in a Las Vegas casino serves as a reminder: the weakest link often isn’t the target itself, but the entry point it provides.

    DEFCON, Viktor notes, offers a playground for challenging assumptions—whether it’s lock-picking to illustrate perceived versus actual security, or examining the human factor in breaches.
For both hosts, events like Black Hat and DEFCON aren’t just about the latest vulnerabilities or flashy demos—they’re about the human exchange of ideas, the reframing of problems, and the collaboration that fuels more resilient security strategies. ___________ Guest: Viktor Petersson, Founder, sbomify | On LinkedIn: https://www.linkedin.com/in/vpetersson/ Hosts: Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com ___________ Episode Sponsors ThreatLocker: https://itspm.ag/threatlocker-r974 BlackCloak: https://itspm.ag/itspbcweb Akamai: https://itspm.ag/akamailbwc DropzoneAI: https://itspm.ag/dropzoneai-641 Stellar Cyber: https://itspm.ag/stellar-9dj3 ___________ Resources Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25 ITSPmagazine Webinar: What’s Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year’s Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage Want to tell your Brand Story Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us ___________ KEYWORDS black hat usa 2025, sean martin, viktor petersson, sbom, compliance, ai, guardrails, iot, defcon, regulation, event coverage, on location, conference

    27 min
  7.

    AUG 10

    The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for Ourselves | Reflections from Black Hat USA 2025 on the Latest Tech Salvation Narrative | A Musing On Society & Technology Newsletter

    ⸻ Podcast: Redefining Society and Technology https://redefiningsocietyandtechnologypodcast.com  _____________________________ This Episode’s Sponsors BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach. BlackCloak:  https://itspm.ag/itspbcweb _____________________________ A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3 August 9, 2025 The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for Ourselves Reflections from Black Hat USA 2025 on the Latest Tech Salvation Narrative Walking the floors of Black Hat USA 2025 for what must be the 10th or 11th time as accredited media—honestly, I've stopped counting—I found myself witnessing a familiar theater. The same performance we've seen play out repeatedly in cybersecurity: the emergence of a new technological messiah promising to solve all our problems. This year's savior? Agentic AI. The buzzword echoes through every booth, every presentation, every vendor pitch. Promises of automating 90% of security operations, platforms for autonomous threat detection, agents that can investigate novel alerts without human intervention. The marketing materials speak of artificial intelligence that will finally free us from the burden of thinking, deciding, and taking responsibility. It's Talos all over again. In Greek mythology, Hephaestus forged Talos, a bronze giant tasked with patrolling Crete's shores, hurling boulders at invaders without human intervention. Like contemporary AI, Talos was built to serve specific human ends—security, order, and control—and his value was determined by his ability to execute these ends flawlessly. The parallels to today's agentic AI promises are striking: autonomous patrol, threat detection, automated response. Same story, different millennium. 
But here's what the ancient Greeks understood that we seem to have forgotten: every artificial creation, no matter how sophisticated, carries within it the seeds of its own limitations and potential dangers. Industry observers noted over a hundred announcements promoting new agentic AI applications, platforms or services at the conference. That's more than one AI agent announcement per hour. The marketing departments have clearly been busy. But here's what baffles me: why do we need to lie to sell cybersecurity? You can give away t-shirts, dress up as comic book superheroes with your logo slapped on their chests, distribute branded board games, and pretend to be a sports team all day long—that's just trade show theater, and everyone knows it. But when marketing pushes past the limits of what's even believable, when they make claims so grandiose that their own engineers can't explain them, something deeper is broken. If marketing departments think CISOs are buying these lies, they have another thing coming. These are people who live with the consequences of failed security implementations, who get fired when breaches happen, who understand the difference between marketing magic and operational reality. They've seen enough "revolutionary" solutions fail to know that if something sounds too good to be true, it probably is. Yet the charade continues, year after year, vendor after vendor. The real question isn't whether the technology works—it's why an industry built on managing risk has become so comfortable with the risk of overselling its own capabilities. Something troubling emerges when you move beyond the glossy booth presentations and actually talk to the people implementing these systems. Engineers struggle to explain exactly how their AI makes decisions. 
Security leaders warn that artificial intelligence might become the next insider threat, as organizations grow comfortable trusting systems they don't fully understand, checking their output less and less over time. When the people building these systems warn us about trusting them too much, shouldn't we listen? This isn't the first time humanity has grappled with the allure and danger of artificial beings making decisions for us. Mary Shelley's Frankenstein, published in 1818, explored the hubris of creating life—and intelligence—without fully understanding the consequences. The novel raises the same question we face today: what are humans allowed to do with this forbidden power of creation? The question becomes more pressing when we consider what we're actually delegating to these artificial agents. It's no longer just pattern recognition or data processing—we're talking about autonomous decision-making in critical security scenarios. Conference presentations showcased significant improvements in proactive defense measures, but at what cost to human agency and understanding? Here's where the conversation jumps from cybersecurity to something far more fundamental: what are we here for if not to think, evaluate, and make decisions? From a sociological perspective, we're witnessing the construction of a new social reality where human agency is being systematically redefined. Survey data shared at the conference revealed that most security leaders feel the biggest internal threat is employees unknowingly giving AI agents access to sensitive data. But the real threat might be more subtle: the gradual erosion of human decision-making capacity as a social practice. When we delegate not just routine tasks but judgment itself to artificial agents, we're not just changing workflows—we're reshaping the fundamental social structures that define human competence and authority. 
We risk creating a generation of humans who have forgotten how to think critically about complex problems, not because they lack the capacity, but because the social systems around them no longer require or reward such thinking. E.M. Forster saw this coming in 1909. In "The Machine Stops," he imagined a world where humanity becomes completely dependent on an automated system that manages all aspects of life—communication, food, shelter, entertainment, even ideas. People live in isolation, served by the Machine, never needing to make decisions or solve problems themselves. When someone suggests that humans should occasionally venture outside or think independently, they're dismissed as primitive. The Machine has made human agency unnecessary, and humans have forgotten they ever possessed it. When the Machine finally breaks down, civilization collapses because no one remembers how to function without it. Don't misunderstand me—I'm not a Luddite. AI can and should help us manage the overwhelming complexity of modern cybersecurity threats. The technology demonstrations I witnessed showed genuine promise: reasoning engines that understand context, action frameworks that enable response within defined boundaries, learning systems that improve based on outcomes. The problem isn't the technology itself but the social construction of meaning around it. What we're witnessing is the creation of a new techno-social myth—a collective narrative that positions agentic AI as the solution to human fallibility. This narrative serves specific social functions: it absolves organizations of the responsibility to invest in human expertise, justifies cost-cutting through automation, and provides a technological fix for what are fundamentally organizational and social problems. The mythology we're building around agentic AI reflects deeper anxieties about human competence in an increasingly complex world. 
Rather than addressing the root causes—inadequate training, overwhelming workloads, systemic underinvestment in human capital—we're constructing a technological salvation narrative that promises to make these problems disappear. Vendors spoke of human-machine collaboration, AI serving as a force multiplier for analysts, handling routine tasks while escalating complex decisions to humans. This is a more honest framing: AI as augmentation, not replacement. But the marketing materials tell a different story, one of autonomous agents operating independently of human oversight. I've read a few posts on LinkedIn and spoken with people who know this topic far better than I do, and I get the same feeling. There's a troubling pattern emerging: many vendor representatives can't adequately explain their own AI systems' decision-making processes. When pressed on specifics—how exactly does your agent determine threat severity? What happens when it encounters an edge case it wasn't trained for?—answers become vague, filled with marketing speak about proprietary algorithms and advanced machine learning. This opacity is dangerous. If we're going to trust artificial agents with critical security decisions, we need to understand how they think—or more accurately, how they simulate thinking. Every machine learning system requires human data scientists to frame problems, prepare data, determine appropriate datasets, remove bias, and continuously update the software. The finished product may give the impression of independent learning, but human intelligence guides every step. The future of cybersecurity will undoubtedly involve more automation, more AI assistance, more artificial agents handling routine tasks. But it should not involve the abdication of human judgment and responsibility. We need agentic AI that operates with transparency, that can explain its reasoning, that acknowledges its limitations. We need systems designed to augment human intelligence, not replace it. 
Most importantly, we need to resist the seductive narrative that technology alone can solve problems that are fundamentally human in nature. The prevailing logic that tech fixes tech, and that AI will fix AI, is deeply unsettling. It's a recursive delusion that takes us further away from human wisdom and closer to a world where we've forgotten that the most important p

    17 min
  8.

    AUG 10

    How Novel Is Novelty? Security Leaders Try To Cut Through the Cybersecurity Vendor Echo Chamber | Reflections from Black Hat USA 2025 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE3 | Read by TAPE3

    Black Hat 2025 was a showcase of cybersecurity innovation — or at least, that’s how it appeared on the surface. With more than 60 vendor announcements over the course of the week, the event floor was full of “AI-powered” solutions promising to integrate seamlessly, reduce analyst fatigue, and transform SOC operations. But after walking the floor, talking with CISOs, and reviewing the press releases, a pattern emerged: much of the messaging sounded the same, making it hard to distinguish the truly game-changing from the merely loud. In this episode of The Future of Cybersecurity Newsletter, I take you behind the scenes to unpack the themes driving this year’s announcements. Yes, AI dominated the conversation, but the real story is in how vendors are (or aren’t) connecting their technology to the operational realities CISOs face every day. I share insights gathered from private conversations with security leaders — the unfiltered version of how these announcements are received when the marketing gloss is stripped away. We dig into why operational relevance, clarity, and proof points matter more than ever. If you can’t explain what your AI does, what data it uses, and how it’s secured, you’re already losing the trust battle. For CISOs, I outline practical steps to evaluate vendor claims quickly and identify solutions that align with program goals, compliance needs, and available resources. And for vendors, this episode serves as a call to action: cut the fluff, be transparent, and frame your capabilities in terms of measurable program outcomes. I share a framework for how to break through the noise — not just by shouting louder, but by being more real, more specific, and more relevant to the people making the buying decisions. Whether you’re building a security stack or selling into one, this conversation will help you see past the echo chamber and focus on what actually moves the needle. 
________ This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence. Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn. Sincerely, Sean Martin and TAPE3 ________ ✦ Resources Black Hat 2025 On Location Closing Recap Video with Sean Martin, CISSP and Marco Ciappelli: https://youtu.be/13xP-LEwtEA ITSPmagazine Studio — A Brand & Marketing Advisory for Cybersecurity and Tech Companies: https://www.itspmagazine.studio/ ITSPmagazine Webinar: What’s Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year’s Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25 Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage Citations: Available in the full article ________ Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️ Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location To learn more about Sean, visit his personal website.

    12 min

About

Whether we are there or not, ITSPmagazine still gets the best stories. Plenty of conferences and events spark our curiosity and allow us to start conversations with some of the world's brightest minds. In-person or virtually, Sean Martin and Marco Ciappelli go on-location and sit down with them at the intersection of technology, cybersecurity, and society. Together, we discover what the synergy of these three elements means for the future of humanity.