No Effing AIdea!

Srini and David

No Effing AIdea! is where enterprise leaders get the real story on AI adoption. Forget the hype and vendor theatre — this is the messy middle, where boards want moonshots, compliance says no, and customers push back on brilliance. Hosts Srini Annamaraju and David Royle bring 30+ years of earned scars in digital and AI transformation. Every fortnight, they cut through the noise with:

The Cold Open — a provocative stat or story you should be paying attention to (AI ethics, job displacement, enterprise fraud, you name it)
The Reality Check — a fast, unfiltered rundown of the last two weeks in enterprise AI, decoded for what it really means
The Deep Dive — one paradox in focus, like midsize firms “too big for hacks, too lean for moonshots,” or boards demanding both ROI and revolution at once
The Paradox Box — candid Q&A with execs, founders, and investors wrestling with the contradictions of AI in the enterprise

It’s pragmatic, funny, and sometimes brutal. Finally, a podcast that talks about AI transformation like adults who’ve actually been there.

Episodes

  1. 5 DAYS AGO

    Ep #8: Enterprise AI Field Notes: Agents at Work, Live Demo, Guardrails + Responsible Innovation

    Hosts: Srini Annamaraju & David Royle. Guest: Ravi Ramchandran.

    Welcome to episode 8. AI agents are getting easier to build. That’s the exciting bit. The risky bit is that organisations can now create weak, badly governed automations before leadership has worked out what “good” actually looks like. In this episode, Ravi joins Srini and David to pull the conversation out of buzzword-land and into real work. He walks through a practical example of building an agent that turns meeting transcripts into status reports, then digs into what matters underneath: prompt discipline, guardrails, safe experimentation, risk metrics, and why handing people tools without changing operating practice is asking for trouble.

    The conversation moves from macro AI noise to enterprise reality. How should leaders think about the 70-20-10 split of routine, experimental, and visionary work? Where does human friction still belong? And how do you encourage innovation without creating a quiet flood of low-quality AI output across the firm?

    What we cover

    Macro AI reality check - Why the sensible middle matters more than the hype-or-panic cycle.
    Productivity is starting to show up - Early signs of measurable uplift are emerging, even if the landing is still messy.
    The 70-20-10 work model - How to cut routine work and create more room for experimentation and higher-order thinking.
    Innovation becomes everybody’s job - The barrier to building has dropped so far that innovation can’t stay in a corporate side room.
    A live agent example - Ravi demonstrates how meeting transcripts can be turned into weekly status reporting.
    Why prompts are not enough - One decent output is not the same as a repeatable capability.
    Risk metrics for the AI era - Traditional productivity measures are no longer enough.
    A seven-day build plan - Ravi shares a practical way to identify, scope, and build useful agents.
    Chapters

    AI noise vs real enterprise adoption
    Why productivity gains are starting to matter
    The 70-20-10 model for redesigning work
    Innovation becomes everybody’s business
    Live demo: agent for weekly status reports
    Prompting, grounding, and hallucination risk
    Guardrails, policy, and engineering practice
    Risk metrics and trust in production
    A seven-day framework for useful agents

    Top-5 Takeaways

    Tools alone do not transform organisations
    Agents need boundaries, not vibes
    AI risk is now operational risk
    Safe experimentation needs leadership air cover
    Frameworks beat random enthusiasm

    Who it’s for

    Enterprise leaders in all functions interested in AI adoption.

    Help Spread the Word

    Enjoyed the episode? Follow us!

    Template Takeaways

    Ravi has kindly shared the two templates he walked us through for general open access. Please feel free to download them from this Google Drive folder: https://drive.google.com/drive/folders/1yKGryaEQ4lM8hLSf1il3jrqbZj4XgHrt?usp=sharing
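The live demo in this episode turns meeting transcripts into status reports. Purely as an illustration of the shape of that pipeline (not Ravi's actual template), here is a minimal rule-based sketch in Python; in a real agent the keyword matching below would be replaced by an LLM prompt with guardrails, and the trigger words and report fields are our own assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class StatusReport:
    decisions: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    risks: list = field(default_factory=list)

# Hypothetical trigger words; a production agent would use a grounded
# LLM prompt with guardrails here rather than keyword matching.
KEYWORDS = {
    "decisions": ("decided", "agreed"),
    "actions": ("will ", "action:", "next step"),
    "risks": ("risk", "concern", "blocker"),
}

def transcript_to_report(transcript: str) -> StatusReport:
    """Bucket each transcript line into decisions, actions, or risks."""
    report = StatusReport()
    for line in transcript.splitlines():
        text = line.strip()
        if not text:
            continue
        lowered = text.lower()
        for bucket, words in KEYWORDS.items():
            if any(w in lowered for w in words):
                getattr(report, bucket).append(text)
                break
    return report
```

Even a toy version like this makes the episode's point: one decent output is easy, but a repeatable capability needs explicit rules, review, and ownership around the model call.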

    55 min
  2. 15/12/2025

    Ep #7: Enterprise AI Field Notes: Live cohort for Evals Careers, AI Trust, Governance + Spicy News!

    Hosts: Srini Annamaraju & David Royle.

    “Evals are the weak link in enterprise AI adoption.” And we say it like it is in our Maven cohort Lightning Lesson. Enrol here or see the recording — or join the waitlist for the paid 4-part course (tba): https://shorturl.at/lA9ig

    This episode is a proper grilling on AI evals: what they are, why boards should care, and why “ship it now, eval it later” is how you end up with a quiet disaster. We also do a quick sweep on vendors going more “enterprise-native” (less benchmark theatre, more workflow reality).

    What we cover

    Enterprise AI news: vendors shifting from benchmarks to enterprise workflows
    OpenAI’s Enterprise report highlights UiPath as the “plug-in hybrid” of automation: deterministic RPA meets GenAI via connectors (and why that blend might win)
    What evals actually are: accuracy, citations, groundedness, hallucinations
    Vendor reality: some push AI first and worry about evals later, others oversell eval tooling. Error analysis still matters
    Evals as the connective tissue between value, risk, and operations.
    Proactive, not post-mortem-after-the-horses-bolted
    The EDSO “four hats” operating model (Echo, Delta, Sigma, Omega) and why boards need the Omega translation layer
    Maturity and scaling: small firms can fuse hats, even one-person pods for bounded scopes
    Agentic future: “checker agents”, Delta agents writing eval harnesses, humans steering fleets of agents
    Why SMEs lag, and how eval expectations will percolate through supply chains

    Chapters

    00:02 Intro: Episode 7, cold UK afternoon, messy middle of enterprise AI
    00:56 AI news: enterprise context is the new battleground
    02:45 OpenAI Enterprise report headlines
    10:16 UiPath, hybrid automation, and the “plug-in hybrid” analogy
    12:53 The grilling starts: what are evals?
    17:02 Is AI risk being exaggerated to sell governance tools?
    19:45 Evals as connective tissue, and why proactive matters
    21:55 The EDSO roles and what “good” looks like
    25:21 Maturity levels and how smaller firms scope it
    26:58 Checker agents and agentic operating models
    28:58 Business case problem: cost vs avoided disaster
    32:14 Evals in SMEs and supply-chain pressure
    33:26 Close: “survived the grilling”

    Takeaways

    Evals are not paperwork. They’re how you keep the value chain connected to operations without risk blowing up later.
    Don’t let vendors sell you “tooling-as-a-substitute-for-thinking.” You still need human error analysis and clear accountability.
    Treat EDSO as hats, not headcount. Start bounded, prove value, then scale.
    Evals is becoming a career lane (think “AI eval controller” the way finance has controllers).
    The agentic world will add “checker agents” and automated harness-writing, but humans still steer the system.

    Who it’s for

    CIOs, CDOs, CAIOs, Heads of Risk, and anyone trying to ship enterprise AI without quietly lighting their control environment on fire. Also, anyone building a real career edge around AI trust and operational quality.
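Evals, in the sense discussed in this episode, are systematic checks over model outputs rather than vibes. As a toy illustration only (the function names are invented, and a crude word-overlap check stands in for real groundedness tooling), a minimal harness might look like:

```python
def grounded(answer: str, source: str) -> bool:
    """Crude groundedness check: every sentence in the answer must
    share at least one word with the source document. Real eval
    tooling uses far stronger checks (citations, entailment, etc.)."""
    src_words = set(source.lower().split())
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return all(set(s.lower().split()) & src_words for s in sentences)

def run_evals(cases):
    """cases: (answer, source) pairs. Returns the fraction that pass."""
    results = [grounded(answer, source) for answer, source in cases]
    return sum(results) / len(results)
```

The point of even a sketch this small is the operating habit: a fixed battery of checks that runs before and after shipping, so drift shows up as a falling pass rate rather than a quiet disaster.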

    38 min
  3. 25/11/2025

    Ep #6: Enterprise AI Field Notes: Shadow AI, Fwd Deployed Eval Engrs, AI Drift, Board Governance

    Hosts: Srini Annamaraju & David Royle.

    “The AI bubble is the wrong fear.” The real threat sits inside your own walls: shadow AI you don’t see, boards that confuse risk aversion with risk management, and leaders trying to govern a technology they don’t actually understand. We unpack why mid-market boards are exposed, how shadow AI reveals the truth about how your org really works, and what an actually realistic 12-month AI plan looks like. And yes — why people, not models, are now the biggest AI risk vector. The conversation revolves around a recent paper that David authored; a link to the post with the details is here.

    What we cover

    Bubble noise vs fundamentals - Valuations swing wildly, but enterprise AI maturity rises daily. We explain why the market noise has nothing to do with the technology reshaping your org.
    Shadow AI as diagnosis - It’s not a tooling problem but a symptom of mismatched expectations.
    Boards: from passive listeners to owners - Why literacy is step zero, and why chairs need to move fast.
    Risk aversion trap - The boards that “get it” flip from “should we?” to “how quickly, safely, and visibly can we?”
    90-day governance playbook - Inventory → Validate → Govern.
    Top-down vs bottom-up AI - How grassroots use cases and board-led operating models collide.
    12-month reality check - You won’t be AI-first in a year. But you can be an AI-literate, AI-safe, AI-enabled organisation in 12 months.
    Explainability anxiety - Why boards demand transparency from AI they never asked of spreadsheets or humans.
    The uncomfortable truth - The biggest AI risk isn’t the model. It’s your people.
    Evals preview - Why audits, trust contracts, drift checks, and forward-deployed evaluators will soon be board-level concerns.
    Chapters

    AI bubble vs enterprise fundamentals
    Shadow AI as a symptom
    Boards falling behind
    Risk aversion vs risk management
    90-day governance plan
    A realistic 12-month AI horizon
    The real AI risk: people
    Intro to enterprise evals

    Takeaways

    Shadow AI is a mirror - it reveals gaps in culture, process, and leadership direction, not tooling.
    Boards must lead, not observe - active literacy and ownership are key.
    Governance is the stabiliser - inventories, validations, guardrails, and oversight reduce drift and exposure.
    Explainability is contextual - set boundaries, not magic expectations.
    People are the attack surface - don’t miss non-malicious misuse.
    12 months = foundations - literacy, safety, and one high-value use case per function. That’s the win.

    Who it’s for

    Board members, CEOs, COOs, CIOs, CROs, and mid-market operators needing a grounded, real-world view of AI risk, governance, and organisational maturity.

    Help Spread the Word

    Enjoyed the episode? Follow the show, leave a review, and share with a colleague grappling with shadow AI, governance gaps, or board-level AI decisions. Want to join as a guest or sponsor a future episode? Get in touch!
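The 90-day playbook discussed in this episode runs Inventory → Validate → Govern. Purely to make that shape concrete (the data model and stage names below follow the episode's three steps, but the code itself is our invention, not the hosts' framework), a minimal use-case register might look like:

```python
from dataclasses import dataclass

# The three playbook stages, in order.
STAGES = ("inventory", "validate", "govern")

@dataclass
class AIUseCase:
    name: str
    owner: str
    stage: str = "inventory"  # everything starts as a discovered item

    def advance(self) -> None:
        """Move the use case to the next playbook stage (stops at govern)."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]

def ungoverned(register):
    """Names of use cases discovered but not yet under governance:
    this is the shadow-AI exposure the board should be tracking."""
    return [u.name for u in register if u.stage != "govern"]
```

The value of a register like this is less the code than the discipline: shadow AI stops being invisible the moment every discovered use case has a name, an owner, and a stage.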

    37 min
  4. 04/11/2025

    Ep #5: Enterprise AI Field Notes: AI Job Shifts, Micro-Creds, Brave New Orgs, AI in SMEs, AgentOps

    Hosts: Srini Annamaraju & David Royle.

    “AI kills jobs” is the wrong headline. The real story is structural: org pyramids flatten into diamonds, managers run fleets of agents, SMEs unlock backlogs without hiring sprees, and skills go modular with micro-credentials. We break down what changes now — and how to lead it without face-planting.

    What we cover

    Jobs vs. roles: Why the entry-level layer thins, the manager layer thickens, and how to redesign spans of control when agents do the doing.
    Agents on a spectrum: Start with human-in-the-loop, graduate to AgentOps. Where to set autonomy today, what to monitor, and how to keep audits, drift checks, and safety rails sane.
    Backlog > headcount: Use AI to attack the work you never had people for — deterministic, high-volume tasks that finally move the needle.
    Operational resilience: Outages and dependency chains aren’t hypotheticals. We outline layered BCP/DR for an agentic stack so one failure doesn’t cascade.
    Early-career paradox: Apprenticeships still matter — how to select, coach, and rotate juniors in a world with fewer traditional entry roles.
    Skills that rise: Cognitive prompting, judgment, people leadership — and why short, role-tied micro-credentials beat semester-long generalities.
    SME timing & tactics: Where mid-market buyers actually are on the curve, what to build vs. buy, and how to avoid “pilot purgatory.”

    Chapters

    Jobs headline vs ground truth
    From pyramid to diamond orgs
    Agents, autonomy, and HITL → AgentOps
    Managing hybrid teams (humans + agents)
    Resilience playbook for outages and dependencies
    Early-career design: apprenticeships, reverse mentoring
    Micro-credentials and fast upskilling
    What SMEs should do this quarter

    Takeaways

    Jobs aren’t vanishing; roles are morphing. Plan for fewer juniors, more AI-enabled managers, and explicit oversight of agent fleets.
    Governance is the unlock. Treat agents like teammates with performance records, audits, and clear escalation paths.
    Resilience is strategy. Design for failure before agents touch critical workflows.
    Upskill in sprints. Tie micro-credentials to roles, not buzzwords.

    Who it’s for

    Operators, CTOs/CIOs, and line leaders who need practical steps to reshape teams, govern agentic workflows, and build real resilience — especially in SMEs.

    Help Spread the Word

    Enjoyed the episode? Follow the show, leave a quick review, and share with a colleague wrestling with agent governance or workforce design. Interested in joining as a guest or sponsoring a future episode? Get in touch.
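The HITL-to-AgentOps spectrum discussed in this episode comes down to deciding which actions an agent may take on its own. As a hedged sketch only (the risk scores, the threshold, and the `approve` callback are illustrative assumptions, not a recommended policy), an autonomy gate might look like:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    description: str
    risk: float  # 0.0 (trivial) to 1.0 (critical); how risk is scored is assumed

def execute(action: AgentAction,
            autonomy_threshold: float,
            approve: Callable[[AgentAction], bool]) -> str:
    """Run low-risk actions autonomously; escalate the rest to a human.
    Raising autonomy_threshold over time is the HITL -> AgentOps journey."""
    if action.risk <= autonomy_threshold:
        return "executed"
    return "executed" if approve(action) else "blocked"
```

The design point matches the episode: autonomy is a dial, not a switch. You start with the threshold near zero (everything escalates), and only raise it as audits and drift checks build evidence that the agent earns the extra rope.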

    37 min
  5. 28/09/2025

    Ep #2: Enterprise AI Field Notes: Traps, Hard Truths, Hallucinations, Shadow AI, and the 3 AI Rooms

    Hosts Srini Annamaraju and David Royle are back for episode two – and yes, the feedback is in. Some said we were a bit too serious last time. We’ll try not to become a Sunday love songs show, but we’re working on upping the “gags per minute.”

    This week’s conversation covers:

    Listener reach & feedback: Almost 100 plays already, with listeners tuning in from the UK, US, India, and even Slovenia. The appetite is real for discussions that go beyond hype and get into the messy middle of enterprise AI.
    Event notes from Big Data London: A buzzing show, but still very tech-heavy. We debate whether AI conversations are stuck in the IT lane, and why that’s a problem when the real impact is business-wide.
    NBER study on ChatGPT usage: 700M weekly users. Surprisingly, 70% of usage is personal rather than work, with a heavy skew toward under-26s. We unpack what that means for adoption inside enterprises.
    US tech investment in the UK: Nvidia and OpenAI committing eye-watering sums (hundreds of billions over time). A rare bit of good economic news for the UK, with national implications for jobs, productivity, and independence from US/China dominance.
    Enterprise field news:
    - Citi experimenting with agentic AI for wealth advisors, using Claude and Gemini inside secure workspaces.
    - FT analysis showing CEOs hype AI on earnings calls, but get risk-heavy and muted in regulatory filings.
    - JLR cyberattack fallout: £3.5B revenue hit, no cyber insurance in place. Knock-on effects on suppliers and the supply chain.
    The “three rooms” where AI decisions get made:
    - C-suite (value, governance, risk)
    - Technical teams (architecture, data quality, safe design)
    - Operations (ongoing management, compliance, usage quality)
    Traps to avoid:
    - The Whac-a-Mole Trap – hallucinations never disappear, they just reduce.
    - The Origami Trap – clever prompts aren’t a moat; without guardrails, they fold fast.
    - The IT-Only Trap – AI left to technologists will fail; business P&L owners need to lead.
    - The Corporate DNA Trap – over-automating risks erasing what makes your org unique.
    Shadow AI is real: Even if companies ban AI tools, staff use them on personal devices. Risks around leakage and compliance multiply.

    We close with a look ahead:

    How frontier model labs (OpenAI, Cohere, Mistral, etc.) are approaching enterprise go-to-market.
    Real use cases from our own client work – what’s working, what’s not.

    Next steps for listeners: Got topics you’d like us to cover? Message us on LinkedIn. The more specific, the better – we’ll dig in and bring field notes to the next episode.

    31 min
  6. 11/09/2025

    Ep #1: Enterprise AI Field Notes: AI’s Messy Middle: Jobs, Hype, and Hard Choices

    Episode 1 — No Effing AIdea! AI ethics, job disruption, GenAI “failures,” the AI bubble, India SMB adoption, coding risks, consultants falling behind, biotech breakthroughs, and AI in education — all collide in our first episode of No Effing AIdea!

    Welcome to the first episode. We are David Royle and Srini Annamaraju. We set the tone with a stark cold open: new Stanford data shows a 13% drop in jobs for 22–25-year-olds in AI-vulnerable roles since ChatGPT launched.

    Then, in our Reality Check, we unpack the last two weeks of enterprise AI news with a fresh lens:

    MIT’s “95% failure” GenAI claim — and why that’s too simple.
    Why the so-called AI bubble might actually be good for business.
    Reliance & Meta’s $100M JV bringing enterprise AI to India’s SMBs.
    AI coding tools: 30% faster, but 2x more vulnerabilities.
    Big consultants left behind by in-house AI adoption.
    Stanford’s autonomous AI lab slashing drug discovery timelines.
    Khan Academy’s Khanmigo AI tutor bringing hope to 180M learners worldwide.

    Our Deep Dive asks: why does AI suddenly get its own moral panic when cloud, ERP, and digital never did? We explore what “ethics” really means for enterprises today:

    How ethics shows up on the P&L — as fines, lawsuits, and PR disasters.
    Where ethics must live in the AI stack to avoid “governance theatre.”
    The trade-offs leaders underestimate — speed vs. trust, open vs. proprietary.
    A pragmatic three-step ethical readiness checklist for 2025.

    Finally, in The Paradox Box, we tackle three dilemmas from the field:

    Boards demanding ROI and revolution at the same time.
    Compliance vs. engineers in the race for velocity.
    Customers rebelling against brilliance.

    Listen in for pragmatic tactics, not theatre — and a candid take on why AI ethics isn’t an afterthought. It’s the seatbelt that lets enterprises drive faster.

    1h 5m
