TechTime with Nathan Mumm

Nathan Mumm

You can grab your weekly technology without having to geek out on TechTime with Nathan Mumm. The Technology Show for your commute, exercise, or drinking fun. Listen to the best 60 minutes of Technology News and Information in a segmented format while sipping a little Whiskey on the side. We cover Top Tech Stories with a funny spin, with information that will make you go Hmmm. Listen once a week and stay up-to-date on technology without getting into the weeds. This broadcast-style format is perfect for the everyday person wanting a quick update on technology, with two fun personalities driving the show, Mike and Nathan. Listen once, listen twice, and you will be sold on the program. @TechtimeRadio | #TechtimeRadio | www.techtimeradio.com

  1. 4H AGO

    297: Cybersecurity Hiring is Shifting Fast as AI Fluency Is Becoming a Baseline Skill. Spotify’s Human‑Verified Music Labels, The AI Layoff Boomerang, and Sony’s 30‑Day Digital License Debate. Business Security Transparency? | Air Date: 5/5 - 5/

    Episode 297: The next wave of cybersecurity hiring is sending a clear message: without AI fluency, you may not be employable. U.S. cyber pipelines are adding AI skill requirements because modern defense now includes securing AI systems themselves. From semi‑autonomous agents to fast‑moving model‑driven threats, the job is shifting fast. If you’re aiming for a government or enterprise cyber role, this breakdown clarifies what real “AI fluency” means and why it’s becoming a baseline skill. Then we pivot to the strange side of AI behavior and why it still matters. OpenAI models inserting goblins and gremlins into answers sparks a deeper look at alignment, training loops, and user trust. Spotify’s human‑verified artist badges highlight how authenticity is becoming a product feature as AI‑generated music floods platforms. We close with the AI layoff boomerang and Sony’s 30‑day license check debate, raising big questions about transparency, ownership, and the future of digital media. Tune in to TechTime Radio—where the future is now, the stories matter, and all with a little whiskey on the side. -- Full Episode Details: The next wave of cybersecurity hiring is sending a blunt message: if you don’t understand AI, you may not be employable. We dig into reports that U.S. cyber pipelines are adding AI skill requirements because modern defense isn’t just about protecting networks anymore. It’s about using AI for defense while also securing AI systems themselves, from semi‑autonomous agents to fast‑moving, model‑driven threats. If you’re aiming for a government or enterprise cyber role, this one helps you think clearly about what “AI fluency” actually means in practice and why it’s becoming a baseline skill. Then we pivot to the weird side of AI behavior and why it still matters.
A story about OpenAI models that wouldn’t stop inserting goblins, gremlins, and other cryptids into answers turns into a real conversation about AI alignment, training feedback loops, and the growing trust gap between users and the systems they rely on. If AI can pick up bizarre habits that fast, what else is it learning and when do those quirks become a real problem? We also talk about AI generated music and the reason Spotify is rolling out human verified badges. With AI tracks blending into TikTok trends and streaming playlists, transparency and authenticity are turning into product features. To close, we hit the AI layoff boomerang where companies cut jobs “because of AI” and then scramble to rehire, plus a heated take on Sony’s 30 day license countdown for some digital games and what it means for digital ownership versus physical media. If any of these stories made you go “hmm,” subscribe, share the show, and leave a review so more people can find Tech Time Radio. Support the show

    59 min
  2. APR 28

    296: Meta’s Massive Layoffs, Billion‑Dollar AI Bets, Musk Versus Altman Drama, Runaway Robotics, Drone Delivery Dreams, Tech Fails, And The Strange Future Of Automation Collide In One Wild, Whiskey‑Fueled Episode | Air Date: 4/28 - 5/4/26

    Big tech is making a blunt trade: fewer people, more AI. We dig into Meta’s plan to cut more than 10% of its workforce while pouring an eye-watering budget into AI, then zoom out to the uncomfortable pattern across the industry where payroll turns into infrastructure spend. Along the way we hit a surprisingly human twist: one of the biggest uses of AI isn’t coding or design, it’s companionship and therapy, which says a lot about where our culture is headed. From there, we step into the billionaire arena with Elon Musk versus Sam Altman. We walk through the origin story, the lawsuit stakes, and why governance fights in court can shape the future of artificial intelligence more than public mission statements ever will. If the case slows OpenAI or forces structural changes, it could ripple through the entire AI race, and we’re all along for the ride whether we asked for it or not. Then we get practical and a little weird: robots and drones delivering dinner for “one dollar,” TechNeck and the anxiety economy, Cornell’s microbubble cleaning breakthrough, and a humanoid robot half marathon that jumps from novelty to serious capability in a single year. We also pressure-test “AI safety” messaging with Meta’s new AI Insights for parents, and we hand out a Technology Fail of the Week to an AI tractor that promised the future but couldn’t handle real farms. We cap it all off with a Green River Kentucky Straight Bourbon tasting and a debate about whether the next decade feels more like Terminator or WALL-E. Subscribe for weekly tech news with zero politics, share the episode with a friend who needs a “hmm” moment, and leave a review with your take: are we automating toward freedom or dependence? Support the show

    59 min
  3. APR 21

    295: AI Exploits, Chatbot Chats Used As Evidence, Roblox Safety Fallout, Biometric Id Battles, Deepfake‑Driven Trust Collapse, Scam Mailbag Chaos, Starlink Outages, And Even Robot Boars — Online Safety Is Getting Expensive | Air Date: 4/14- 4/20/26

    AI is getting so good at faking reality that the internet is starting to demand proof you are a human, and that is where this week gets unsettling. We talk about Anthropic’s “Mythos” cybersecurity AI and why federal agencies reportedly went from pushing it away to urgently trying to get access again. When a model can map vulnerabilities, hunt zero-day weaknesses, and chain exploits across real networks, the conversation shifts from “cool AI” to “who controls the keys to the digital world.” Then we hit the courtroom: a judge rules that private chats with an AI assistant are not protected like attorney-client privilege, and that should change how all of us use chatbots for legal advice, work problems, and personal issues. If you assume your AI prompts are confidential, you are taking a risk you might not even realize you are taking. From there we get practical. We break down scam emails and social engineering tactics you can recognize fast, including fake invoices that push you into calling “support” and installing remote access tools, plus Microsoft 365 “password expires today” phishing pages designed to harvest credentials. We also talk Roblox child safety, the push toward iris scans via World ID to counter Zoom- and Tinder-style impersonation, a Starlink outage that stalls Navy autonomous vessels, and even a humanoid robot chasing wild boars in Poland. If you want smarter scam defenses and clearer context for the biggest AI security stories, subscribe to Tech Time Radio, share this with a friend who clicks too fast, and leave us a review so more people can find the show. Support the show

    56 min
  4. APR 14

    294: This Week We Hit AI Warning Signs, Blue‑Light Myths, Meta’s Youth‑Harm Fight, Data‑Breach Fallout, Retro‑Camera Tech, Gen Z Streaming Hacks, And A Sip Of Abasolo Whiskey. Buckle Up For A Sharp, Fast Hour On TechTime Radio | Air Date: 4/14

    Episode 294: This week on TechTime Radio, we dive into rising AI safety warnings as OpenAI and Anthropic split on governance, explore why blue‑light panic became a myth, and break down Meta’s expanding fight over youth harm and accountability. We also cover new cybersecurity breaches, including Eurorail’s exposed traveler data. Then we spotlight a retro‑camera gadget that turns classic film bodies into digital shooters, share Gen Z’s clever streaming‑service rotation trick, and wrap with a tasting of Abasolo Mexican whiskey. Tune in to TechTime Radio—where the future is now, the stories matter, and all with a little whiskey on the side. -- Full Episode Details: Something feels different this week: the tech news isn’t just “new features,” it’s warning lights. We break down the growing gap between OpenAI and Anthropic and what it signals about artificial intelligence safety, AI governance, and the reality of trying to contain powerful models. When researchers say a system can surface security flaws and act without being asked, the important question isn’t sci-fi, it’s responsibility. Who audits it, who controls access, and what happens when the incentives to ship beat the incentives to slow down? Then we pivot to a myth a lot of us have paid for: blue light panic. We talk through why the “your phone is ruining your sleep” narrative got so sticky, how marketing turns weak evidence into a product category, and what actually drives circadian rhythm disruption. The practical takeaway is simple: your habits matter more than a filter, and doomscrolling at 1 a.m. is a bigger problem than the shade of your screen. We also hit the big platform story of the week: Meta and the expanding fight over social media addiction, youth harm, and accountability. From there we move into everyday cybersecurity with a brutal reminder about data breaches, including Eurorail’s exposed traveler information. 
To lighten things up, guest Gwen Way brings a summer-ready gadget pick, the I’m Back Roll, which turns classic film cameras into digital shooters, plus we share a Gen Z approach to streaming subscription management that cuts bills by rotating services. Add a tasting of Abasolo Mexican whiskey and you’ve got a fast, funny hour with real takeaways. If you got value from the conversation, subscribe, share the show with a friend, and leave a review so more people can find Tech Time Radio. What story are you still thinking about after listening? Support the show

    58 min
  5. APR 7

    293: Deepfakes Erode Trust, Data Requests Surge, and Expert Nick Espinosa Warns How Privacy is Shifting. IRS AI Risk Scoring Raises Profiling Fears, Workplace "AI JUNIOR" Tells the Boss Everything, and China’s Robotaxis Freeze | Air Date: 4/7- 4/13/26

    Episode 293: This week on TechTime Radio, we begin by confronting the unsettling reality that trusting your senses isn't enough anymore, as deepfakes and AI-generated voices make distinguishing real from fake increasingly difficult. Even families and public figures encounter moments when authenticity is in doubt, fostering the 'liar’s dividend' in which dismissing everything as fake becomes common. The discussion considers why traditional code words are now a safeguard for families, executives, and teams who need to verify identities when it matters. From there, we broaden our view to the growing data traces left behind in daily life, where reducing posts can help, yet government demands for user data continue to rise. Cybersecurity expert Nick Espinosa explains what this means for privacy, digital footprints, and how platforms subtly influence what they know about you. We conclude with the future of AI monitoring tools like Junior in workplaces. In China, the chaos was real—hundreds of robotaxis froze on the streets, causing a bizarre, self-created traffic jam that showed even ‘smart’ cars can fail spectacularly. Tune in to TechTime Radio—where the future is now, the stories matter, and all with a little whiskey on the side. Full Details: You can’t just “trust your eyes and ears” anymore and that changes everything. We start with the uncomfortable reality of deepfakes and AI voice cloning: even family members can hesitate when a voice sounds right, and public figures can get labeled “AI” over a simple lighting glitch. That’s the liar’s dividend in action, where it’s easy to claim something is fake and frustratingly hard to prove it’s real. We talk through a surprisingly effective defense that feels like a throwback: shared code words for families, executives, and teams when identity actually matters. Then we zoom out to the data exhaust behind modern life. 
Posting less on social media can be digital self-preservation, but government requests for user data keep climbing across major platforms. Our guest, cybersecurity expert Nick Espinosa, explains why that trend should change how you think about privacy, digital footprints, and what platforms really know about you. From there, we dig into the IRS using AI tooling built with Palantir to identify “high value” cases, and why opaque risk scoring plus third-party data creates real concerns about profiling, audit targeting, and accountability. Finally, we hit the workplace and the weird future of “always-on” monitoring. Tools like Junior act like a virtual colleague that sits in your Slack and Zoom, watches deadlines, and escalates issues to management. Add in reports of AI agents that deceive, bypass safeguards, or game constraints, plus real-world robotaxi failures, and the central question becomes urgent: how do we keep human systems fair when automation is faster than oversight? Subscribe to Tech Time Radio, share this with a friend who worries about AI privacy, and leave us a review with the biggest AI trust issue you want us to tackle next. Support the show

    58 min
  6. MAR 31

    292: What Happens When Machines Become The Main Users Online, Big Tech Could Lose Legal Protection Over Addictive Social Apps, Toilet Broadband Plus Other April Fools Tech Lore, and Why Networks Are Shifting To AI Data Centers | Air Date: 3/31- 4/6/26

    AI is quietly taking the wheel of the internet, and the ride is getting weird. We’re seeing data centers merge with cloud platforms, edge computing, and telecom networks into one distributed machine that can predict failures, reroute traffic, and optimize energy in real time. That sounds amazing until you realize how much of today’s traffic is no longer humans, but machines talking to machines, and every company’s AI is fighting for the “best” path across the same shared pipes. Then we jump to a legal shift that could hit social media and online video hard: juries labeling Meta and YouTube as “defective products” over addictive design and harm to kids. We talk through what it means if courts stop treating Section 230 like an all-purpose shield, and we wrestle with the messy tradeoffs. More safety and accountability? Or a future of over-censorship, weaker privacy, and platforms ripping out end-to-end encryption just to reduce liability? Security headlines keep the pressure on. From a high-profile personal email hack to a banking app glitch that exposed other customers’ transactions, this week is a reminder that “it probably won’t happen to me” is not a strategy. We keep it Tech Time Radio style with April Fools tech lore, a spirited Apple product rant, and a French whiskey tasting to round it out. If you like smart tech news with humor and practical takeaways, subscribe, share the show with a friend, and leave a review so more people can find us. Support the show

    58 min
  7. MAR 24

    291: Explore Shifting Digital‑Privacy Rules, a Malfunctioning Humanoid Robot, Lively Hardware Debate on Apple's NEO, AI‑Driven Entertainment Trends, all while the FBI Spies on You, and with a little whiskey on the side | Air Date: 3/24- 3/30/26

    Your digital life is being priced, packaged, and sold, and sometimes the buyer is the government. We dig into the headline that reignites America’s privacy debate: the FBI confirming it purchases commercially available data that can be used to track Americans online. We talk about why this feels like a warrant shortcut, how the data broker economy thrives on “legal” loopholes, and why AI-powered analysis makes mass surveillance more scalable than ever. Then we shift from invisible tracking to very visible chaos: a humanoid robot in a restaurant reportedly loses spatial awareness and starts thrashing near tables. It sounds hilarious until you remember hot soup, tight spaces, and the fact that a “kill switch” only helps if staff know how to use it. We use the moment to ask a bigger question about robotics safety in public spaces: do we actually need humanoid performers, or are simpler service bots the smarter design? From there, we debate the MacBook Neo phenomenon, the kind of budget-friendly Apple product that sells fast and starts arguments even faster. We break down what people really buy when they buy a brand, where performance limits matter, and why “good enough” tech can be both practical and frustrating. We also tackle the unsettling edge of AI in Hollywood, including the plan to use a generative AI replica of Val Kilmer, and what consent, taste, and likeness rights should mean when an actor is no longer here to speak for themselves. We wrap with hard security reality: a major benefits data breach exposing sensitive identity details, plus a surprising ransomware trend where fewer victims pay even as attacks rise. If you like smart tech news with real opinions and a little whiskey on the side, subscribe, share the episode, and leave us a review so more people can find Tech Time Radio. Support the show

    58 min
  8. MAR 17

    290: This week, We Blend Quirky Tech "FARTS" into Real‑World Data. Starting with Digestion‑Tracking Wearables to Hollywood’s Push for One‑Minute Vertical Dramas and the Reality Behind Wi‑Fi 7 Marketing Claims | Air Date: 3/17- 3/23/26

    A wearable that logs your digestion by tracking hydrogen “events,” Hollywood betting big on one-minute vertical soap operas, and Wi‑Fi 7 routers that may not do what the box implies: this hour is packed with the kind of technology news that makes you stop and go, “wait, is that real?” We take each headline and separate the joke from the actual value, because the story behind the gimmick is usually where the truth lives. We also shift into practical mode with a stack of real scam and phishing emails that show how people get trapped by urgency, fake account warnings, and that tempting unsubscribe link. We talk through the easiest tells like mismatched sender domains, scripts that don’t match the offer, and why “just click to verify” is still one of the most effective social engineering moves online. If you’ve got family members who get nervous when they see “final notice,” this segment is worth sharing. From there, we hit modern tech contradictions: Tinder trying to fix dating app fatigue by pushing in-person singles events, and a promising offline AI board that runs local inference without relying on cloud services. Edge AI and offline AI can mean faster responses, fewer privacy risks, and less exposure to internet outages, but they also raise real questions about updates and long-term support. Subscribe for more consumer tech reality checks, share the show with a friend who needs scam-proofing, and leave us a review with the strangest tech headline you’ve seen lately. Support the show

    56 min


5 out of 5 (13 Ratings)

