SlatorPod

SlatorPod is the weekly language industry podcast where we discuss the most important news and trends in translation, localization, interpreting, and language AI. Brought to you by Slator.com.

  1. #281 What Is AI Audio Separation with AudioShake CEO Jessica Powell

    18H AGO

    #281 What Is AI Audio Separation with AudioShake CEO Jessica Powell

Jessica Powell, CEO of AudioShake, joins SlatorPod to talk about how AI-powered audio separation is making audio more usable for both human and machine workflows, and enabling new use cases across localization, broadcasting, and media production. Jessica emphasizes that early traction came from the music industry, particularly in areas like sync licensing and remixing. However, the company’s expansion into film and television happened organically as new use cases emerged. The CEO explains that AudioShake’s core technology uses source separation to break complex audio into individual components such as dialogue, music, and sound effects. She describes how this allows users to gain precise control over audio for tasks like editing, transcription, and multilingual dubbing. In localization, Jessica highlights how separating dialogue from music-and-effects (M&E) tracks enables both traditional dubbing and AI-assisted workflows, particularly for legacy content where original stems are unavailable. Beyond localization, Jessica underscores the importance of clean audio inputs for speech recognition systems. In noisy environments like sports broadcasts or unscripted content, separating dialogue before transcription significantly improves accuracy. Jessica also reflects on the broader AI landscape, noting that the rise of generative AI has increased awareness of audio as a critical modality. However, she distinguishes AudioShake’s work as non-generative, focused on extracting structure rather than creating new content. The CEO discusses the current funding environment in the Bay Area and how the investor narrative has evolved leading up to AudioShake’s late 2025 Series A. Looking ahead, Jessica points to real-time processing and copyright-compliant audio editing as key areas of innovation, as the company continues to expand its role in media and AI ecosystems.

    39 min
  2. #280 Walmart Cuts Translation Costs, 10 LTP Growth Hacks

    MAR 13

    #280 Walmart Cuts Translation Costs, 10 LTP Growth Hacks

Daniel Sebesta joins Florian and Esther on the pod to talk about the latest language industry news, AI translation developments, and key insights from the Slator Pro Guide: Growth Hacks for Language Technology Platforms (LTPs). The trio begin with TransPerfect’s latest financial results, which showed USD 1.32 billion in revenue, up 7% year on year. They also discuss leadership changes at Straker, where founder Grant Straker stepped down as CEO after more than 25 years. Florian shares new AI-powered contextual features in Google Translate that allow users to refine translations and adjust tone or phrasing. Daniel believes these interactive capabilities aim to improve trust in AI systems by giving users more visibility and control over translation outputs. The discussion also turns to ElevenLabs and its partnership with Deutsche Telekom to embed live translation into phone calls. The integration could enable real-time multilingual conversations, summaries, and contextual assistance for telecom customers. The trio then cover Walmart’s internal AI localization initiative, where the system now translates millions of catalog items across 22 languages while reducing translation costs by about 99%. Daniel concludes by outlining the Growth Hacks Pro Guide, which explores strategies for scaling LTPs. He highlights areas such as go-to-market strategy, partnerships with language solutions integrators, enterprise sales execution, and security readiness as key drivers of scalable growth.

    43 min
  3. #279 Why Phrase Doubles Down on a Platform Strategy with CEO Georg Ell

    MAR 10

    #279 Why Phrase Doubles Down on a Platform Strategy with CEO Georg Ell

Georg Ell, CEO of Phrase, returns to SlatorPod for round 3 to talk about how the language technology platform (LTP) is evolving amid the AI boom and the shifting dynamics in enterprise SaaS. Georg shares how Phrase has doubled down on a platform and ecosystem strategy that encourages customers to build solutions on top of the LTP rather than being locked into a closed system. The CEO addresses the broader AI narrative affecting SaaS companies and explains that investor uncertainty about long-term software value has created anxiety across the sector. Georg argues that the AI boom has triggered a “build vs buy” debate inside many enterprises, with engineering teams experimenting with internal solutions. He explains how the gap between building a demo and running a reliable, scalable system is where most internal projects fail. Georg notes that core AI translation quality improvements seem to be plateauing, but AI continues to significantly enhance the layers surrounding translation. He highlights improvements in context handling, evaluation, automated post-editing, and orchestration that allow companies to translate more content at lower human review rates. The CEO says localization must move beyond cost reduction narratives and instead focus on business outcomes such as hiring efficiency, support performance, and revenue metrics. Georg predicts 2026 will bring more production-grade AI applications, including personalization, multimodal content, and automation across the enterprise. He believes language technology will be framed as content adaptation and delivery rather than simply translation.

    50 min
  4. #276 ChatGPT Translate and Weird Prompts

    JAN 30

    #276 ChatGPT Translate and Weird Prompts

    Florian and Esther discuss the language industry news of the past few weeks, starting with senior hires in revenue and operations at DeepL and what this signals about the LTP’s next phase. The duo then turns to new data from AI labs and hyperscalers, where Florian highlights findings from Anthropic’s research showing AI is settling into a support role rather than full automation, with usage concentrated around review and validation, and humans remaining firmly in the loop. On the consumer side, Esther points to Microsoft Copilot data showing translation and language learning as one of the most common everyday AI use cases. Florian flags Adobe’s new “Translate this PDF” feature, where formatting was the main issue rather than translation accuracy. The conversation then shifts to infrastructure, where Florian emphasizes how NVIDIA is positioning itself at the center of real-time multilingual voice ecosystems by open-sourcing models while driving demand for its hardware. The duo unpacks OpenAI’s quiet launch of ChatGPT Translate. Esther notes that reactions have been mixed, with many seeing the interface as basic, while Florian stresses the strategic importance of the move. Then the two disagree on whether or not the AI’s default prompt to make the translation sound “more fluent” makes any sense. Esther walks through recent M&A activity and funding rounds, highlighting acquisitions in Europe and the US alongside major raises by Synthesia, Deepgram, and reportedly ElevenLabs. Florian concludes with a look at an S-1 filing by a tiny company, using it as an example of how the US capital markets accommodate everything from billion-dollar AI firms to survival-stage experiments.

    37 min
  5. #275 The Future of Language and Translation Education with JC Penet and Joss Moorkens

    JAN 16

    #275 The Future of Language and Translation Education with JC Penet and Joss Moorkens

JC Penet, Reader in Translation Industry Studies at Newcastle University, and Joss Moorkens, Associate Professor at DCU, join SlatorPod to talk about the new open-access book Teaching translation in the age of generative AI: New paradigm, new learning? The duo explains how large language models (LLMs) have a different impact than earlier machine translation breakthroughs as they generate human-like text, respond to prompts, and adapt output to context. Public hype around LLMs has affected demand for some translators and fueled misconceptions around the value of studying translation. Even so, JC and Joss stress that translation education must adapt. JC outlines how students need to assess whether output is appropriate for purpose, audience, risk, and context. This places greater importance on skills such as selection, evaluation, and effective prompting, while still relying on core linguistic and cultural competence. Joss adds that this shift reflects real industry practice, where different content types already receive different levels of automation and human involvement. Drawing on healthcare research, he highlights how AI can outperform traditional workflows in some contexts but fail badly in others, especially across languages with uneven data coverage. Joss also highlights ethical blind spots that arise when performance metrics dominate decision-making. He describes a “triple bottom line” approach that weighs people, planet, and performance equally. On fears of de-skilling, JC argues that excluding AI from classrooms poses a greater risk. Without guided engagement, students may use tools uncritically or fail to develop AI literacy altogether. Joss points to initiatives such as LT-LiDER, an Erasmus+ project designed to build AI literacy among educators. Looking ahead, the duo contends that studying languages and translation remains valuable because it develops deep reading, critical thinking, intercultural awareness, and adaptability.

    50 min
4.3 out of 5 (6 Ratings)