Humans of Martech

Phil Gamache

Future-proofing the humans behind the tech. Follow Phil Gamache and Darrell Alfonso on their mission to help future-proof the humans behind the tech and have successful careers in the constantly expanding universe of martech.

  1. 218: Tata Maytesyan: Build a marketing career that survives AI as a deep generalist

    6 DAYS AGO

    218: Tata Maytesyan: Build a marketing career that survives AI as a deep generalist

What's up everyone, today we have the pleasure of sitting down with Tata Maytesyan, Growth Consultant, Keynote Speaker, and AI Trainer.

(00:00) - Intro (01:08) - In This Episode (01:46) - Sponsor: GrowthLoop (02:50) - Sponsor: Attribution App (04:34) - Which Marketing Tasks Are Actually Worth Automating (13:07) - Why Deep Generalists Outperform Channel Specialists in Marketing (26:07) - Sponsor: MoEngage (27:04) - Sponsor: Knak (35:06) - Why Marketing Org Charts Are Not Getting Flatter (43:01) - Why Change Management Determines Whether AI Adoption Actually Sticks (48:03) - The Fear of Automating Yourself Out of a Job (53:13) - The Voice Diary Technique for Tracking Your Own Energy at Work

Summary: Tata Maytesyan runs an AI bootcamp for marketers on Maven and consults with scaling companies across Europe. In this episode, she breaks down why the best AI automation targets are the boring, repeatable tasks nobody talks about on LinkedIn, and why the specialist-to-generalist shift in marketing is already happening whether your org chart reflects it or not. She also gets direct about what's really going on inside companies claiming to go flat, the 100-hour threshold for building genuine competence across domains, and the self-preservation fear she hears from leaders every week. If you have ever wondered whether you are building your career around the right foundations, this episode is worth your full attention.

About Tata Maytesyan

Tata Maytesyan is the founder and CEO of Grow Global Tech, where she builds AI-powered marketing systems for tech scale-ups and runs a hands-on AI bootcamp for marketers on Maven. She spent 15+ years leading growth inside Nike, Deloitte, and Picsart, including a stint as Head of Product Strategy and Operations for Picsart's content and AI division, a platform with over 100 million monthly active users. She has since advised more than 40 companies across 12 countries on go-to-market strategy and AI adoption, and consults primarily with CMOs and CEOs at companies between a few million and $200 million in annual revenue.

Which Marketing Tasks Are Actually Worth Automating

The wrong starting point for AI adoption in marketing is inspiration. Most marketers scroll LinkedIn for jaw-dropping use cases: ad creative generated at scale, competitive analysis in 10 minutes, entire campaign briefs written by agents. It looks impressive. It's also almost never applicable to your specific job on any given Tuesday.

Tata has spent years watching this pattern play out with consulting clients and bootcamp students. Her fix is deliberately boring. At the start of every engagement, she asks everyone in the room to close their AI tools. Then she opens Miro and maps how the team actually works. From there, she runs 3 questions against every process on the board: how often the task repeats, how acceptable an imperfect output would be, and whether it's something you actually enjoy doing.

Those 3 questions quietly eliminate most of what people think they want to automate. Frequency kills off exciting-but-rare workflows not worth touching. Risk tolerance separates contexts where imperfect output is acceptable (most content tasks) from those where it isn't. Tata advises a healthcare client where certain work is patient-facing, and mistakes there carry real consequences. The enjoyment filter protects the parts of the job people actually like, because automating something you love is just spending money to make work less interesting.
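For a concrete feel of that triage, here is a minimal sketch of the three-question filter as code. The task names, fields, and frequency threshold are invented for illustration; this is not tooling from the episode.

```python
# Hypothetical triage helper for Tata's three questions. Task names, fields,
# and the frequency threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    runs_per_month: int   # how often does it repeat?
    mistakes_ok: bool     # is an imperfect output acceptable?
    enjoyable: bool       # do you actually like doing it?

def automation_candidates(tasks: list[Task], min_runs: int = 4) -> list[Task]:
    # Frequent + low-risk + low-joy is the right first target.
    return [t for t in tasks
            if t.runs_per_month >= min_runs and t.mistakes_ok and not t.enjoyable]

board = [
    Task("Pull LinkedIn post metrics into Notion", 20, True, False),
    Task("Patient-facing healthcare copy", 8, False, False),   # risk filter kills it
    Task("Quarterly brand narrative", 1, True, True),          # rare, and you love it
]
for task in automation_candidates(board):
    print("Automate first:", task.name)
```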
Her own example from the day this episode was recorded: she built a script to pull LinkedIn post metrics (impressions, comments, likes) into Notion. Before that, an assistant handled it. Before that, she did it herself. She describes the task with open contempt, which makes it the perfect candidate: something done constantly, where imperfect output is acceptable, and which costs zero joy to hand off. She calls it "boring is sexy." "Figure out the workflow you do repeatedly, and then if mistakes are manageable and you're okay with them, delegate and automate with AI."

People get frustrated when they hear this. You show up to a bootcamp or hire a consultant expecting to leave with something impressive. Instead someone hands you a whiteboard. But Tata is direct about the tradeoff: "It takes time and it slows you down, sort of feels like it slows you down. In fact, it speeds you up."

The same logic applies to how people first explore AI tools. Pure tinkering has value: testing a new model, playing with a capability outside any work context. That's curiosity, and it's worth protecting. But when something needs to work reliably in your actual job, setup is non-negotiable: context files, folder structure, clear instructions. The AI can't fill in what you don't give it. The most durable AI workflows come from people who got honest about which parts of their week are boring, repetitive, and low-stakes. LinkedIn will give you inspiration. Your Miro board will give you your actual starting point.

Key takeaway: Map your actual workflow before opening any AI tool. For each repeated task, ask whether mistakes are acceptable and whether you actually enjoy doing it. Frequent, low-risk, low-joy work is the right first target. Build from there.

Why Deep Generalists Outperform Channel Specialists in Marketing

There's a running debate in marketing about whether to go deep in a specialty or build broad across domains. The specialist argument has genuine weight: if you've never actually run an SEO campaign, how do you know when an AI is confidently producing garbage? Tata sees the point. She also thinks the framing is wrong. Specialization built around channels is the vulnerability, and channels keep changing.

Her term for what marketers should actually become is "deep generalist," a phrase she found on the internet and adopted because it captures something the T-shaped marketer framework mostly misses. A deep generalist has real expertise in at least 1 domain but deliberately builds breadth around it. The depth is still there. The difference is the deliberate horizontal stretch.

She watches this compression play out in her bootcamp every cohort. At the start of cohort 6, a participant said her team of 4 had been cut to just her. As the remaining content writer, she was now responsible for everything: SEO, social, website, the whole thing. That's not a future prediction. It's already the operational reality for a large share of the marketing workforce, and the people who trained deep in a single channel with no adjacent experience are the ones struggling most.

The channel argument is where Tata's case gets sharper. An "SEO specialist" built around Google search has a real problem now that AI Overviews are reshaping how search works. Nobody building a "TikTok specialist" career a few years ago expected it to become a top-performing B2B SaaS ad channel. But one VP of business development recently told Tata that's exactly what's happening at their company. Channels are fluid.
Betting deep on any specific one locks you into an increasingly narrow position.

Her own example: at Picsart, one division had no SEO function and no budget for an agency. Tata spent 2 months doing the SEO work herself, learning enough to direct AI through the process. When the business eventually hired an SEO agency, the agency was impressed by what was already in place. She had put in enough time to know what good SEO looked like and how to direct AI against that standard effectively.

The underlying skill that makes all of this work is judgment. Generating an image is table stakes. Knowing whether it's good, whether it fits, whether an agent's output is trustworthy enough to use: those require domain awareness that a speciali...

    56 min
  2. 217: How to interview a company before you take the job (The Martech job hunt survival guide, part 3)

    28 APR

    217: How to interview a company before you take the job (The Martech job hunt survival guide, part 3)

Summary: This episode closes Phil and Darrell's 3-part series on the marketing ops job market with the question they've been building toward: what do you ask the company? Darrell shares a firsthand account of taking a job under financial pressure, ignoring red flags he recognized in the moment, and landing in a toxic environment within months. What follows is a structured set of interview questions across 6 categories, from leadership self-awareness to what happened to the last person in the role, designed to help you separate the job offer from the job reality. If the only question you've ever asked at the end of an interview was about growth opportunities, this episode is going to change how you think about that conversation.

In This Episode: (00:00) - Intro (01:09) - In This Episode (01:42) - Sponsor: MoEngage (02:40) - Sponsor: Knak (06:06) - What to Figure Out Before You Ask a Single Interview Question (12:19) - How to Test a Hiring Manager's Self-Awareness in a Single Question (18:14) - How to Find Out If a Hiring Manager Can Handle Being Wrong (24:37) - Sponsor: GrowthLoop (25:41) - Sponsor: Mammoth Growth (26:46) - Why "When Did You Last Take a Vacation?" Is the Most Revealing Culture Question (32:09) - How to Find Out If a Company Sticks to Its Priorities or Changes Them Every Quarter (36:31) - How to Find Out What a Marketing Ops Role Actually Requires Before You Accept It (46:04) - Why Fear in a Peer Interview Is the Red Flag You Should Never Ignore

What to Figure Out Before You Ask a Single Interview Question

The US healthcare system has a way of making bad career decisions feel necessary. When you're laid off with a family depending on employer-sponsored coverage, the clock starts immediately. Every week without an offer is another week closer to COBRA. That pressure doesn't make people irrational. It makes the math of a job offer feel different than it normally would.

Darrell Alfonso was in that position last year. A few months after getting laid off, he received what looked like a career comeback: a higher title, more responsibility, better pay, and benefits. The package was attractive enough that he pushed aside doubts surfacing during the process. He knew some things felt off. He took the job anyway. Within 2 months, he was having near-anxiety attacks, sleeping poorly, and barely present with his family. He left quickly. He has no regrets.

Most interview prep points in a single direction: getting the offer. Candidates research companies, rehearse answers, and practice looking calm under pressure. The harder question, whether the offer is worth taking, gets almost no airtime. Phil frames this episode as being for people with enough options to ask both. That might mean multiple offers in play, the ability to keep searching while still employed, or simply enough runway to be selective. If you're in survival mode, some of this will still apply. But the questions work best when you have the leverage to actually act on the answers you get.

Before choosing which questions to ask, decide what you're trying to find out. Phil and Darrell use what makes you happy at work as the starting filter. For some people it's ownership and interesting problems. For others it's stability, predictable hours, or family-friendly flexibility. Darrell puts the manager relationship at the top. Your boss marks your performance, sets your priorities, and shapes whether it feels safe to admit you're stuck or struggling.
Career advice tends to understate how much that single variable determines whether someone thrives or burns out, regardless of how strong everything else looks on paper. The candidates who ask the sharpest questions are usually the ones who did that harder internal work first.

Key takeaway: Before your next round of interviews, write down 3 things that would make you miserable in a role. Be specific: not "bad culture" but things like "a boss who overrides my work constantly" or "no flexibility on hours." Use that list as your filter when deciding which questions to prioritize. If a company can't answer those 3 things in a way that gives you confidence, the decision gets harder than it needs to be.

How to Test a Hiring Manager's Self-Awareness in a Single Question

The most common reason people leave jobs is their manager. That gets cited often but rarely changes how candidates behave in interviews. Most people assess for chemistry from the vibe of the conversation, look for red flags in the standard answers, and hope the hiring manager turns out to be reasonable. Phil uses a more deliberate approach. His bank of questions for probing leadership self-awareness: What's something leadership got wrong in the last year? What feedback do you get most often as a hiring manager? What decision would you revisit if you could? What's changed about how you lead over time? What's something you're still figuring out about your leadership style?

The first one does the most work. Every leadership team makes mistakes. If a hiring manager can't name one, they're either hiding something or genuinely can't reflect on their own decisions. The answer that matters isn't the mistake itself. It's whether they can describe it clearly, explain what they took from it, and say what changed.

Darrell pushes the same idea with a different angle: ask what issues a hiring manager has had with a former leader, or with a former direct report. If the answer sounds carefully managed, nothing too specific, nothing too negative, that polish is informative. People who have actually led teams through difficult stretches can name them. They have timelines, outcomes, and lessons. Vague answers suggest either limited experience or a preference for impression management over honesty.

Phil's version of the final question in this category is direct: describe your worst boss ever, and why were they the worst? A hiring manager who answers with a real story, including what it cost their team and how they changed as a result, is giving you the most reliable signal available in a 30-minute conversation. Darrell used a version of this in a recent interview. He was upfront with his prospective boss about coming from a toxic environment. She responded by citing 2 specific bosses who had made her professional life difficult, described what each one got wrong, and connected it to how she tries to lead now. That answer built more confidence than the rest of the process combined.

Leadership self-awareness is a practice developed through confronting moments where instincts were wrong and the team paid for it. The managers worth working for have had those moments and can talk about them specifically. The ones who can't usually haven't processed them.

Key takeaway: Ask your next hiring manager: "What's something leadership got wrong in the last year?" Write down the answer verbatim as soon as the conversation ends. If the response is vague, hedged, or completely absent, you now have a data point that no amount of external research could give you.
The managers worth working for have made real mistakes and can describe them specifically.

How to Find Out If a Hiring Manager Can Handle Being Wrong

There's a version of leadership that gets tolerated more than it should: the manager who hires people with deep expertise and then ignores them. The org chart implies delegation. The day-to-day contradicts it. You spend months delivering work that gets overridden by someone who hired you for your judgment and then second-guesses every call you make. Phil's set of questions for this goes directly at the pattern. Rather than asking whether a hiring manager is open to feedback in the abstract, ask for a specific instance: can you describe a time when s...

    54 min
  3. 216: How to stand out as a candidate with AI prep, portfolios and tools (The Martech job hunt survival guide, part 2)

    21 APR

    216: How to stand out as a candidate with AI prep, portfolios and tools (The Martech job hunt survival guide, part 2)

What’s up everyone, today we continue with part 2 of a 3 part series we’re calling The Martech Job Hunt Survival Guide. Part 2 is: How to stand out as a candidate with AI prep, portfolios and tools.

Summary: Phil and Darrell spent this episode breaking down what actually moves the needle when you’re searching for a role: building the portfolio that almost no marketing ops professional bothers to save, navigating the AI experience question, knowing when to take a contract role instead of holding out, and skipping the AI job-search tools that make you look like everyone else. The honest observations from Darrell’s own recent job search make this one worth listening to, including why the colleagues most reluctant to make a lateral move are still searching months later.

In this Episode… (00:00) - Intro (01:01) - In This Episode (01:30) - Sponsor: Mammoth Growth (02:36) - Sponsor: GrowthLoop (05:24) - Why Hiring Managers Can't Actually Evaluate Your AI Experience (08:26) - How to Build a Marketing Ops Portfolio When Your Work Is Buried in Tools (17:56) - Why Creating LinkedIn Content Works Even When Nobody Is Watching (25:32) - What Hiring Managers Notice First on Your LinkedIn Profile (30:10) - Sponsor: Knak (31:13) - Sponsor: MoEngage (34:13) - Why Contract Work Is a Strategic Move for Marketing Ops Job Seekers Right Now (44:02) - Which Job Search Tools Help and Which Ones Waste Your Time (56:18) - How a Video Introduction or Visual Resume Gets You Into the Next Round

Why Hiring Managers Can't Actually Evaluate Your AI Experience

Every marketing ops job posting in 2026 has the same line buried somewhere in the requirements: "proven experience delivering results with AI." Walk into any interview and within the first few minutes someone will ask you to describe what you've actually done with it. That question sounds reasonable until you realize the person asking usually has no idea what a good answer looks like.

Darrell came out of a recent job search with a clear read on this. The interview questions had shifted entirely. The old MarTech interview, the one that asks about your tool stack and campaign history, has been replaced. AI is now the primary filter. Companies want proof of results. But AI-driven marketing ops, as an actual practice, barely existed 3 years ago. Phil put the absurdity into 4 words: "5 years of AI experience." Everyone in hiring knows it's a joke. They're writing it anyway.

The talent pool has gotten harder at the same time. Amazon's most recent layoffs displaced over 10,000 people. Layoffs at Google and across the broader tech sector added more. You're competing against that cohort now, which means the undifferentiated application is in worse shape than it's ever been. Everything has to be sharper.

But the opening Darrell is pointing at is real. The hiring managers writing "proven AI experience required" often can't define what good AI usage looks like for a marketing ops role. They're expressing a priority while lacking any rubric to test it. When they ask the interview question, they're listening for someone who sounds like they know what they're talking about. Most candidates coming through don't. You feel it during prep, that uncomfortable awareness that you don't know exactly what they want from you. The honest truth is they don't either.

That gap is yours. Research what AI actually does in marketing ops workflows: lead scoring automation, campaign orchestration, data governance, intent signal processing. Build one small example if you have the time.
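For a sense of scale, "one small example" can be genuinely small. Here is a toy sketch of the lead-scoring flavor of automation; every field name and weight is invented, and a real model would be learned rather than hand-tuned.

```python
# A toy "small example": rule-based lead scoring you could build in an afternoon.
# Field names and weights are invented — the point is having an artifact you can
# walk through in an interview, not a production model.
def score_lead(lead: dict) -> int:
    score = 0
    if lead.get("title") in {"VP Marketing", "Head of Growth", "Marketing Ops Manager"}:
        score += 20                                            # ICP title match
    if lead.get("employees", 0) >= 200:
        score += 15                                            # company-size fit
    score += 10 * min(lead.get("pricing_page_visits", 0), 3)   # capped intent signal
    if lead.get("demo_requested"):
        score += 25                                            # hand-raiser
    return score

lead = {"title": "VP Marketing", "employees": 450, "pricing_page_visits": 2}
print(score_lead(lead))  # 55 — route anything above, say, 40 straight to sales
```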
Frame your existing work in terms of where AI would fit and how you'd measure it. Darrell's framing: you can position as a credible AI enthusiast with very little preparation, because the bar inside most marketing orgs is low and most candidates aren't clearing it. The industry required AI fluency before building any way to evaluate it. That's not a problem. For candidates willing to do the homework most skip, it's the whole advantage.

Key takeaway: Research 3 specific AI use cases in marketing ops before your next interview: lead scoring automation, campaign workflow agents, and CRM data deduplication are good starting points. Prepare one concrete story connecting one of them to work you've done or would do. If you haven't built anything yet, describe the workflow you'd build and how you'd measure its impact. Candidates who speak specifically and confidently about AI applications win these conversations, because they're often the only ones in the room who prepared.

How to Build a Marketing Ops Portfolio When Your Work Is Buried in Tools

Most marketing ops professionals have spent years doing meaningful, complex work. They've built lead scoring models, managed platform migrations, architected multi-channel campaign workflows. And if you asked them to show you any of it in an interview, most couldn't. The templates are gone. The diagrams were never made. The results are a rough number someone mentioned once in a meeting.

Darrell has sat on the interviewer side of enough conversations to be direct: the portfolio problem in marketing ops is almost universal. Candidates describe their work verbally, and the person asking often can't follow it. There's nothing to point to, nothing to walk through, nothing that makes the experience tangible. In a field full of technical, visual, process-driven work, almost no one has anything to show. The bar to stand out is genuinely low.

Darrell's starting point: if you've built a custom GPT, a Google Gem, or a basic AI agent using Zapier, that alone puts you ahead of most candidates. It takes about 10 minutes to build one. It demonstrates something concrete about how you think and work. The same logic applies to documentation that almost no company does well: a clean diagram of your current or former tech stack, before-and-after views of a migration you led, a lead scoring template, a product requirements document for a tool evaluation. These are ordinary outputs of the job. Almost no one saves them.

Phil's preferred format is the case study. Take a project you led, strip the confidential details, and walk through it as if you were an outside consultant brought in to solve the problem. What was the situation before you arrived? What did you do? What did it look like after? Specific numbers and percentages help, but they're not required. A clean diagram showing a tech stack before and after a migration, or a flow chart of a campaign workflow you built, communicates competence without a single metric.

For quantifying impact when the numbers are murky, Darrell's suggestion is to use AI to reverse-engineer the math. If you cut campaign launch time by 20%, work backward through campaigns per quarter, leads generated, and pipeline influenced. You can build an intelligent, defensible estimate, and most candidates don't even try.
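Here is what that reverse-engineering can look like as back-of-envelope arithmetic. Every input is a placeholder to replace with your own numbers; the output is a defensible estimate, not a measured result.

```python
# Reverse-engineering impact, working backward from a "20% faster launches" claim.
# Every input below is a placeholder — swap in your own numbers.
campaigns_per_quarter = 30
hours_per_launch = 10
time_saved_pct = 0.20        # the 20% launch-time cut
loaded_hourly_rate = 75      # salary + overhead per ops hour

hours_saved = campaigns_per_quarter * hours_per_launch * time_saved_pct
dollars_saved = hours_saved * loaded_hourly_rate
print(f"{hours_saved:.0f} hours/quarter ≈ ${dollars_saved:,.0f}/quarter reclaimed")
# 60 hours/quarter ≈ $4,500/quarter — before counting what those hours produced
```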
The format doesn't need to be elaborate. A Google Slides deck linked from your resume, tracked with a Bitly vanity URL so you can see who opens it, is more than enough. The bigger benefit of building a portfolio at all is what it does to your interview prep. Reviewing your own work, articulating outcomes, distilling a project into a problem-action-result narrative means you've already done the thinking before anyone asks the question. Phil's point: the exercise of building the portfolio and the exercise of preparing for interviews are the same exercise.

Key takeaway: Start with your most recent project and build one case study: the problem you walked into, what you built or changed, and the measurable outcome. Add a tech stack diagram if you don't have one. Link both as a Google Slides deck from your resume and track opens with a Bitly URL. Even a basic portfolio puts you in ...

    1hr 1min
  4. 215: How to find hidden job opportunities (The Martech job hunt survival guide, part 1)

    14 APR

    215: How to find hidden job opportunities (The Martech job hunt survival guide, part 1)

What's up everyone, today we kick off part 1 of a 3 part series we’re calling The Martech Job Hunt Survival Guide. Part 1 is: How to find hidden job opportunities.

In This Episode: (00:00) - Intro (01:23) - In This Episode (01:59) - Sponsor: Knak (03:06) - Sponsor: MoEngage (04:49) - Why Getting Laid Off Is Always a Business Decision (08:52) - Building Career Security Before You Need It (10:51) - Networking Before You Need Anyone's Help (24:29) - Building Your Dream Company List Before the Job Search Starts (27:04) - Sponsor: Mammoth Growth (28:07) - Sponsor: RevenueHero (29:01) - Why AI Side Projects Give You a Real Edge in Job Interviews (34:50) - The 2026 Martech Job Market Reality Check (42:18) - The Ashby Search Hack and Why Referrals Beat Job Applications (49:58) - Hidden Job Boards and Staffing Firms Most Candidates Ignore (53:52) - Finding Martech Jobs at Stealth Startups Using VC Funding Alerts (55:31) - Why Fast-Growing Martech Agencies Are an Underrated Hiring Path

Summary: This episode is a full playbook for martech and marketing ops professionals navigating one of the toughest job markets in years. Phil and Darrell cover what to build before you ever need a job: network, dream company lists, freelance income, AI side projects. Then it shifts to the tactical mechanics of finding roles most candidates never see. From the Ashby Google search hack to VC job boards, staffing firm pipelines, and stealth startup cold outreach, the counterintuitive moves are the most useful ones here. If you're currently employed, the early chapters are for you. If you're already searching, skip ahead.

Why Getting Laid Off Is Always a Business Decision

Being laid off in 2025 wasn't rare. It was practically routine. Amazon cut thousands. Friends pinged Darrell with news of their own layoffs while he was still processing his own. He knew what was happening across the industry. He was prepared. And then it happened anyway, and prepared turned out not to mean what he thought it meant.

What comes next is something anyone who's been through it will recognize. Identity goes first. The role you've spent years building, the thing that answers "so what do you do?" at every party, just disappears overnight. Then the financial math kicks in. Darrell got 4 months of severance, which is a genuinely good outcome, and it still wasn't as much as it sounded like when he heard the number. But underneath all of it is the worst part: the replay loop. If only you'd been more visible. If only you'd taken a different project. If only you'd made that one relationship work.

Darrell puts it plainly. Being laid off is a business decision, full stop, regardless of your performance, your visibility, or who liked you. The evidence arrived a few weeks later, when he found out that top performers across his entire organization had been cut, including his direct manager, someone who was by any measure visible, impactful, and doing everything right. When your boss gets laid off, there was nothing you could have done.

"If you've never been laid off before, you can't help but think it's your fault. The big feeling is: if only I had done something different, if only I was more visible, if only I had taken a different project. And that is just 100% not true. It is all business decisions."

Phil's been there. He's been let go in his own career and knows exactly how the severance window tricks you. You have a little runway, so you tell yourself you'll take a month to decompress before getting back into it. That's the trap.
The best advice Darrell got came from friends who had already navigated their own layoffs, and it was blunt: don't take a break. His instinct was to take a month off, maybe 2, then ease back in. The people who'd lived it told him something he didn't want to hear: it's going to take exactly that long just to get into pipelines. And while you're recovering, everyone else with the same resume and the same experience is making the same choice. You're competing against thousands of people who were also good at their jobs and also got laid off. Darrell ran full steam ahead instead. He ended up with 2 offers. How quickly you start matters more than how long you prepared. The people who figure that out in the first 48 hours have a real structural advantage over everyone else grieving on their couch.

Key takeaway: Start your job search the week you're laid off. Reach out to friends who've been through it, get your materials ready, and get into pipelines immediately. Everyone else is planning to take a few weeks off first, and that gap is your only real competitive edge right now.

Building Career Security Before You Need It

Most people don't think about their next job until they lose the current one. Full-time employment gives you a title, a salary, benefits, and a professional identity anchored to a single org chart, and the company can end any of that without notice. Career security is something you build separately, independent of any employer. Here's how Phil thinks about it: an active network you can activate, something generating income outside your primary job, and a professional reputation that doesn't disappear when the org chart does. The companies that describe themselves as families are also the ones making headcount decisions when the numbers stop working. Your actual family is at home.

This leads to 3 strategies Phil argues everyone in martech should be pursuing whether or not they're actively looking. First: nurture your network. Second: follow your dream companies. Third: do something outside your 9 to 5, whether freelancing, a side project, or anything with even a small amount of income attached. These aren't strategies for when you're in trouble. They're strategies for ensuring you never are. Roles in martech and marketing ops are among the first cut when companies reduce overhead. Operators who treat their current employment as permanent are more exposed than they realize.

"I've always been a fan of this stoic concept called the pre-mortem. In good times, imagine the worst-case scenario and work out what you'd do. I had always thought: what would happen if I lost my job? I knew I had a big network. So I wasn't as worried. But for many people, networking isn't something you do regularly."

Key takeaway: Treat career security as a separate goal from job security. Map 3 building blocks: an active network, at least 1 income source outside your employer, and a professional identity that exists independent of your title. Start with whichever is weakest right now.

Networking Before You Need Anyone's Help

The most common networking advice is to reach out when you're looking for work. Darrell's approach is the opposite: build and maintain relationships constantly, so the network already exists before you ever need it. "I'm always asking for help. If you're figuring out how to integrate a tool, if you're figuring out why your database is so messed up, how are you not reaching out to people in a similar job and saying, hey, what are you doing?
Because that, to me, is just a waste of time not to do that." His entry point is the Stoic concept of the pre-mortem. In good times, imagine the worst case scenario and work out what you'd do. For Darrell, that meant regularly asking himself what would happen if he lost his job tomorrow. Because he'd been building across martech and marketing ops communities for years before his layoff, the answer was already in place. He acknowledges most people don't approach it this way. What his version of ongoing networking looks like is less grand than the concep...

    58 min
  5. 214: Austin Hay: Claude Code is creating a new class of elite marketers and the mental models that make it click

    7 APR

    214: Austin Hay: Claude Code is creating a new class of elite marketers and the mental models that make it click

What's up everyone, today we have the pleasure of sitting down with Austin Hay, Martech, Revtech, and GTM systems advisor, AND – AI builder, writer, and ex-founder.

In This Episode: (00:00) - Austin-audio (01:16) - In This Episode (01:54) - Sponsor: RevenueHero (02:48) - Sponsor: Mammoth Growth (04:09) - How Code-Driven AI Workflows Outperform Chat-Based Prompting (14:55) - How to Start Building With Claude Code When You Have No Time (19:45) - The Programming Concepts Non-Developers Need to Build With Claude Code (23:49) - How to Turn Repeating Prompts Into Automations That Run Themselves (31:11) - Sponsor: MoEngage (32:07) - Sponsor: Knak (33:37) - Why Spending All Your Time in Meetings Is a Career Liability (36:28) - Why the Best First Claude Code Project Is the Task That Already Annoys You (40:22) - Why T-Shaped Marketers With Claude Code Will Cover the Work of Entire Teams (46:27) - Why Marketing Taste Matters More Than Technical Skill in the AI Era (49:43) - How Early-Career Professionals Build Judgment When Entry-Level Work Gets Automated (53:14) - How Austin Hay Runs His Career as a Flywheel

Austin Hay has spent 15 years moving between the technical and strategic ends of marketing, starting as the 4th employee at Branch, building and selling a mobile growth consultancy that was acqui-hired by mParticle, and eventually rising to VP of Growth before moving on to Ramp as Head of Martech. He later co-founded Clarify, a CRM startup he took from zero to $100K+ ARR while completing a Wharton MBA. Today he works as a fractional advisor to scaling companies on martech, revtech, and GTM systems, teaches thousands of practitioners through his Martech course at Reforge, and writes the Growth Stack Mafia newsletter on Substack.

Austin spent months as a chatbot skeptic before Claude Code changed his view entirely. In this conversation, he maps the gap between using AI through a chat interface and wielding it as code in your actual environment, explains why meeting-heavy schedules are a compounding career liability, and makes the case for a new class of professional he calls the white collar super saiyan.

---

## How Code-Driven AI Workflows Outperform Chat-Based Prompting

Most marketers use AI the same way they used Google in 2005. Open the interface, type something in, read what comes back, copy it somewhere. Austin Hay did this for months. He was not an early Claude Code adopter. He says this upfront, almost as a confession. He thought it was another chatbot.

What broke him was specific. He was querying financial data at his startup, Clarify, through Runway, an FP&A platform connected to QuickBooks. Every SQL change required the same round trip: write the query in terminal, copy it to Claude, get feedback, paste it back, run it. He built a folder just to manage the back-and-forth. The model couldn't see his local files. The chat UI had upload limits. He was stuck in what he calls a world of calling and answering. Functional. But slow. And bounded in a way you eventually stop ignoring.

Claude Code gave him access. When you type claude in a terminal, the model reads your actual files — the data as it lives in your repository, not a paste you copied, not a summary you wrote. It runs commands against your system, observes what happens, and acts on the result. The round trip ends. You stop relaying information and start working in the same environment. That is a different thing than a smarter chatbot.
The shift combined with several unlocks arriving at once: Opus as a model, MCPs that worked reliably, a Max plan that made unlimited credits economical, and an agent architecture built around memory files and commands. All of it hit critical mass for Austin in January. He says the last 6 months felt like 3 years. You can hear in how he talks about it that he means it.

The 2 chasms he had written about in his newsletter turned out to be real and distinct. Adopting AI at all is chasm 1. Crossing from chat to code is chasm 2. Most practitioners have cleared the first. Almost none have cleared the second. And the view from the other side, Austin says, is unrecognizable.

> "It's this culmination of many things that I think really hit this critical mass in about January of this year."

Key takeaway: Install Claude Code, open a terminal, point it at a folder with files you actually work with — SQL queries, drafts, data exports, notes — and run a real task on them. The gap between giving AI access to your environment and describing your environment through a chat window is immediate and felt, and that feeling is what changes the mental model.

---

## How to Start Building With Claude Code When You Have No Time

The time problem is real. You have a 9-to-5. Your weekends disappear. Nobody at your company is running AI hackathons. "Learn the command line" is not advice you can act on between your Thursday syncs. Austin doesn't dismiss this. But he points at the part most people miss: they know step 1 (chat interface) and they see step 3 (Claude Code in terminal) and they conclude the gap is too wide. Step 2 exists. And step 2 is where everything clicks.

Anthropic's rollout is layered deliberately. Chat first: ask a question, read the answer, copy the output. Cowork space second: Claude works inside a folder on your computer, local or cloud-based, and you're giving it real files to act on. Coding interface third: terminal, commands, agents. The cowork space is a distinct step with its own payoff. It's where the model stops being a question-answering machine and becomes an environment you work inside.

> "Once people understand that Claude lives in a folder on your computer and you can throw stuff in that folder and have it work for you — that's the next step."

When you upload documents inside a Claude project and ask it to work on them, you learn something you can't get from chat: Claude lives in a folder. It acts on what's in front of it. That sounds obvious. It does not feel obvious until you've done it. And once you feel it, the jump from cowork to terminal starts feeling like a small step forward rather than a cliff.

Where this leads, eventually, is automation that runs without you. A cron job fires at 6am. A script processes your data. A workflow runs in the cloud while you're on a call or asleep. Austin maps the progression clearly: folder on your machine, then a local cron, then a cloud-deployed process that runs continuously (a sketch of that middle step follows at the end of this section). The people building now are building the muscle memory to get there faster. You don't have to start in the deep end. But you have to start somewhere.

Key takeaway: Start in Claude's cowork space, not the terminal. Upload a folder of documents you already work with regularly — meeting notes, a newsletter draft, recurring reports, templates — and ask Claude to perform a real task on them. That interaction builds the foundational mental model before you write a single line of code.
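Here is one way the "local cron" step could look. This is a sketch of the shape, not a recipe from the episode; in particular, the `claude -p` headless print-mode flag and the folder layout are assumptions about the CLI and your setup.

```python
# daily_digest.py — a hypothetical "local cron" step. The `claude -p` headless
# print-mode flag is an assumption about the CLI, not something specified in
# the episode; treat this as a sketch of the shape, not a recipe.
#
# crontab entry (fires at 6:00 every morning):
#   0 6 * * * /usr/bin/python3 /Users/you/reports/daily_digest.py
import subprocess
from pathlib import Path

WORKDIR = Path.home() / "reports"   # the folder Claude "lives in"

result = subprocess.run(
    ["claude", "-p", "Read data/latest.csv and append a 5-bullet summary to digest.md"],
    cwd=WORKDIR, capture_output=True, text=True,
)
with (WORKDIR / "cron.log").open("a") as log:
    log.write(result.stdout + result.stderr)
```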
---

## The Programming Concepts Non-Developers Need to Build With Claude Code

Austin has been saying "learn the command line" for a decade. That advice predates AI by years. The reason it matters now is completely different from the reason it mattered then. The 3 foundations: command line (how computers work), object orientation (how APIs work), one programming language (how the web works). You don't need to master any of them. You need to understand them. Because without that base layer, you can use the tools that exist today, but you can't evaluate what Claude does when it uses them on your behalf.

> "When you have those 3 things, you can teach yourself anything."

That's the real value. When you...

    1hr 3min
  6. 213: John Whalen: The next marketing advantage is pre-testing ideas on synthetic users

    31 MAR

    213: John Whalen: The next marketing advantage is pre-testing ideas on synthetic users

What’s up everyone, today we have the pleasure of sitting down with Dr. John Whalen, Cognitive Scientist, Author, and Founder at Brilliant Experience.

Summary: John has spent his career studying how people actually think, and his conclusion is uncomfortable for anyone who believes their marketing decisions are more rational than they are. In this episode, John explores how synthetic users built from cognitive science principles can fill the massive research gap that most teams quietly ignore, and why removing the human interviewer from the room might be the fastest way to finally hear the truth.

In this Episode… (00:00) - Intro (01:13) - In This Episode (04:31) - What Are Synthetic Users and Why Do They Matter? (10:00) - How Synthetic Users Make Stakeholders Hungry for Real Human Research (15:56) - Pre-Testing on Synthetic Users: Shortcut or Smart Step? (18:53) - How to Actually Build a Synthetic User: Tools, Layers, and Agentic Systems (40:51) - Is the Average Persona Dead? Scale, Diversity, and the World Model (43:01) - Asking the Uncomfortable Questions: What AI Agents Reveal That Humans Won't (49:30) - Ending the Quant vs. Qual Debate with Statistically Relevant Qualitative Data (56:37) - Mining the 'Why' Behind Silent Behavioral Data with Synthetic Users (01:02:31) - Designing for Agent Users: The Coming Shift to Human-and-Machine-Centered Design (01:05:28) - The Happiness Question: Dogs, Nature, and Staying Analog

About John

Dr. John Whalen is a Cognitive Scientist, Author, and Founder of Brilliant Experience, where he applies cognitive science principles to help organizations design products and experiences that align with how people actually think and make decisions. He’s also an educator, teaching two AI customer research courses on Maven. His work explores the intersection of human psychology and marketing, including the emerging practice of pre-testing ideas on synthetic users to give brands a faster and more informed competitive edge. He is also the author of a book on the science of designing for the human mind, bringing academic rigor to practical business challenges.

How Synthetic User Research Works and When to Trust It

Synthetic user research sounds like something creepy out of a dystopian science fiction film, and John is the first to admit the terminology does nobody any favors. When asked what synthetic users actually are and what they mean for research, he admitted: if he had been on the branding team, he would have pushed hard for something like “dynamic personas” instead. The name creates unnecessary friction before the conversation even starts. And that friction matters when you’re trying to get skeptical executives or meticulous researchers to take the whole thing seriously.

Under the hood, specialized AI tools simulate how a defined audience segment would respond to a question, concept, or stimulus, without recruiting, scheduling, incentivizing, or waiting on real human participants. John runs a class where he collects genuine human data first, then feeds comparable inputs into these tools to benchmark accuracy head-to-head. The results are pretty wild. AI-generated responses align with real human findings somewhere between 85% and 100% of the time on major topics and consumer needs. That is not a peer-reviewed clinical trial, and John is not pretending otherwise. But 85% alignment is enough signal to stop reflexively dismissing the method and start asking harder, more specific questions about exactly where it fits into a research stack.
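To make the core mechanic less abstract: at its simplest, a synthetic user is a persona pinned in a system prompt that you then interrogate. This is not John's tooling (specialized platforms layer much more on top); it is a minimal sketch assuming the `anthropic` Python SDK and an ANTHROPIC_API_KEY in the environment.

```python
# Not John's tooling — a minimal illustration of the core mechanic, assuming
# the `anthropic` Python SDK and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

persona = (
    "You are a simulated research participant: a 34-year-old marketing ops "
    "manager at a 200-person B2B SaaS company. You are budget-conscious, "
    "skeptical of new tools, and honest about friction. Stay in character."
)

reply = client.messages.create(
    model="claude-sonnet-4-5",   # model alias may change; use whatever you have
    max_tokens=400,
    system=persona,              # the persona lives in the system prompt
    messages=[{"role": "user",
               "content": "We're considering usage-based pricing. How would you react?"}],
)
print(reply.content[0].text)
```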
So what does this mean for you and your company? Think of all the decisions that currently live in a black hole of zero structured input. How many product calls, campaign concepts, and messaging pivots happen with nothing more than a conference room full of people who all read the same talking heads on LinkedIn? John argues that low cost, round-the-clock accessibility, and minimal public exposure make these tools a natural fit for precisely those moments: pressure-checking a hypothesis at 11pm, testing whether a pitch direction even makes sense before it touches a client, or deciding whether a concept deserves the time and money required for proper validation.

“If these are only going to keep getting better and better, which they are, then logically, what kinds of decisions right now go completely by gut and no research, and what could we use to help us frame that?”

One of the more underappreciated angles John raises is global inclusivity. Large organizations routinely test in the US and Western Europe, then extrapolate those findings to markets in Southeast Asia, Latin America, or Sub-Saharan Africa because local research budgets simply do not exist. Big no-no. Synthetic personas trained on broader, more representative data could at minimum provide directional signals for those markets, making research more geographically honest without a proportional spike in spend. The early AI bias problem, where models essentially mirrored the worldview of a narrow, tech-adjacent demographic slice, was real and well-documented. But training data keeps expanding, and the gap between “Silicon Valley assumption” and “what people in Nairobi or Jakarta actually think” is narrowing in ways that deserve acknowledgment.

Key takeaway: Synthetic user research earns its place not as a replacement for real human data, but as a low-cost, always-available pressure valve for the enormous volume of decisions that currently happen with no research input at all. Before you dismiss it as gimmicky, ask yourself honestly how many of your last ten strategic calls were backed by anything more rigorous than internal consensus.

How Synthetic Users Make Stakeholders Want More Real Human Research

Those big hairy static research decks have a fundamental limitation that anyone who has sat through a stakeholder presentation already understands. You hand over a slide deck, someone reads it, and then three days later they have five more questions you can’t answer without going back to the field. Brutal feeling.

Interrogating a Live Persona

John argues that synthetic users solve this problem in a surprisingly indirect way: when a stakeholder can keep interrogating a live AI persona, the conversation never closes. They start poking at the model, asking things like “would you like this?” or “why would you feel that way about that?” and somewhere in that process, something shifts. They stop treating research as a report and start treating it as a living, always-on thing. What John has observed across a half-dozen client engagements is that this interactivity makes leaders ravenous for it. His team positions synthetic user outputs as directional, explicitly not as data, closer to hypothesis generation than validation. But still hugely valuable.
When a stakeholder gets genuinely excited about a pattern they’re seeing in a synthetic persona, the natural next thought tends to be “if this could actually be true, we need to go test it with real humans.” The synthetic user functions as a preview of the variance you might find in the field, not a substitute for going there.

“Think of this as almost a preview of what you could have with your humans. So you’re being more prepared for what might be to come, what might be the distribution of different responses.”

Instant Reactions

There’s a second use case John describes, about discovering new questions. When a stakeholder first sits down to scope a research project, they often don’t know what they’re actually asking. Spinning up a synthetic user in the room and throwing that rough, half-formed question at it live tends to produce a response the stakehold...

    1hr 8min
  7. 212: Tobias Konitzer: The Causal AI revolution and the boomerang effect in marketing decision science

    24 MAR

    212: Tobias Konitzer: The Causal AI revolution and the boomerang effect in marketing decision science

Summary: Tobi challenged marketing’s fixation on prediction. He has built highly accurate LTV models, but accuracy alone does not move revenue. Marketing is intervention. Correlation shows patterns; causality tells you what happens when you pull a lever. That shift reshapes experimentation, explains why dynamic allocation can outperform static A/B tests, and highlights how self-learning systems can backfire or get stuck in local maxima. It also fuels his skepticism of unleashing agentic AI on historical data without a causal layer. If you want to change outcomes instead of forecast them, your systems need to understand levers and log decisions you can actually audit.

(00:00) - Intro (01:22) - In This Episode (04:07) - Why Predictive Models Fail Without Causal Inference (09:49) - How to Validate Causal Impact on Customer Lifetime Value (13:04) - Reducing Uncertainty Around Causal Effects by Optimizing Levers, Not Labels (17:01) - Why Dynamic Allocation Works Better Than Fixed-Horizon A/B Testing (31:54) - The Boomerang Effect and Why Uninformed AI Sabotages Early Results (40:15) - Escaping Local Maxima and The Failure of Randomly Initialized Decisioning (44:04) - Why Agentic AI Trained on Data Warehouse Correlations Reinforces Bias (49:00) - The Power of Composable Decisioning (53:06) - How Machine Decisioning Transcends Marketing (01:01:41) - Why Clear Priority Hierarchies Improve Executive Decision Making

About Tobias

Tobias Konitzer, PhD is VP of AI at GrowthLoop, where he’s chasing closed-loop marketing powered by reinforcement learning, causality, and agentic systems. He’s spent the past decade focused on one core problem: moving beyond prediction to actually influencing outcomes. Previously, Tobi was Chief Innovation Officer at Fenix Commerce, helping major eCommerce brands modernize checkout and delivery with machine learning. He also founded Ocurate, a venture-backed startup that predicted customer lifetime value to optimize ad bidding in real time, raising $5.5M and scaling to $500K+ ARR before its acquisition. Earlier, he co-founded PredictWise, building psychographic and behavioral targeting models that drove over $2M in revenue. Tobi earned his PhD in Computational Social Science from Stanford and worked at Facebook Research on large-scale ML and bias correction. Originally from Germany and based in the Bay Area since 2013, he writes frequently about causal thinking, machine decisioning, and the future of marketing.

Why Predictive Models Fail Without Causal Inference

Prediction dominates most marketing roadmaps. Teams invest months refining churn models, tightening confidence intervals, and debating which threshold deserves a campaign. Tobi built an entire company on that logic. His team produced highly accurate lifetime value predictions using deep learning and granular event data. The forecasts were sharp. The lift curves were clean. Buyers were impressed. Then lifecycle marketers asked a more uncomfortable question: what action should follow the score?

A predictive model encodes the current trajectory of a customer under existing policies. It describes what will likely happen if nothing changes. Marketing changes things constantly. The moment you intervene, you alter the system that generated the prediction. The forecast reflects yesterday’s conditions, not tomorrow’s strategy.

> “Prediction tells you the future if you do nothing. Causation tells you how to change it.”

Consider the Prediction Trap. On the left, the status quo labels a person as high churn risk.
The function is observation. The outcome is a description of what happens if you leave the system untouched. On the right, a lever gets pulled. The function is intervention. The outcome is directional change. That shift in function changes how you work.

Prediction thinking centers on segmentation:

- Who is likely to churn?
- Who is likely to buy?
- Who looks like high LTV?

Causal thinking centers on levers:

- Which incentive reduces churn?
- Which sequence increases repeat purchase?
- Which offer raises lifetime value incrementally?

Tobi often uses an LTV example to expose the trap. Suppose high LTV customers frequently viewed a specific product early in their journey. A team might redesign the onboarding flow to feature that product more aggressively. The correlation looks persuasive. The causal effect remains unknown. Several alternative explanations could drive the pattern:

- The product may correlate with a specific acquisition channel.
- The product may have been highlighted during a limited campaign.
- The product view may signal prior brand familiarity.

Only an intervention test can estimate incremental impact. Correlation can guide hypothesis generation, but it cannot validate the lever itself.

Tobi also highlights a deeper issue. Acting on predictions introduces compounding uncertainty across multiple layers:

- The predictive model carries statistical variance.
- The translation from model features to campaign strategy introduces interpretation bias.
- The experiment introduces sampling error.
- Execution introduces operational noise.

Each layer adds variability. When teams treat prediction accuracy as the goal, they lose visibility into where uncertainty enters the system. When teams focus on intervention impact, they concentrate measurement on the lever that drives revenue.

Boardrooms already operate in causal language. Incremental ROI is causal. Budget allocation is causal. Executives care about what caused growth, not which segment looked promising in a dashboard. Prediction can inform prioritization. Causal inference determines what to scale. If you want to move in that direction, adjust your operating model:

- Start every initiative with a controllable lever. Define the action before defining the segment.
- Design experiments that isolate the incremental effect of that lever. Randomized or adaptive allocation both estimate causal lift.
- Report impact in revenue, retention, or contribution margin. Tie every experiment to a business outcome.
- Document assumptions and uncertainty. Build institutional memory around what caused change.

Prediction remains useful. Intervention drives growth. Teams that understand that distinction build systems that learn through action instead of watching the future unfold from the sidelines.

Key takeaway: Anchor your marketing engine in causal experiments. For every predictive score, define the specific action it informs, test that action against a control, and quantify incremental lift tied directly to revenue or retention. Replace segment rankings with lever performance dashboards that show effect size, confidence, and business impact. When every campaign answers the question “What did this intervention cause?” your team shifts from observing trajectories to shaping them.
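To make the "test that action against a control" step concrete, here is a toy randomized-holdout lift estimate. The revenue numbers are simulated and the setup is invented for illustration; this is the basic mechanic, not GrowthLoop's stack.

```python
# A toy randomized-holdout lift estimate — simulated revenue, invented numbers.
# The structure is the point: the held-out control tells you what the lever caused.
import random
from statistics import mean

random.seed(7)
treated = [random.gauss(108, 30) for _ in range(5000)]  # got the incentive
control = [random.gauss(100, 30) for _ in range(5000)]  # randomly held out

lift = mean(treated) - mean(control)
print(f"Incremental revenue per customer: ${lift:.2f}")
# Report this effect size, not which segment "looked promising" in a dashboard.
```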
How to Validate Causal Impact on Customer Lifetime Value

Most teams treat high LTV segments as proof of where to spend. The model ranks customers. The top decile looks profitable. Budget flows upward. Tobi described asking the head of CRM at a billion dollar outdoor brand what he does when a model predicts someone will be high LTV. The answer came instantly: Spend more on them, no? That instinct feels responsible. It also confuses observation with intervention.

Introducing the high-LTV fallacy: on the right side of the chart, you see a dense cluster labeled high LTV customers. Revenue increases with marketing spend. The correlation line slopes upward. It looks clean and convincing. The problem: they were going to buy anyway. That cluster may represent customers with higher income, stronger brand affinit...

    1hr 5min
  8. 211: Jenna Kellner: Overcoming frankenstacks and AI uncertainty with first principles and business judgement

    17 MAR

    211: Jenna Kellner: Overcoming frankenstacks and AI uncertainty with first principles and business judgement

What’s up everyone, today we have the pleasure of chatting with Jenna Kellner, VP Marketing at Workleap.

(00:00) - Intro (01:14) - In This Episode (04:30) - How to Manage Marketing Tech Debt During Rapid Growth (10:10) - How to Prioritize RevOps Tech Debt Without Perfect ROI Models (14:23) - Reasoning Through Broken Systems and Imperfect Data (19:23) - How High Performers Progress Anyway (24:28) - How to Build Confidence With AI Through Small Experiments (33:06) - How to Use Exit Planning and Cost Benefit Analysis for AI Tool Selection (35:57) - First principles matter more than tools (38:59) - Why Staying Close to Execution Improves Marketing Leadership (45:13) - Why Critical Thinking Skills Drive Marketing Career Growth (49:33) - How to Build Business Judgment in Technical Marketing Roles (53:03) - Why Confidence Without Humility is Dangerous (55:47) - How Revenue Leaders Prioritize Daily Energy (59:49) - Growing up (01:01:10) - Book rec

Summary: Jenna is a VP of marketing who can talk about the weeds of messy systems, uncertain decisions, and personal growth. You can’t hide from it: every company accumulates tech debt as teams rush to hit revenue targets. She frames tech debt as a leadership responsibility and urges executives to reinvest in core systems when patchwork begins to outweigh building. If leadership doesn’t get it, the best way to prioritize it is to frame it as opportunity cost and lost leverage that will drain revenue the longer you wait. In the face of AI uncertainty, she argues that judgment compounds faster than technical knowledge, and that the marketers who become indispensable blend business awareness, proximity to execution, and decisive action grounded in humility.

About Jenna

Jenna Kellner is Vice President of Marketing at Workleap and a revenue-focused marketing leader who has spent more than a decade building marketing teams and scaling companies. She brings experience across Enterprise, SMB, D2C, SaaS, two-sided marketplaces, venture studios, and other high-growth environments. Her career spans senior leadership roles at Minerva, On Deck, RBCx, and Ownr, where she led marketing, growth, and revenue functions inside complex, evolving organizations. At RBCx, she served as Chief Growth Officer for Ampli and directed marketing and growth initiatives within a large financial institution setting. She has also co-founded communities such as GrowthToronto and Little Traders, reflecting her commitment to building networks and businesses in parallel. Jenna operates with a strong sense of ownership and accountability, grounded in her belief that every challenge ultimately becomes her responsibility to solve. Recognized as a WXN Top 100 Women in Canada, she focuses on developing high-performing teams that connect strategy to execution and translate marketing into measurable revenue impact.

The Frankenstein Reality of Managing Tech Debt: How to Manage Marketing Tech Debt During Rapid Growth

You know it: most marketers are operating inside half-connected systems. No company has a pristine, perfectly synchronized tech stack. Even if they think they do, it doesn’t last. Growth creates pressure, and pressure produces shortcuts. Jenna has seen the same cycle in startups and enterprise environments. In the early days, teams build whatever gets the job done. They start in spreadsheets, layer on point solutions, wire tools together with lightweight integrations, and move fast because revenue matters more than architecture. Those early decisions never disappear.
They compound. Years later, larger organizations inherit layers of systems that were added at different stages of maturity. Tools do not scale in sync. One platform gets upgraded. Another stays frozen because a team depends on it. Reporting becomes an exercise in orchestration. Jenna recalls walking into an organization where a sales leader pulled her weekly report from eight separate tools. That routine consumed time, drained energy, and normalized operational friction.

“You have to Frankenstein your way through them to get the answers you need.”

That sentence captures the daily reality inside many marketing and revenue teams. Quarter-end reporting still happens. Board decks still go out. The numbers get assembled through exports, CSV files, manual joins, and late-night reconciliation. Leadership often tolerates the strain because revenue continues to land. But the cost isn’t super visible:

- Reporting cycles stretch longer each quarter.
- Forecast confidence erodes.
- Team morale dips as manual work expands.
- Strategic decisions rely on partial or inconsistent data.

So how do we get out of this mess? Jenna views this as a leadership obligation. Someone has to decide that cleaning house earns priority alongside pipeline generation. She describes working with a founder who paused other initiatives to repair core systems. The work moved slowly. It required budget discipline and uncomfortable trade-offs. It rebuilt trust in data and freed leaders from cobbled-together dashboards. She compares the stack to a house. Repairs never end, but neglect guarantees structural damage. Leaders choose whether maintenance becomes routine or deferred risk.

Key takeaway: Treat marketing and sales tech debt as a leadership responsibility, not an ops inconvenience. Schedule deliberate cleanup cycles, secure executive buy-in early, and protect time and budget to rebuild core systems before the drag on revenue, morale, and reporting compounds beyond control.

Prioritizing RevOps Tech Debt Without Perfect ROI Models

Just get buy-in to fix all of our tech debt… myeah… sounds great. Good luck convincing your leadership team, who’s off chasing the next AI tool they just read about on LinkedIn. You could assign a dollar figure to it; it doesn’t have to be perfect, a guesstimate works. Someone is building a report by hopping across eight tools, copying fields, reconciling numbers. You can measure the hours. You can attach a salary. You still miss the real cost.

Jenna takes a different approach. She’s not a fan of squeezing every system fix into an artificial ROI model. She focuses on the role RevOps plays in revenue creation. She says it directly: “The job is to enable sales and marketing to find patterns, to hunt better, to run better campaigns and plays, to drive stronger revenue.”

When RevOps becomes a reporting service desk, capacity shrinks. The team spends its energy on maintenance rather than momentum. The opportunity cost compounds quietly. High-leverage work stalls, including:

- Designing sharper segmentation models.
- Identifying conversion bottlenecks across funnel stages.
- Equipping sales with data-driven plays that improve win rates.

You feel the drag in slower experiments and reactive decision making. Pipeline velocity flattens. Leadership wonders why growth feels harder than it should. The urge to quantify every hour saved can trap teams in defensive mode. You start arguing over whether saving ten hours per week justifies a cleanup project. You try to forecast the dollar value of future pattern recognition.
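The visible half of that math is trivially easy to compute, which is part of the trap. A sketch with placeholder numbers:

```python
# The visible half of the tech-debt math — placeholder numbers throughout.
hours_per_week = 10        # time spent Frankensteining the weekly report
loaded_hourly_rate = 85    # salary + overhead for the person doing it
weeks_per_year = 48

visible_cost = hours_per_week * loaded_hourly_rate * weeks_per_year
print(f"Visible cost: ${visible_cost:,}/year")  # Visible cost: $40,800/year
# The invisible half — the segmentation models and sales plays never built — is
# exactly the opportunity cost Jenna says this framing misses.
```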
That debate rarely captures the structural risk of lagging systems. Jenna frames it as a leadership judgment call grounded in timing and context. If headwinds are rising, if competitors are shipping faster, if your team spends more time patching than building, the signal is strong enough. She points to industries that invested early in overhauling core systems. Airlines that modernized their tech stack gained operat...

    1hr 2min
