The Tech Trek

Elevano

The Tech Trek is a podcast for founders, builders, and operators who are in the arena building world class tech companies. Host Amir Bormand sits down with the people responsible for product, engineering, data, and growth and digs into how they ship, who they hire, and what they do when things break. If you want a clear view into how modern startups really get built, from first line of code to traction and scale, this show takes you inside the work.

  1. Cloud Costs vs AI Workloads, The Storage Decisions That Decide Scale

    13H AGO

    Cloud bills are climbing, AI pipelines are exploding, and storage is quietly becoming the bottleneck nobody wants to own. Ugur Tigli, CTO at MinIO, breaks down what actually changes when AI workloads hit your infrastructure, and how teams can keep performance high without letting costs spiral. In this conversation, we get practical about object storage, S3 as the modern standard, what open source really means for security and speed, and why “cloud” is more of an operating model than a place.

    Key takeaways
    • AI multiplies data, not just compute: training and inference create more checkpoints, more versions, more storage pressure
    • Object storage and S3 are simplifying the persistence layer, even as the layers above it get more complex
    • Open source can improve security feedback loops because the community surfaces regressions fast; the real risk is running unsupported, outdated versions
    • Public cloud costs are often less about storage and more about variable charges like egress; many teams move data on prem to regain predictability
    • The bar for infrastructure teams is rising: Kubernetes, modern storage, and AI workflow literacy are becoming table stakes

    Timestamped highlights
    00:00 Why cloud and AI workloads force a fresh look at storage, operating models, and cost control
    00:00 What MinIO is, and why high performance object storage sits at the center of modern data platforms
    01:23 Why MinIO chose open source, and how they balance freedom with commercial reality
    04:08 Open source and security: why faster feedback beats the closed source perception, plus the real risk factor
    09:44 Cloud cost realities: egress, replication, and why “fixed costs” drive many teams back inside their own walls
    15:04 The persistence layer is getting simpler: S3 becomes the standard, while the upper stack gets messier
    18:00 Skills gap: why teams need DevOps plus AIOps thinking to run modern storage at scale
    20:22 What happens to AI costs next: competition, software ecosystem maturity, and why data growth still wins

    A line worth keeping
    “Cloud is not a destination for us, it’s more of an operating model.”

    Pro tips for builders and tech leaders
    • If your AI initiative is still a pilot, track egress and data movement early; that is where “surprise” costs tend to show up
    • Standardize around containerized deployment where possible; it reduces the gap between public and private environments, but plan for integration friction like identity and key management
    • Treat storage as a performance system, not a procurement line item; the right persistence layer can unblock training, inference, and downstream pipelines

    What's next: If you’re building with AI, running data platforms, or trying to get your cloud costs under control, follow the show and subscribe so you do not miss upcoming episodes. Share this one with a teammate who owns infrastructure, data, or platform engineering.
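    The episode's point about variable egress charges dominating "fixed" storage costs can be made concrete with a toy calculator. The per-GB rates below are illustrative assumptions for the sketch, not any provider's actual pricing:

```python
# Toy comparison of a fixed monthly storage cost vs. variable egress cost.
# Rates are illustrative assumptions, not real provider pricing.

STORAGE_PER_GB = 0.023   # assumed $/GB-month for object storage
EGRESS_PER_GB = 0.09     # assumed $/GB for data leaving the cloud

def monthly_cost(stored_gb: float, egress_gb: float) -> dict:
    """Break a monthly bill into fixed storage vs. variable egress."""
    storage = stored_gb * STORAGE_PER_GB
    egress = egress_gb * EGRESS_PER_GB
    return {"storage": round(storage, 2),
            "egress": round(egress, 2),
            "total": round(storage + egress, 2)}

# An AI pipeline that repeatedly pulls a 10 TB dataset out of the cloud
# can make egress, not storage, the dominant line item.
bill = monthly_cost(stored_gb=10_000, egress_gb=30_000)
print(bill)  # → {'storage': 230.0, 'egress': 2700.0, 'total': 2930.0}
```

    In this sketch the egress line is more than ten times the storage line, which is the pattern that pushes teams toward on-prem object storage with predictable costs.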

    26 min
  2. AI Is Changing Art Faster Than You Think.

    3D AGO

    This is an early conversation I am bringing back because it feels even more relevant now: the intersection of AI and art is turning into a real cultural shift. I sit down with Marnie Benney, independent curator at the intersection of contemporary art and technology, and co-founder of AIartists.org, a major community for artists working with AI. We talk about what AI art actually is beyond the headlines, where authorship gets messy, and why artists might be the best people to pressure test the societal impact of machine learning.

    Key takeaways
    • AI in art is not a single thing; it is a spectrum of choices: dataset, process, medium, and intent
    • The most interesting work treats AI as a collaborator, not a shortcut, a back and forth that reshapes the artist’s decisions
    • Authorship is still unsettled: some artists see AI as a tool like an instrument, others treat it as a creative partner
    • The fear that AI replaces creativity misses the point; artists can use the machine’s unexpected output to expand human expression
    • Access matters: compute, tooling, and collaboration between artists and technologists will shape who gets to experiment at the frontier

    Timestamped highlights
    00:04:00 Curating science, climate, and public engagement: the path into tech driven exhibitions
    00:07:41 What AI art can mean in practice: datasets, iteration loops, and choosing an output medium
    00:10:48 Who gets credit: tool versus collaborator, and the art world’s evolving rules
    00:13:51 Fear, job displacement, and a healthier frame: human plus machine as a creative partnership
    00:22:57 The new skill stack: what artists need to learn, and where collaboration beats handoffs
    00:29:28 The pushback from traditional art circles: philosophy and intention versus novelty
    00:37:17 Inside the New York exhibition: collaboration between human and machine, visuals, sculpture, and sound
    00:48:16 The magic of the unknown: why the output can surprise even the artist

    A line that stuck
    “Artists are largely showing a mirror to society of what this technology is, for the positive and the negative.”

    Pro tips for builders and operators
    • Treat creative communities as an early signal; artists surface second order effects before markets do
    • If you are building AI products, study authorship debates; they map directly to credit, accountability, and trust
    • Collaboration beats delegation; when domain experts and technologists iterate together, the work gets sharper fast

    Call to action
    If this episode hits for you, follow the show so you do not miss the next drop. And if you are building in data, AI, or modern tech teams, follow me on LinkedIn for more conversations that connect technology to real world impact.

    51 min
  3. AI in the Enterprise, Why Pilots Fail and What Actually Scales

    4D AGO

    Most teams are approaching AI from the wrong direction: either chasing the tech with no clear problem or spinning up endless pilots that never earn their keep. In this episode, Amir Bormand sits down with Steve Wunker, Managing Director at New Markets Advisors and co-author of AI and the Octopus Organization, to break down what actually works in enterprise AI. You will hear why the real challenge is organizational, not technical, how IT and business have to co-own the outcome, and what it takes to keep AI systems valuable over time. If you are trying to move beyond experimentation and into real impact, this conversation gives you a practical blueprint.

    Key takeaways
    • Pick a handful of high impact problems, not hundreds of small pilots; focus is what creates measurable ROI
    • Treat AI as a workflow and change program, not a tool you bolt onto an existing process
    • IT has to evolve from order taker to strategic partner, including stronger AI ops and ongoing evaluation
    • Start with the destination: redefine the value proposition first, then redesign the operating model around it
    • Ongoing ownership matters; AI is not a one and done delivery, it needs stewardship to stay useful

    Timestamped highlights
    00:39 What New Markets Advisors actually does: innovation with a capital I, plus AI in value props and operations
    01:54 The two common mistakes: pushing AI everywhere and launching hundreds of disconnected pilots
    04:19 Why IT cannot just take orders anymore, plus why AI ops is not the same as DevOps
    07:56 Why the octopus is the perfect model for an AI age organization: distributed intelligence and rapid coordination
    11:08 The HelloFresh example: redesign the destination first, then let everything cascade from that
    17:37 The line you will remember: AI is an ongoing commitment, not a project you ship and forget
    20:50 A cautionary pattern from the dotcom era: avoid swinging from timid pilots to extreme headcount mandates

    A line worth keeping
    “You cannot date your AI system, you need to get married to it.”

    Pro tips for leaders building real AI outcomes
    • Define success metrics before you build, then measure pre and post; otherwise you are guessing
    • Redesign the process, do not just swap one step for a model; aim for fewer steps, not faster steps
    • Assign long term ownership; budget for maintenance, evaluation, and model oversight from day one

    Call to action
    If this episode helped you rethink how to drive AI results, follow the show and subscribe so you do not miss the next conversation. Share it with a leader who is stuck in pilot mode and wants a path to production.

    24 min
  4. AI Is Rewriting Manufacturing Quality, Here’s What Changes

    5D AGO

    Manufacturing is getting faster, messier, and more expensive when quality slips. Daniel First, Founder and CEO at Axion, joins Amir to break down how AI is changing the way manufacturers detect issues in the field, trace root causes across messy data, and shorten the time from “customers are hurting” to “we fixed it.”

    Episode Summary
    Daniel First explains why modern manufacturing is living in the bottom of the quality curve longer than ever, and how AI can help companies spot issues early, investigate faster, and actually close the loop before warranty costs and customer trust spiral. If you work anywhere near hardware, infrastructure, or complex systems, this is a sharp look at what “AI first” means when real products fail in the real world. You will hear why quality is becoming a competitive weapon, how unstructured signals hide the truth, and what changes when AI agents start doing the detection, investigation, and coordination work humans have been drowning in.

    What you will take away
    • Quality is not just a defect problem; it is a speed and trust problem, especially when product cycles keep compressing
    • AI creates leverage by pulling together signals across the full product life cycle, not by sprinkling a chatbot on one system
    • The fastest teams win by finding issues earlier, scoping impact correctly, and fixing what matters before customers notice the pattern
    • A clear ROI often lives in warranty cost avoidance and downtime reduction, not just “efficiency” metrics
    • “AI first” gets real when strategy becomes operational, and contradictions in how teams prioritize issues get exposed

    Timestamped highlights
    00:00 Why manufacturing is a different kind of problem, and why speed is harder than it looks
    01:10 What Axion does, and how it detects, investigates, and resolves customer impacting issues
    05:10 The new reality: faster product cycles mean living in the bottom of the quality curve
    10:05 Why it can take hundreds of days to truly solve an issue, and where the time disappears
    16:20 How to evaluate AI vendors in manufacturing: specialization, integrations, and cross system workflows
    22:40 The shift coming to quality teams: from reading data all day to making higher level decisions
    28:10 What “AI first” looks like in practice, and how AI exposes misalignment across teams

    A line worth repeating
    “Humans are not that great at investigating tens of millions of unstructured data points, but AI can detect, scope, root cause, and confirm the fix.”

    Pro tips you can apply
    • When evaluating an AI solution, ask three questions up front: how specialized the AI must be, whether you need a full workflow solution or just an API, and whether the use case spans multiple systems and teams
    • Treat early detection as a first class objective; the longer the accumulation phase, the more cost and customer damage you silently absorb
    • Align issue prioritization to strategy, not just frequency, cost, or the loudest internal voice

    Follow: If this episode helped you think differently about quality, speed, and AI in the real world, follow the show on Apple Podcasts or Spotify so you do not miss the next one. If you want more conversations like this, subscribe to the newsletter and connect with Amir on LinkedIn.

    25 min
  5. Synthetic Data Explained, When It Helps AI and When It Hurts

    6D AGO

    Synthetic data is moving from a niche concept to a practical tool for shipping AI in the real world. In this episode, Amit Shivpuja, Director of Data Product and AI Enablement at Walmart, breaks down where synthetic data actually helps, where it can quietly hurt you, and how to think about it like a data leader, not a demo builder. We dig into what blocks AI from reaching production, how regulated industries end up with an unfair advantage, and the simple test that tells you whether synthetic data belongs anywhere near a decision making system.

    Key Takeaways
    • AI success still lives or dies on data quality, trust, and traceability, not model hype
    • Synthetic data is best for exploration, stress testing, and prototyping, but it should not be the backbone of high stakes decisions
    • If you cannot explain how an output was produced, synthetic only pipelines become a risk multiplier fast
    • Regulated industries often move faster with AI because their data standards, definitions, and documentation are already disciplined
    • The smartest teams plan data early in the product requirements phase, including whether they need synthetic data, third party data, or better metadata

    Timestamped Highlights
    00:01 The real blockers to getting AI into production: data, culture, and unrealistic scale assumptions
    03:40 The satellite launch pad analogy: why data is the enabling infrastructure for every serious AI effort
    07:52 Regulated vs unregulated industries: why structure and standards can become a hidden advantage
    10:47 A clean definition of synthetic data: what it is, and what it is not
    16:56 The “explainability” yardstick: when synthetic data is reasonable and when it is a red flag
    19:57 When to think about data in stakeholder conversations: why data literacy matters before the build starts

    A line worth sharing
    “AI is like launching satellites. Data is the launch pad.”

    Pro Tips for tech leaders shipping AI
    • Start data discovery at the same time you write product requirements, not after the prototype works
    • Use synthetic data early, then set milestones to shift weight toward real world data as you approach production
    • Sanity check the solution; sometimes a report, an email, or a deterministic workflow beats an AI system

    Call to Action
    If this episode helped you think more clearly about data strategy and AI delivery, follow the show on Apple Podcasts and Spotify, and share it with a builder or leader who is trying to get AI out of pilot mode. You can also follow me on LinkedIn for more episodes and clips.
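    The "exploration, not backbone" distinction can be sketched with a toy generator. The field names, ranges, and seed below are illustrative assumptions; each field is sampled independently, unlike real synthetic-data tooling that models joint distributions and privacy constraints:

```python
import random

def synthetic_orders(n: int, seed: int = 42) -> list[dict]:
    """Generate toy order records for prototyping and stress tests.

    Fields are sampled independently, so cross-field correlations in
    real data are NOT preserved -- one reason synthetic-only pipelines
    are risky as the basis for high-stakes decisions.
    """
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    regions = ["north", "south", "east", "west"]
    return [
        {
            "order_id": i,
            "region": rng.choice(regions),
            "amount": round(rng.uniform(5.0, 500.0), 2),
        }
        for i in range(n)
    ]

# Enough to exercise schemas, dashboards, and pipelines before real data lands.
rows = synthetic_orders(1000)
```

    Because every output is produced by an explicit, seeded process, this kind of data passes the episode's explainability yardstick; a pipeline that cannot say how its synthetic rows were produced would not.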

    26 min
  6. The Real Learning Curve of Engineering Management

    FEB 2

    Tom Pethtel, VP of Engineering at Flock Safety, breaks down the real learning curve of moving from builder to manager, and how to keep your technical edge while scaling your impact through people. You will hear how Tom’s path from rural Ohio to leading high stakes engineering teams shaped his approach to leadership, hiring, and staying close to the customer.

    Key Takeaways
    • Promotions usually come from doing your current job well, plus stepping into the work above you that is not getting done
    • Great leaders do not fully detach from the craft; they stay close enough to the work to make good calls and keep context
    • Put yourself where the real learning is happening: watch customers, go to the failure point, get proximity to the source of truth
    • Hiring is not only pedigree; it is fundamentals plus grit, the willingness to solve what looks hard because it is “just software”
    • As you scale to teams of teams, your job becomes time allocation: jump on the biggest business fire while still making rounds everywhere

    Timestamped Highlights
    00:32 What Flock Safety actually builds, from AI enabled devices to Drone as a First Responder
    02:04 Dropping out of Georgia Tech, switching disciplines, and choosing software for speed and impact
    03:30 A life threatening detour, learning you owe 18,000 dollars, and teaching yourself to build an iPhone app to survive
    06:33 Why Tom values grit and non traditional backgrounds in hiring, and the “it is just software” mindset
    08:46 Proximity and learning: go to the problem, plus the lessons he borrows from the Toyota Production System
    09:55 A practical story of chasing expertise, from Kodak to Nokia, and hiring the right leader by going where the knowledge lives
    14:27 The truth about becoming a manager: you rarely feel ready, you take the seat and learn fast
    19:18 Leading teams of teams: you cannot be everywhere, so you go where the biggest fire is, without neglecting the rest
    22:08 The promotion playbook: stop only doing your job, start solving the next job

    A line worth stealing
    “Do your job really well, plus go do the work above you that is not getting done, that’s how you rise.”

    Pro Tips for engineers stepping into leadership
    • Stay technical enough to keep your judgment sharp, even if it is only five or ten percent of your week
    • If you want to grow, chase proximity: sit with the customer, sit with the failure, sit with the best people in the space
    • Measure your impact as leverage; if a team of ten is producing ten times, your role is not less valuable, it is multiplied
    • When you lead multiple disciplines, rotate your attention intentionally; do not camp on one fire for a full year

    Call to Action
    If this episode helped you rethink leadership, share it with one builder who is about to step into management. Subscribe on Apple Podcasts, Spotify, and YouTube, and follow Amir on LinkedIn for more conversations with operators building real teams in the real world.

    26 min
  7. Retention for Engineering Teams, What Keeps Top People Around

    JAN 30

    Phil Freo, VP of Product and Engineering at Close, has lived the rare arc from founding engineer to executive leader. In this conversation, he breaks down why he stayed nearly 12 years, and what it takes to build a team that people actually want to grow with. We get into retention that is earned, not hoped for, the culture choices that compound over time, and the practical systems that make remote work and knowledge sharing hold up at scale.

    Key takeaways
    • Staying for a decade is not about loyalty; it is about the job evolving and your scope evolving with it
    • Strong retention is often a downstream effect of clear values, internal growth opportunities, and leaders who trust people to level up
    • Remote can work long term when you design for it, hire for communication, and invest in real relationship building
    • Documentation is not optional in remote work, and short lived chat history can force healthier knowledge capture
    • Bootstrapped, customer funded growth can create stability and control that makes teams feel safer during chaotic markets

    Timestamped highlights
    00:02:13 The founders, the pivots, and why Phil joined before Close was even Close
    00:06:17 Why he stayed so long: the role keeps changing, and the work gets more interesting as the team grows
    00:10:54 “Build a house you want to live in”: how valuing tenure shapes culture, code quality, and decision making
    00:14:14 Remote as a retention advantage: moving life forward without leaving the company behind
    00:20:23 Over documenting on purpose, plus the Slack retention window that forces real knowledge capture
    00:22:48 Bootstrapped versus VC backed: why steady growth can be a competitive advantage when markets tighten
    00:28:18 The career accelerant most people underuse: initiative, and championing ideas before you are asked

    One line worth stealing
    “Inertia is really powerful. One person championing an idea can really make a difference.”

    Practical ideas you can apply
    • If you want growth where you are, do not wait for permission; propose the problem, the plan, and the first step
    • If you lead a team, create parallel growth paths; management is not the only promotion ladder
    • If you are remote, hire for writing, decision clarity, and follow through, not just technical depth
    • If Slack is your company memory, it is not memory; move durable knowledge into docs, issues, and specs

    Stay connected: If this episode sparked an idea, follow or subscribe so you do not miss the next one. And if you want more conversations on building durable product and engineering teams, check out my LinkedIn and newsletter.

    31 min
  8. Data Orchestration and Open Source Strategy

    JAN 29

    Pete Hunt, CEO of Dagster Labs, joins Amir Bormand to break down why modern data teams are moving past task based orchestration, and what it really takes to run reliable pipelines at scale. If you have ever wrestled with Apache Airflow pain, multi team deployments, or unclear data lineage, this conversation will give you a clearer mental model and a practical way to think about the next generation of data infrastructure.

    Key Takeaways
    • Data orchestration is not just scheduling; it is the control layer that keeps data assets reliable, observable, and usable
    • Asset based thinking makes debugging easier because the system maps code directly to the data artifacts your business depends on
    • Multi team data platforms need isolation by default; without it, shared dependencies and shared failures become a tax on every team
    • Good software engineering practices reduce data chaos, and the tools can get simpler over time as best practices harden
    • Open source makes sense for core infrastructure, with commercial layers reserved for features larger teams actually need

    Timestamped Highlights
    00:00:50 What Dagster is, and why orchestration matters for every data driven team
    00:04:18 The origin story: why critical institutions still cannot answer basic questions about their data
    00:07:02 The architectural shift: moving from task based workflows to asset based pipelines
    00:08:25 The multi tenancy problem: why shared environments break down across teams, and what to do instead
    00:11:21 The path out of complexity: why software engineering best practices are the unlock for data teams
    00:17:53 Open source as a strategy: what belongs in the open core, and what belongs in the paid layer

    A Line Worth Repeating
    “Data orchestration is infrastructure, and most teams want their core infrastructure to be open source.”

    Pro Tips for Data and Platform Teams
    • If debugging feels impossible, you may be modeling your system around tasks instead of the data assets the business actually consumes
    • If multiple teams share one codebase, isolate dependencies and runtime early; shared Python environments become a silent reliability risk
    • Reduce cognitive load by tightening concepts; fewer new nouns usually means a smoother developer experience

    Call to Action
    If this episode helped you rethink data orchestration, follow the show on Apple Podcasts and Spotify, and subscribe so you do not miss future conversations on data, AI, and the infrastructure choices that shape real outcomes.
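    The task-to-asset shift discussed in the episode can be sketched in a few lines. This is a toy dependency resolver, not Dagster's actual API (Dagster's real interface centers on an `@asset` decorator in the `dagster` package); the names here are illustrative:

```python
# Toy asset-based orchestration: each function produces a named data
# asset and declares the upstream assets it consumes. Debugging then
# maps to "which asset is wrong or stale?" instead of "which task failed?"

ASSETS = {}  # asset name -> (list of upstream asset names, build function)

def asset(name, deps=()):
    """Register a function as the producer of a named data asset."""
    def register(fn):
        ASSETS[name] = (list(deps), fn)
        return fn
    return register

def materialize(name, cache=None):
    """Build an asset after recursively building its upstream deps."""
    cache = {} if cache is None else cache
    if name not in cache:
        deps, fn = ASSETS[name]
        cache[name] = fn(*[materialize(d, cache) for d in deps])
    return cache[name]

@asset("raw_events")
def raw_events():
    return [1, 2, 3, 4]  # stand-in for an extracted table

@asset("daily_totals", deps=["raw_events"])
def daily_totals(events):
    return sum(events)  # stand-in for a derived, business-facing asset

print(materialize("daily_totals"))  # → 10
```

    Because the graph is declared in terms of the data artifacts themselves, lineage falls out for free: to know where `daily_totals` comes from, you read its declared deps rather than reverse engineering a task schedule.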

    23 min
5 out of 5, 75 Ratings
