Data in Biotech

CorrDyn

Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in life sciences.  Every two weeks, Ross Katz, Principal and Data Science Lead at CorrDyn, sits down with an expert from the world of biotechnology to understand how they use data science to solve technical challenges, streamline operations, and further innovation in their business.  You can learn more about CorrDyn - an enterprise data specialist that enables excellent companies to make smarter strategic decisions - at www.corrdyn.com

  1. 4 HRS AGO

    Reflections & Predictions: One Year of Data in Biotech with Ross Katz

    In this episode of Data in Biotech, Ross Katz reflects on what he’s learned from one year of hosting the podcast. Diving deep into the intersection of data science and biotechnology, this episode covers topics like: the need for predictive models in biotech that are grounded in real-world experimentation; the challenges of bias in model evaluation and of designing experiments that maximize the information collected for iterative improvement; and the balance between leveraging computational methods and validating insights through experimental data. As we look to 2025, Ross shares his vision of the emerging democratization of the biotech data ecosystem through making domain knowledge, datasets, and tools more accessible. He discusses the possibility of a future where decentralized collaboration, akin to open-source software projects, can tackle specific diseases through computational pipelines and cloud labs, enabling experiments without the need for costly infrastructure; and where emerging trends like foundation models and ensemble modeling in drug discovery and cell and gene therapy, together with new data from advanced imaging and assay technologies, can be unlocked to create novel insights. Finally, he invites regular listeners to contribute ideas, guest suggestions, and resources as we build community and embrace more curiosity and openness. Data in Biotech is a fortnightly podcast exploring how companies leverage data innovation in the life sciences.

    40 min
  2. OCT 30

    From Moderna to Dash Bio - Revolutionizing Drug Development with Dave Johnson

    In this episode of Data in Biotech, Ross Katz sits down with Dave Johnson, CEO and co-founder of Dash Bio, a next-gen drug development services company on a mission to revolutionize clinical bioanalysis and streamline drug development. Dave begins the episode by taking us back to his early research days at Moderna, where he helped lay the groundwork for mRNA technology, which later enabled the development of a COVID-19 vaccine at unprecedented speed. As he explains, this automation work and the pre-built systems behind it ultimately played a central role in responding to urgent health challenges. He also shares his firsthand experience of working in a rapidly scaling pharma company, discussing the challenges that arose along the way and the lessons he learned in overcoming them. Dave then highlights the most significant inefficiencies in drug development, particularly the lack of industrialization and standardization, and explains how Dash Bio aims to address these issues, focusing on clinical bioanalysis now and expanding to broader standardization later. The goal is ultimately to develop a more efficient, high-quality end-to-end system and improve the overall efficiency of the drug development process. Finally, Dave and Ross discuss the misconceptions surrounding lab automation and emphasize the need for a shift in perspective within the drug development space. They also touch on Dave’s vision for the future of Dash Bio, plus his advice for aspiring biotech data leaders eager to contribute to industry transformation. Data in Biotech is a fortnightly podcast exploring how companies leverage data innovation in the life sciences.
Chapter Markers: [1:38] Introduction to Dave Johnson and his career journey from Moderna to founding a next-gen drug development company [2:57] Establishing the mRNA technology groundwork at Moderna [4:36] The challenges of scaling up COVID-19 vaccine development [7:55] How rapid company growth impacts organizational structure and engagement models [11:03] The role of AI, automation, and machine learning in drug development [12:45] Addressing the most significant inefficiencies in drug development and potential solutions [16:31] The need for standardization and automation in drug development [18:04] Dash Bio's current focus on clinical bioanalysis [19:37] The misconceptions surrounding lab automation and the need for a shift in perspective within the drug development space [22:33] Dave’s vision for the future of Dash Bio and streamlining drug development [25:16] The current state of lab automation [27:41] The role of experimentation in Dash Bio's approach [29:47] Advice for aspiring data scientists and leaders in the biotech sector. Useful Links: Dave Johnson on LinkedIn; Dash Bio website

    33 min
  3. OCT 16

    Chitrang Dave on Harnessing Real-Time Data to Transform MedTech and Healthcare

    This week, Chitrang Dave, Global Head of Enterprise Data & Analytics at Edwards Lifesciences, joins us to discuss the transformative power of real-time data, AI, and collaboration in medical device manufacturing and support.  He and host Ross Katz dive into how real-time data from IoT devices is reshaping quality assurance in medtech and what the future holds for medtech as big tech players like Apple and Meta enter the healthcare arena.  Together, they discuss everything from AI-powered patient identification to the integration of consumer wearables with FDA-approved medical devices. Tune in to hear how collaboration, innovation, and cutting-edge technology are improving patient outcomes and revolutionizing healthcare. Data in Biotech is a fortnightly podcast exploring how companies leverage data innovation in the life sciences. Chapter Markers [01:36] Chitrang shares the experience that led him to work at leading data and analytics organizations and the work there is to be done [04:09] Chitrang highlights the role of IoT devices in medical device manufacturing, where real-time data can drive automation and improve quality assurance [06:25] What is driving innovation right now in research and development, and how companies like Apple are disrupting the medical device space  [09:23] Chitrang talks about how connectivity in devices and users' expectations of an intuitive interface are evolving into more real-time medical device technology  [11:47] The importance of keeping patient data private between the patient and the practitioner while using anonymized data to create solutions and identify patterns in health  [13:25] Using data to create a complete picture of the patient in order to make their life easier  [14:20] Chitrang discusses the challenge of manufacturing medical devices when there are issues with raw materials  [16:30] Chitrang discusses the potential of automation for real-time data in manufacturing [19:17] Ross and 
Chitrang discuss the value of having comprehensive data to personalize treatments and ensure timely responses, especially for scenarios where early detection of Alzheimer’s can save trillions of dollars [21:27] Chitrang mentions significant collaborations, such as the Cancer AI Alliance, where tech giants like AWS, Microsoft, NVIDIA, and Deloitte are working together to address critical problems in healthcare [27:10] How real-time data from medical devices could improve patient outcomes, stakeholder coordination and future trends  [28:29] Closing thoughts and where to find Chitrang Dave online  Download CorrDyn’s latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.” Find the white paper online at: https://connect.corrdyn.com/biotech-ml

    29 min
  4. OCT 2

    Automating Bioprocessing to Speed up Workflows with Invert

    This week on Data in Biotech, we’re joined by Martin Permin, the co-founder of Invert, a company that builds software to automate bioprocessing. Martin talks us through his own unique journey into biotech - starting from a role at Airbnb - through to co-founding Invert. Invert helps users grab data from their instruments, map out their individual processes, clean up the data for analysis, and look for ways to speed up the “mundane” data cleaning tasks that often take up the majority of one’s time.  With our host, Ross Katz, Martin explains the statistical problems Invert works to solve for its different types of clients: biologic development labs, full-scale manufacturers, and CDMOs. While they all approach data cleaning and analysis from different directions, Invert can see how clients use the system and look for ways to automate repeated processes to help them save time. They discuss implementing Invert into the Design, Build, Test, Learn loop and why Invert is invested in reducing how many times one has to go around that loop. Martin explains how his company looks to reduce the risk in tech transfer in both directions, in terms of time and labor.  Then, the conversation moves to ML/AI, where Martin tells us how many of his customers are finding that the bottlenecks in their processes aren’t where they thought they were, thanks to using Invert for process automation.  Finally, Martin gives us his opinions on the future trends around the corner for the biotech industry - and how Invert is preparing itself and its customers.  Data in Biotech is a fortnightly podcast exploring how companies leverage data innovation in the life sciences. 
Chapter Markers [1:29] Introduction to Martin and his journey into biotech [4:10] Introduction to Invert - the what and why [6:47] How Invert is implemented into a customer’s workflow [11:36] The problems Invert can solve [16:16] Design > build > test > learn… and how Invert facilitates that [20:00] CDMOs and contractors - how Invert works with their different customers [22:15] The use of ML/AI in bio-processing [33:40] Trends in Biotech that will influence Invert over the long-term

    37 min
  5. SEP 18

    The Evolution of Genomic Analysis with Sapient

    This week on Data in Biotech, we’re joined by Mo Jain, the Founder and CEO of Sapient, a biomarker discovery organization that enables biopharma sponsors to go beyond the genome to accelerate precision drug development.  Mo talks us through his personal journey into the world of science, from school to working in academia to founding his business, Sapient. He explains how and why Sapient first started and the evolution of the high-throughput mass-spectrometry service it provides to the biopharmaceutical sector.  Together with our host Ross, he explores the technology that has allowed scientists to probe one's medical history like never before via metabolome, lipidome, and proteome analysis, and how that technology developed to take data testing from running twenty tests per blood sample to twenty thousand. How has Sapient built itself into such a renowned name in biopharmaceuticals for large-scale data projects? They discuss Sapient’s process when working with clients on genome projects. We learn about Sapient’s relationship with its clients, how it understands the targets and aims of each project, why it puts so much importance on proprietary database management and quality control, and Sapient’s three pillars of high-quality data discovery. Finally, Mo takes the opportunity to give us his insights on the future of biomarker discovery and mass-spectrometry technology, and how AI and Machine Learning are leading to enhanced data quality and quantity.  Data in Biotech is a fortnightly podcast exploring how companies leverage data innovation in the life sciences. 
Chapter Markers [1:33] Introduction to Mo Jain, his journey, Genomics, and Sapient’s use of Genomics data to accelerate Medicine and Drug Development  [6:50] The types of data generated at Sapient via metabolome, lipidome & proteome, and why that data is generated [12:30] How Sapient generates this data at scale, via specialist mass-spectrometry technology  [14:48] The problems Sapient can solve for pharma and biotech companies with this data [21:03] Sapient as a service company: the questions they’re asked by pharmaceutical businesses, why they come to Sapient, and Sapient’s process for answering those questions  [26:23] Computational frameworks and data handling, and how the team interacts with the client [29:59] Proprietary database development and quality control  [35:27] The future of biomarker discovery and mass-spectrometry technology, and how AI and Machine Learning are leading the way at Sapient

    43 min
  6. AUG 28

    Transforming Drug Discovery through AI and Single-Cell Multiomics with Cellarity

    This week on Data in Biotech, we are joined by Parul Bordia Doshi, Chief Data Officer at Cellarity, a company that is leveraging data science to challenge traditional approaches to drug discovery.  Parul kicks off the conversation by explaining Cellarity’s mission and how it is using generative AI and single-cell multiomics to design therapies that target the entire cellular system, rather than focusing on single molecular targets. She gives insight into the functionality of Cellarity Maps, the company’s cutting-edge visualization tool that maps the progression of disease states and bridges the gap between biologists and computational scientists.  Along with host Ross Katz, Parul walks through some of the big challenges facing Chief Data Officers, particularly for biotech organizations with data-centric propositions. She emphasizes the importance of robust data frameworks for validating and standardizing complex data sets, and looks at some of the practical approaches that ensure data scientists can derive the maximum amount of value from all available data.  They discuss what data science teams look like within Cellarity, including the unique way the company incorporates human intervention into its processes. Parul also emphasizes the benefits that come through hiring multilingual, multidisciplinary teams and putting a strong focus on collaboration.  Finally, we get Parul’s take on the future of data science for drug discovery, plus a look at Cellarity’s ongoing collaboration with Novo Nordisk on the development of novel therapeutics.  Data in Biotech is a fortnightly podcast exploring how companies leverage data innovation in the life sciences. Chapter Markers [1:45] Introduction to Parul, her career journey, and Cellarity’s approach to drug discovery. [5:47] The life cycle of data at Cellarity from collection to how it is used by the organization.  
[7:45] How the Cellarity Maps visualization tool is used to show the progression of disease states [9:05] The role of a Chief Data Officer in aligning an organization’s data strategy with its company mission.   [11:46] The benefits of collaboration and multidisciplinary, cross-functional teams to drive innovation.  [14:53] Cellarity's end-to-end discovery process; including how it uses generative AI, contrastive learning techniques, and visualization tools.  [19:42] The role of humans vs the role of machines in scientific processes.  [23:05] Developing and validating models, including goal setting, benchmarking, and the need for collaboration between data teams and ML scientists. [30:58] Generating and managing massive amounts of data, ensuring quality, and maximizing the value extracted. [37:08] The future of data science for drug discovery, including Cellarity’s collaboration with Novo Nordisk to discover and develop a novel treatment for MASH.

    40 min
  7. AUG 14

    Using Generative AI to Design New Therapeutic Proteins with Evozyne

    This week on Data in Biotech, Ryan Mork, Director of Data Science at Evozyne, joins host Ross Katz to discuss how data science and machine learning are being used in protein engineering and drug discovery. Ryan explains how Evozyne is utilizing large language models (LLMs) and generative AI (GenAI) to design new biomolecules, training the models with huge volumes of protein and biology data. He walks through the organization’s evolution-based design approach and how it leverages the evolutionary history of protein families. Ross and Ryan dig into the different models being used by Evozyne, including latent variable models and embeddings. They also discuss some of the challenges around testing the functionality of models and the approaches that can be used for evaluation. Alongside the deep dive into data and modeling topics, Ryan also discusses the importance of relationships between the wet lab and data science teams. He emphasizes the need for mutual understanding of each role to ensure the entire organization pulls together towards the same goals. Finally, Ross asks Ryan to opine on the future of GenAI and LLMs for biotechnology and how this area will develop over the next five years. He also finds out more about the R&D roadmap at Evozyne and its plans to play a part in moving GenAI for protein engineering forward. Data in Biotech is a fortnightly podcast exploring how companies leverage data innovation in the life sciences. Chapter Markers [1:24] Introduction to Ryan, his career to date, and the focus of Evozyne. [2:59] How the Evozyne data science team operates and the data sources it utilizes. [4:22] Building models to develop synthetic proteins for therapeutic uses. [9:10] Deciding which proteins to take into the lab for experimental validation. [10:49] Taking an evolution-based design approach to protein engineering. [14:34] Using latent variable models and embeddings to capture evolutionary relationships. 
[18:01] Evaluating the functionality of generative models and the role of auxiliary models. [24:24] The value of tight coupling and mutual understanding between wet lab and data science teams. [28:07] Evozyne’s approach to developing and testing new data science tools, models, and technologies. [31:35] Predictions for future developments in Generative AI for biotechnology. [33:41] Evozyne’s goal to increase throughput and its planned approach. [39:09] Where to connect with Ryan and keep up to date with news from Evozyne.

    40 min
  8. JUL 31

    Balancing Software-Driven Processes and Human Curation to Unlock Genomics Intelligence with Genomenon

    This week on Data in Biotech, Ross is joined by Jonathan Eads, VP of Engineering at genomics intelligence company Genomenon, to discuss how his work supports the company’s mission to make genomic evidence actionable. Jonathan explains his current role leading the teams focused on clinical engineering, curation engineering, and platform development, and overseeing Genomenon’s data science and AI efforts. He gives insight into how Genomenon’s software works to scan genomics literature and index genetic variants, providing critical evidence-based guidance for those working across biotech, pharmaceutical, and medical disciplines. Jonathan outlines the issues with inconsistent genetic data, variant nomenclature, and extracting genetic variants from unstructured text, before explaining how human curators are essential to ensuring accuracy of output. Jonathan and Ross discuss the opportunities and limitations that come with using AI and natural language processing (NLP) techniques for genetic variant analysis. Jonathan lays out the process of developing robust validation datasets and fine-tuning AI models to handle issues like syntax anomalies, and outlines the need to balance the short-term need for data quality with the long-term goal of advancing the platform’s AI and automation capabilities. We hear notable success stories of how Genomenon’s platform is being used to accelerate variant interpretation, disease diagnosis, and precision medicine development. Finally, Ross gets Jonathan’s take on the future of genomics intelligence, including the potential of end-to-end linkage of information from variants all the way out to patient populations. Data in Biotech is a fortnightly podcast exploring how companies leverage data innovation in the life sciences. Chapter Markers [1:50] Introduction to Jonathan and his academic and career background. [5:14] What Genomenon’s mission to ‘make genomic evidence actionable’ looks like in practice. 
[14:48] The limitations of how scientists and doctors have historically been able to use literature to understand genetic variants. [16:08] Challenges with nomenclature and indexing and how this impacts access to information.  [18:11] Extracting genetic variants from scientific publications into a structured, searchable index. [22:04] Using a combination of software processes and human curation for accurate research outputs. [24:57] Building high-functionality, complex, and accurate software processes to analyze genomic literature. [29:45] Dealing with the challenges of AI and the role of human curators in the accuracy of genetic variant classification.   [34:37] Managing the trade-off between short-term needs for improved data and long-term goals for automation and AI development. [38:39] Success stories using the Genomenon platform, including making an FDA case and diagnosing rare disease.  [41:55] Predictions for future advancements in literature search for genetic variant analysis. [43:21] The potential impact of Genomenon’s acquisition of Jack's Clinical Knowledge Base.

    41 min

