
Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in life sciences. 


Every two weeks, Ross Katz, Principal and Data Science Lead at CorrDyn, sits down with an expert from the world of biotechnology to understand how they use data science to solve technical challenges, streamline operations, and further innovation in their business. 


You can learn more about CorrDyn - an enterprise data specialist that enables excellent companies to make smarter strategic decisions - at www.corrdyn.com


    Solving Data Integration Challenges in Life Sciences with Ganymede


    This week, Nathan Clark, CEO at Ganymede, joins the Data in Biotech podcast to discuss the challenges of integrating lab instruments and data in the biotech industry and how Ganymede’s developer platform is helping to automate data integration and metadata management across the life sciences.


    Nathan sits down with Data in Biotech host Ross Katz to discuss the multiple factors that add to the complexity of handling lab data, from the evolutionary nature of biology to the lab instruments being used. Nathan explains the importance of collecting metadata that can serve as unique identifiers, which are essential to enabling automation and data workflows.
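
    To make the idea concrete, here is a minimal Python sketch of the pattern Nathan describes: stamping each raw instrument file with a unique ID and a small metadata record so downstream automation can trace every result back to its source. This is an illustration only, not Ganymede's actual API; the file layout and field names are hypothetical.

    ```python
    # Illustrative sketch only -- not Ganymede's API. Stamp a raw
    # instrument file with a unique ID plus a sidecar metadata record
    # so downstream pipelines can join results back to samples.
    import hashlib
    import json
    import uuid
    from datetime import datetime, timezone
    from pathlib import Path

    def register_instrument_file(path: Path, instrument: str, sample_id: str) -> dict:
        """Write a sidecar metadata record keyed by a unique ID."""
        record = {
            "file_id": str(uuid.uuid4()),  # the unique identifier
            "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),  # integrity check
            "instrument": instrument,      # hypothetical field names
            "sample_id": sample_id,
            "captured_at": datetime.now(timezone.utc).isoformat(),
            "source_path": str(path),
        }
        meta_path = path.with_name(path.name + ".meta.json")
        meta_path.write_text(json.dumps(record, indent=2))
        return record
    ```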


    As the founder of Ganymede, Nathan outlines the fundamentals of the developer platform and how it has been built to deal practically with the data, workflow, and automation challenges unique to life sciences organizations. He explains the need for code to allow organizations to contextualize and consume data and how the platform is built to enable flexible last-mile integration. He also emphasizes Ganymede's vision: to provide tools at varying levels of the stack that glue systems together in whatever way is optimal for each organization's specific ecosystem.


    As well as giving an in-depth overview of how the Ganymede platform works, Nathan digs into some of the key challenges facing life sciences organizations as they undergo digital transformation journeys.


    The need to engage with metadata from the outset to avoid issues down the line, how to rid organizations of secret Excel files and improve data collection, and the regulatory risks that come with poor metadata handling are all covered in this week’s episode.  


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in the life sciences.


    ---


    Chapter Markers


    [1:28] Nathan gives a quick overview of his background and the path that led him to launch Ganymede.


    [5:43] Nathan gives us his perspective on where the complexity of life sciences data comes from.


    [8:23] Nathan explains the importance of using code to cope with the high levels of complexity and how the Ganymede developer platform facilitates this.


    [11:26] Nathan summarizes the three layers in the Ganymede platform: the ‘core platform’, ‘connectors’ or templates, and ‘transforms’, which allow data to be utilized.


    [13:18] Nathan highlights the importance of associating lab data with a unique ID to facilitate data entry and automation.


    [15:05] Nathan outlines the drawbacks of manual data association: it is inefficient, unreliable, and difficult to maintain.


    [16:43] Nathan explains what using Ganymede to manage data and metadata looks like from inside a company.


    [24:50] Ross asks Nathan to describe how Ganymede assists with workflow automation and how it can overcome organization-specific challenges.


    [27:42] Nathan highlights the challenges businesses are looking to solve when they turn to a solution like Ganymede, pointing to three common scenarios.


    [34:32] Nathan emphasizes the importance of laying the groundwork for a data future at an early stage.


    [37:49] Nathan and Ross stress the need for a digital transformation roadmap, with smaller initiatives on the way demonstrating value in their own right.


    [40:35] Nathan talks about the future for Ganymede and what is on the horizon for the company and their customers.


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 44 min
    The Power of Open-Source Pipelines for Scientific Research with Harshil Patel


    This week, Harshil Patel, Director of Scientific Development at Seqera, joins the Data in Biotech podcast to discuss the importance of collaborative, open-source projects in scientific research and how they support the need for reproducibility.


    Harshil lifts the lid on how Nextflow has become a leading open-source workflow management tool for scientists and the benefits of using an open-source model. He talks in detail about the development of Nextflow and the wider Seqera ecosystem, the vision behind it, and the advantages and challenges of this approach to tooling.


    He discusses how the nf-core community collaboratively develops and maintains over 100 pipelines using Nextflow and how the decision to constrain pipelines to one per analysis type promotes collaboration and consistency and avoids turning pipelines into the “wild west.”
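
    For a rough sense of what this looks like in practice, the sketch below launches one of these community pipelines from Python via the Nextflow CLI. It assumes Nextflow and Docker are installed; the pipeline choice, sample sheet, and output directory are placeholder assumptions rather than details from the episode.

    ```python
    # A minimal sketch: run an nf-core pipeline through the Nextflow CLI.
    # Assumes the nextflow binary and Docker are available on this machine.
    import subprocess

    subprocess.run(
        [
            "nextflow", "run", "nf-core/rnaseq",  # one community pipeline per analysis type
            "-profile", "docker",                 # containerized, reproducible execution
            "--input", "samplesheet.csv",         # hypothetical sample sheet
            "--outdir", "results",                # hypothetical output directory
        ],
        check=True,  # raise if the pipeline exits non-zero
    )
    ```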


    We also look more practically at Nextflow adoption as Harshil delves into some of the challenges and how to overcome them.


    He explores the wider Seqera ecosystem and how it helps users manage pipelines, analysis, and cloud infrastructure more efficiently, and he looks ahead to the future evolution of scientific research. 


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in the life sciences.


    ---


    Chapter Markers


    [1:23] Harshil shares a quick overview of his background in bioinformatics and his route to joining Seqera.


    [3:37] Harshil gives an introduction to Nextflow, including its origins, development, and the benefits of using the platform for scientists.


    [9:50] Harshil expands on some of the off-the-shelf pipelines available through nf-core and how the collection continues to expand beyond genomics.


    [12:08] Harshil explains nf-core’s open-source model, the advantages of constraining pipelines to one per analysis type, and how the Nextflow community works.


    [17:43] Harshil talks about Nextflow's custom DSL and the advantages it offers users.


    [20:23] Harshil explains how Nextflow fits into the broader Seqera ecosystem. 


    [26:08] Ross asks Harshil about overcoming some of the challenges that arise with parallelization and optimizing pipelines.


    [28:01] Harshil talks about the features of Wave, Seqera’s containerization solution. 


    [32:16] Ross asks Harshil to share some of the most complex and impressive things he has seen done within the Seqera ecosystem.


    [35:42] Harshil gives his take on how he sees genomics research in biotech evolving.


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 41 min
    Improve Quality and Minimize Variability in Biotech Manufacturing with Stewart Fossceco


    This week, we are pleased to have Stewart Fossceco, Head of Non-Clinical and Diagnostics Statistics at Zoetis and an expert in pharmaceutical manufacturing, join us on the Data in Biotech podcast.


    We sat down with Stewart to discuss implementing and improving Quality Assurance (QA) processes at every stage of biotech manufacturing, from optimizing assay design and minimizing variability in early drug development to scaling this up when moving to full production. Stewart draws on his experience to discuss the importance of experimental design, understanding variability data to inform business decisions, and the pitfalls of over-measuring.


    Along with host Ross Katz, Stewart discusses the value of statistical simulations for mapping out processes and identifying sources of variability, and what this looks like in practice. They also explore the importance of drug stability modeling and how to approach it to ensure product quality beyond the manufacturing process.
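
    For a flavor of the kind of simulation Stewart describes, the Python sketch below runs a simple Monte Carlo over a nested variance-components model of an assay, comparing how different sampling designs shrink the variability of the reported value. The variance figures are illustrative assumptions, not numbers from the episode.

    ```python
    # Monte Carlo sketch of nested assay variability (illustrative values).
    import numpy as np

    rng = np.random.default_rng(0)
    TRUE_POTENCY = 100.0
    SD_DAY, SD_RUN, SD_REP = 3.0, 2.0, 1.5  # assumed variance components

    def simulate_reported_value(n_days, n_runs, n_reps):
        """Reported value = mean over all days x runs x replicates."""
        values = []
        for _ in range(n_days):
            day_effect = rng.normal(0.0, SD_DAY)
            for _ in range(n_runs):
                run_effect = rng.normal(0.0, SD_RUN)
                reps = rng.normal(0.0, SD_REP, size=n_reps)
                values.extend(TRUE_POTENCY + day_effect + run_effect + reps)
        return float(np.mean(values))

    # Compare designs: (days, runs per day, replicates per run).
    for design in [(1, 1, 1), (1, 2, 3), (3, 2, 3)]:
        sims = [simulate_reported_value(*design) for _ in range(5000)]
        print(design, f"SD of reported value: {np.std(sims):.2f}")
    ```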


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in the life sciences.


    ---


    Chapter Markers


    [1:39] Stewart starts by giving an overview of his career in biotech manufacturing.


    [3:54] Stewart talks about optimizing processes to control product quality in the early stages of the drug development process.


    [7:27] Ross asks Stewart to speak more about how to optimize and minimize the variability of assays to increase confidence in clinical results.


    [12:11] Stewart explains the importance of understanding how assay variability influences results and how to handle this when making business decisions.


    [14:13] Ross and Stewart discuss the issue of assay variability in relation to regulatory scrutiny.


    [17:07] Stewart walks through the benefits of using statistical simulation tools to better understand how an assay performs.


    [19:49] Stewart highlights the importance of understanding at which stage sampling has the greatest impact on decreasing variability.


    [22:09] Stewart answers the question of how monitoring processes change when moving to full production scale.


    [26:39] Stewart outlines stability modeling and the importance of stability programs in biotech manufacturing.


    [30:38] Stewart shares his views on the biggest challenges that biotech manufacturers face around data.


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 38 min
    Delivering on the Promise of Electronic Lab Notebooks with SciNote


    This week, we are pleased to welcome to the Data in Biotech podcast Brendan McCorkle, CEO of SciNote, a cloud-based ELN (Electronic Lab Notebook) with lab inventory, compliance, and team management tools.


    In this episode, we discuss how the priorities of ‘Research’ and ‘Development’ differ when it comes to the data they expect and how they use it, and how ELNs can work to support both functions by balancing structure and flexibility. We explore the challenges of developing an ELN that serves the needs and workflows of all stakeholders, making the wider business case for ELNs, and why, in the lab, paper and Excel need to be a thing of the past.


    Brendan is upfront about the data challenges faced by biotechs, for which no single-vendor solution exists. He emphasizes the importance of industry collaboration and software vendors’ role in following the principles of FAIR data. We also get his take on the future of ELNs and how they can leverage AI and ML.


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in the life sciences.


    ---


    Chapter Markers


    [1:40] Brendan gives a whistlestop tour of his career and the path to setting up SciNote.


    [4:20] Brendan discusses the principles of FAIR data and the challenges of adhering to them in the biotech industry.


    [6:15] Brendan talks about the need to balance flexibility and structure when collecting R&D data.


    [13:34] Brendan highlights the challenge of catering to diverse workflows, even within the same company.


    [16:05] Brendan emphasizes the importance of metadata and how vendors, like SciNote, can help collect it with flexible tools for data entry and post-processing.


    [18:59] Ross and Brendan discuss how to create an ELN that serves all stakeholders within the organization without imposing creativity constraints on research scientists.


    [21:57] Brendan highlights how benefits like reducing losses and improving efficiency form part of the business case for a tool like SciNote.


    [24:25] Brendan shares real-world examples of how companies integrate SciNote into their organizations and the need to work with other systems and software.


    [34:01] Ross asks Brendan for his advice to biotech companies considering implementing ELNs in their workflows.


    [39:10] Brendan gives his take on incorporating ML and AI within SciNote.


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 43 min
    Developing Future Sustainable Materials Using AI with Cambrium


    This week, we are pleased to be joined on the Data in Biotech Podcast by Pierre Salvy, who recently became the CTO at Cambrium, and his colleague Lucile Bonnin, Head of Research & Development at Cambrium. 


    As part of the Cambrium team behind NovaColl™, the first micro-molecular and skin-identical vegan collagen to market, Pierre and Lucile share their practical experiences of using AI to support protein design.


    We ask why Cambrium, as a molecular design organization, decided to focus on the cosmetics industry and dig into the factors that have driven its success. From developing a protein programming language to the challenges of collecting and utilizing lab data, Pierre and Lucile give a detailed look under the hood of a company using data and AI to accelerate its journey from start-up to scale-up.


    They also talk to host Ross Katz about the benefits of working as a cloud-native company from day zero, de-risking the process of scaling, and opportunities for new biomaterials.   


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in the life sciences.


    ---


    Chapter Markers


    [1:34] Pierre and Lucile make a quick introduction and give an overview of Cambrium’s work using AI to design proteins with the aim of developing sustainable materials.


    [4:00] Lucile introduces NovaColl™, and Pierre elaborates on the process of bringing Cambrium’s first product to market.


    [7:37] Ross asks Pierre and Lucile to give an overview of the considerations and challenges of protein design.


    [11:01] Pierre and Lucile explain how Cambrium works with potential customers to design specific proteins that meet or exceed their expectations.


    [12:49] Ross and Pierre discuss how Cambrium approached developing the data systems it needed to explore the protein landscape and how the team optimized the lab set-up.


    [18:04] Pierre discusses the protein programming language developed at Cambrium.


    [21:24] Lucile and Pierre talk through the development of the data platform at Cambrium as the company has scaled and the value of being cloud-native.


    [24:12] Lucile and Pierre discuss how they approached designing the manufacturing process from scratch and how to reduce risk at every stage, especially while scaling up.  


    [31:44] The conversation moves to how Cambrium will use the processes and data platform developed with NovaColl™ to explore opportunities for the development of new biomaterials.


    [34:42] Pierre gives advice on how start-ups can be smarter when selecting an area of focus.


    [36:27] Lucile emphasizes the importance of getting cross-organizational buy-in to ensure successful data capture. 


    [39:01] Pierre and Lucile recommend resources that may be of interest to listeners seeking more information on the topics covered. 


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 39 min
    Building Strong Data Foundations for Biotech Startups with Jacob Oppenheim


    This week, we are pleased to welcome Jacob Oppenheim, Entrepreneur in Residence at Digitalis Ventures, a venture capital firm that invests in solutions to complex problems in human and animal health.


    Jacob sat down with Ross to discuss the importance of establishing strong data foundations in biotech companies and how to approach the task. We explore the challenges biotech organizations face with existing tools: what are the limitations, and why are current data tools and systems not yet geared toward helping scientists themselves extract meaningful insights from the data?


    We also get Jacob’s take on AI in the biotech space and what is needed for it to reach its full potential, plus some of the opportunities new modeling capabilities will allow scientists to explore.


    Finally, we look at building a team, how to approach this within a start-up, and the role consultancies play in providing expertise and guidance to early-stage biotech companies.


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in the life sciences.


    ---


    Chapter Markers


    [1:08] Jacob gives a quick overview of his career to date and explains how he landed in his current role at Digitalis Ventures and what differentiates it as a venture fund.


    [7:42] Ross asks Jacob about the biggest challenges and opportunities facing data scientists, data teams, and start-ups more broadly.


    [9:56] Jacob talks about the limitations of existing data management tools within biotech companies. 


    [13:55] Jacob discusses what is needed as a foundation for AI tools to reach their potential.  


    [17:12] Jacob argues for the need for a unified data ecosystem and the benefits of a modular approach to tooling.


    [23:42] Jacob explains that biology has become more engineering-focused and how this allows data to guide drug development.


    [26:14] Ross and Jacob discuss the challenges of integrating data science and biotech teams, including cultural clashes and tooling conflicts. 


    [32:52] Jacob emphasizes the importance of consultancies in the biotech space, particularly for start-ups.


    [36:21] Ross asks Jacob which new modeling capabilities he is most excited about and how they will drive the industry forward.


    [38:45] Jacob shares his advice for scientists and entrepreneurs looking to start a biotech venture and recommends resources.


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 42 min
