19 episodes

Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in life sciences. 


Every two weeks, Ross Katz, Principal and Data Science Lead at CorrDyn, sits down with an expert from the world of biotechnology to understand how they use data science to solve technical challenges, streamline operations, and further innovation in their business. 


You can learn more about CorrDyn - an enterprise data specialist that enables excellent companies to make smarter strategic decisions - at www.corrdyn.com


    Improve Quality and Minimize Variability in Biotech Manufacturing with Stewart Fossceco


    This week, we are pleased to have Stewart Fossceco, Head of Non-Clinical and Diagnostics Statistics at Zoetis and an expert in pharmaceutical manufacturing, join us on the Data in Biotech podcast.


    We sat down with Stewart to discuss implementing and improving Quality Assurance (QA) processes at every stage of biotech manufacturing, from optimizing assay design and minimizing variability in early drug development to scaling up for full production. Stewart draws on his experience to discuss the importance of experimental design, how understanding variability data can inform business decisions, and the pitfalls of over-measuring. 


    Along with host Ross Katz, Stewart discusses the value of statistical simulations in mapping out processes, identifying sources of variability, and what this looks like in practice. They also explore the importance of drug stability modeling and how to approach it to ensure product quality beyond the manufacturing process.


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in life sciences.


    ---


    Chapter Markers


    [1:39] Stewart starts by giving an overview of his career in biotech manufacturing.


    [3:54] Stewart talks about optimizing processes to control product quality in the early stages of the drug development process.


    [7:27] Ross asks Stewart to speak more about how to optimize assays and minimize their variability to increase confidence in clinical results.


    [12:11] Stewart explains the importance of understanding how assay variability influences results and how to handle this when making business decisions.


    [14:13] Ross and Stewart discuss the issue of assay variability in relation to regulatory scrutiny.


    [17:07] Stewart walks through the benefits of using statistical simulation tools to better understand how an assay performs.


    [19:49] Stewart highlights the importance of understanding at which stage sampling has the greatest impact on decreasing variability.


    [22:09] Stewart answers the question of how monitoring processes change when moving to full production scale.


    [26:39] Stewart outlines stability modeling and the importance of stability programs in biotech manufacturing.


    [30:38] Stewart shares his views on the biggest challenges that biotech manufacturers face around data.


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 38 min
    Delivering on the Promise of Electronic Lab Notebooks with SciNote


    This week, we are pleased to welcome to the Data in Biotech podcast Brendan McCorkle, CEO of SciNote, a cloud-based ELN (Electronic Lab Notebook) with lab inventory, compliance, and team management tools.


    In this episode, we discuss how the priorities of ‘Research’ and ‘Development’ differ when it comes to the data they expect and how they use it, and how ELNs can work to support both functions by balancing structure and flexibility. We explore the challenges of developing an ELN that serves the needs and workflows of all stakeholders, making the wider business case for ELNs, and why, in the lab, paper and Excel need to be a thing of the past.


    Brendan is upfront about the data challenges faced by biotechs, for which no single-vendor solution exists. He emphasizes the importance of industry collaboration and software vendors’ role in following the principles of FAIR data. We also get his take on the future of ELNs and how they can leverage AI and ML.


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in life sciences.


    Chapter Markers


    [1:40] Brendan gives a whistle-stop tour of his career and the path to setting up SciNote.


    [4:20] Brendan discusses the principles of FAIR data and the challenges of adhering to them in the biotech industry.


    [6:15] Brendan talks about the need to balance flexibility and structure when collecting R&D data.


    [13:34] Brendan highlights the challenge of catering to diverse workflows, even within the same company.


    [16:05] Brendan emphasizes the importance of metadata and how vendors, like SciNote, can help collect it with flexible tools for data entry and post-processing.


    [18:59] Ross and Brendan discuss how to create an ELN that serves all stakeholders within the organization without imposing creativity constraints on research scientists.


    [21:57] Brendan highlights how benefits like loss reduction and improved efficiency form part of the business case for a tool like SciNote.


    [24:25] Brendan shares real-world examples of how companies integrate SciNote into their organizations and the need to work with other systems and software.


    [34:01] Ross asks Brendan for his advice to biotech companies considering implementing ELNs into their workflows.


    [39:10] Brendan gives his take on incorporating ML and AI within SciNote.


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 43 min
    Developing Future Sustainable Materials Using AI with Cambrium


    This week, we are pleased to be joined on the Data in Biotech Podcast by Pierre Salvy, who recently became the CTO at Cambrium, and his colleague Lucile Bonnin, Head of Research & Development at Cambrium. 


    As part of the Cambrium team behind NovaColl™, the first micro-molecular and skin-identical vegan collagen to market, Pierre and Lucile share their practical experiences of using AI to support protein design.


    We ask why Cambrium, as a molecular design organization, decided to focus on the cosmetics industry and dig into the factors that have driven its success. From developing a protein programming language to the challenges of collecting and utilizing lab data, Pierre and Lucile give a detailed look under the hood of a company using data and AI to accelerate its journey from start-up to scale-up.


    They also talk to host Ross Katz about the benefits of working as a cloud-native company from day zero, de-risking the process of scaling, and opportunities for new biomaterials.   


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in life sciences. 


    Chapter Markers


    [1:34] Pierre and Lucile make a quick introduction and give an overview of Cambrium’s work using AI to design proteins with the aim of developing sustainable materials.


    [4:00] Lucile introduces NovaColl™, and Pierre elaborates on the process of bringing Cambrium’s first product to market.


    [7:37] Ross asks Pierre and Lucile to give an overview of the considerations and challenges of protein design.


    [11:01] Pierre and Lucile explain how Cambrium works with potential customers to design specific proteins that meet or exceed their expectations.


    [12:49] Ross and Pierre discuss how Cambrium approached developing the data systems it needed to explore the protein landscape and how the team optimized the lab set-up.


    [18:04] Pierre discusses the protein programming language developed at Cambrium.


    [21:24] Lucile and Pierre talk through the development of the data platform at Cambrium as the company has scaled and the value of being cloud-native.


    [24:12] Lucile and Pierre discuss how they approached designing the manufacturing process from scratch and how to reduce risk at every stage, especially while scaling up.  


    [31:44] The conversation turns to how Cambrium will use the processes and data platform developed for NovaColl™ to explore opportunities for the development of new biomaterials. 


    [34:42] Pierre gives advice on how start-ups can be smarter when selecting an area of focus.


    [36:27] Lucile emphasizes the importance of getting cross-organizational buy-in to ensure successful data capture. 


    [39:01] Pierre and Lucile recommend resources that may be of interest to listeners seeking more information on the topics covered. 


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 39 min
    Building Strong Data Foundations for Biotech Startups with Jacob Oppenheim


    This week, we are pleased to welcome Jacob Oppenheim, Entrepreneur in Residence at Digitalis Ventures, a venture capital firm that invests in solutions to complex problems in human and animal health.


    Jacob sat down with Ross to discuss the importance of establishing strong data foundations in biotech companies and how to approach the task. We explore the challenges biotech organizations face with existing tools: what are the limitations, and why are current data tools and systems not yet geared toward helping scientists themselves extract meaningful insights from the data?


    We also get Jacob’s take on AI in the biotech space and what is needed for it to reach its full potential, plus some of the opportunities new modeling capabilities will allow scientists to explore.


    Finally, we look at the topic of building a team, how to approach this within a start-up, and the role consultancies play in providing expertise and guidance to early-stage biotech companies.


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in life sciences.


    ---


    Chapter Markers


    [1:08] Jacob gives a quick overview of his career to date, explaining how he landed in his current role at Digitalis Ventures and what differentiates it as a venture fund.


    [7:42] Ross asks Jacob about the biggest challenges and opportunities facing data scientists, data teams, and start-ups more broadly.


    [9:56] Jacob talks about the limitations of existing data management tools within biotech companies. 


    [13:55] Jacob discusses what is needed as a foundation for AI tools to reach their potential.  


    [17:12] Jacob argues for the need for a unified data ecosystem and the benefits of a modular approach to tooling.


    [23:42] Jacob explains how biology has become more engineering-focused and how this allows data to guide drug development.


    [26:14] Ross and Jacob discuss the challenges of integrating data science and biotech teams, including cultural clashes and tooling conflicts. 


    [32:52] Jacob emphasizes the importance of consultancies in the biotech space, particularly for start-ups.


    [36:21] Ross asks Jacob which new modeling capabilities he is most excited about and how they will drive the industry forward.  


    [38:45] Jacob shares his advice for scientists and entrepreneurs looking to start a biotech venture and recommends resources.


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 42 min
    How OmicSoft Is Facilitating In-Depth Exploration Among NGS Datasets


    This week's guest is Joseph Pearson, Global Product Manager of OmicSoft at QIAGEN, a global provider of sample-to-insight solutions that enable customers to gain valuable molecular insights.  


    During this episode, we dive into OmicSoft, a powerful NGS analysis suite that can quickly explore and compare 500,000 curated omics samples from disease-related studies. Joseph outlines the challenges of acquiring and analyzing NGS datasets, how customers can interact with OmicSoft data, and what he thinks of the build-versus-buy debate when selecting new bioinformatics tools.


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in life sciences.


    Chapter Markers


    [01:33] Joseph gives us a brief introduction to his career and how he got to the position he holds today. 


    [03:39] Ross asks Joseph about QIAGEN and how OmicSoft complements the existing range of products the company already provides. 


    [05:09] Joseph talks about the work that is going into their NGS datasets and how the company is extracting value from those datasets. 


    [06:09] Ross asks Joseph about the types of customers that use this solution. 


    [13:06] Joseph clarifies where the data underlying OmicSoft comes from.


    [19:29] Ross asks Joseph how the company approaches educating the customer.


    [22:44] Joseph explains the decision-making process that companies go through when deciding to either build or buy.


    [27:15] Ross asks Joseph about the biggest challenges or criticisms people have about the platform. 


    [31:07] Joseph explains how his biology background has shaped his view of the challenges he faces in his role in product management.


    [34:11] Joseph tells us where we can find out more about OmicSoft and QIAGEN.


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 34 min
    How Bayesian Optimization is Helping to Accelerate Innovation at Merck Group


    This week's guest is Wolfgang Halter, Head of Data Science and Bioinformatics at Merck Life Science, a leading global science and technology company. 


    Ross sat down with Wolfgang to discuss his work on the BayBE project, an open-source library built for Bayesian optimization. Throughout the episode, we learn how BayBE is used both for experimental design and as a means to accelerate innovation. The pair also discuss the benefits and challenges of Bayesian optimization and the need for standardized data models. Finally, Wolfgang shares some advice for scientists and engineers who are keen to get ahead in the industry. 


    You can access the GitHub repo mentioned in the episode at github.com/emdgroup/BayBE


    Data in Biotech is a fortnightly podcast exploring how companies leverage data to drive innovation in life sciences.


    ---


    Chapter Markers


    [1:32] Wolfgang gives us a whistle-stop tour of his career to date and explains the motivation behind pursuing a career in Data Science. 


    [2:35] Ross asks Wolfgang about Merck’s mission and the role the data science team is playing in helping the company achieve that mission. 


    [5:28] Wolfgang explains the work that is going into the BayBE project. 


    [13:23] Ross asks Wolfgang how Merck arranged their experimental campaigns in BayBE and how they garnered insights during the process. 


    [17:45] Wolfgang explains why the team developed BayBE as an open-source library.


    [19:25] Wolfgang shares some more details on how the data science team at Merck is using BayBE today.


    [20:42] Wolfgang shares some examples of the kinds of applications that the team is currently developing. 


    [21:54] Wolfgang shares how much time is saved on average as a result of adopting this approach. 


    [34:38] Ross asks Wolfgang how his engineering background informs his perspective on the problems facing biotech and R&D. 


    [36:57] Wolfgang gives us his advice for young scientists and engineers who are looking to learn more about biotech. 


    [38:24] Wolfgang provides us with a list of resources for those who want to find out more about Merck and the BayBE project. 


    ---


    Download our latest white paper on “Using Machine Learning to Implement Mid-Manufacture Quality Control in the Biotech Sector.”


    Visit this link: https://connect.corrdyn.com/biotech-ml

    • 39 min
