16 min

Ep 32: Best Practices - Pipeline Development, Part Two
The Bioinformatics Lab

    • Science

PHA4GE Ten Best Practices for Public Health Bioinformatics Pipelines:
https://github.com/pha4ge/public-health-pipeline-best-practices/blob/main/docs/pipeline-best-practices.md

Summary

In this episode, Kevin Libuit and Andrew Page discuss the PHA4GE ten best practices for public health pipeline development. They start with the use of common file formats and the importance of not reinventing the wheel, highlighting the benefits of standard formats and the availability of parsers across languages. They then cover software testing, including automated testing and integrating tests with Docker containers. They stress the need for accessible benchmark or validation data sets and for clearly stated reference data requirements. They close with the value of hiring bioinformaticians and the documentation practices a pipeline should follow.
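The common-file-formats point can be illustrated with a short sketch: rather than inventing a custom input manifest, a pipeline can accept a plain TSV samplesheet and lean on a stock parser. This is an illustrative example, not from the episode; the sample IDs, column names, and file paths are hypothetical, and it uses only Python's standard-library `csv` module:

```python
import csv
import io

# A hypothetical pipeline samplesheet in plain TSV: spreadsheet tools,
# shell utilities, and downstream programs can all read and write it.
samplesheet = """sample_id\tr1\tr2
SAMN001\treads/SAMN001_R1.fastq.gz\treads/SAMN001_R2.fastq.gz
SAMN002\treads/SAMN002_R1.fastq.gz\treads/SAMN002_R2.fastq.gz
"""

# csv.DictReader handles the delimiter and header row, so the pipeline
# never needs a hand-rolled parser for its input manifest.
rows = list(csv.DictReader(io.StringIO(samplesheet), delimiter="\t"))

for row in rows:
    print(row["sample_id"], row["r1"])
```

Because the format is standard, any language with a CSV/TSV library can consume the same samplesheet without pipeline-specific parsing code.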

Takeaways

Use common file formats to avoid reinventing the wheel and enable compatibility with other programs.
Implement software testing, including automated testing, to ensure functionality and identify bugs.
Provide benchmark or validation data sets to allow users to compare and evaluate the performance of the pipeline.
Consider the reference data requirements and ensure accessibility to curated databases.
Hire bioinformaticians with domain expertise to navigate the complexities of pipeline development.
Follow documentation practices, including communication of authorship, pipeline maintenance statements, and community guidelines for contribution and support.
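As a minimal sketch of the automated-testing takeaway, a pipeline's helper functions can ship with unit tests that CI (or a test stage baked into the Docker image) runs on every change. The `gc_content` function below is a hypothetical example, not from the episode, using only Python's built-in `unittest`:

```python
import unittest

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a nucleotide sequence (case-insensitive)."""
    seq = seq.upper()
    if not seq:
        raise ValueError("empty sequence")
    return (seq.count("G") + seq.count("C")) / len(seq)

class TestGCContent(unittest.TestCase):
    def test_known_values(self):
        # Fixed inputs with hand-computed expected outputs catch regressions.
        self.assertEqual(gc_content("GGCC"), 1.0)
        self.assertEqual(gc_content("atat"), 0.0)
        self.assertAlmostEqual(gc_content("ATGC"), 0.5)

    def test_empty_sequence_rejected(self):
        # Edge cases should fail loudly rather than return nonsense.
        with self.assertRaises(ValueError):
            gc_content("")

if __name__ == "__main__":
    # exit=False lets the suite run inside a larger script or notebook.
    unittest.main(argv=["gc_tests"], exit=False)
```

Running the same suite inside the Docker container that ships the pipeline verifies the packaged environment, not just the developer's machine.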


