
This CHAOSS Community podcast features members who spent considerable time and effort to understand open source community health and how we can measure it through metrics, analytics, and software. We invite guests to this podcast to talk about how they use open source community health metrics and software in their own open source communities, companies, or foundations. This podcast fills the gap with open source community metric definitions and software on one side and their use on the other side.

CHAOSScast CHAOSS Project


    Episode 86: The Turing Institute: Using AI ethically with the power of Open Source


    Thank you to the folks at Sustain for providing the hosting account for CHAOSSCast!


    CHAOSScast – Episode 86


    In this episode of CHAOSScast, co-hosts Alice Sowerby and Dawn Foster welcome guests Aida Mehonic, Malvika Sharan, and Kirstie Whittaker from The Alan Turing Institute. The discussion begins by delving into the Institute’s strategic vision, focused on using data science and AI to address global challenges in environment, health, and security. They examine the role of open source contributions in enhancing the ethical, accessible, and impactful uses of AI. The episode highlights various projects, such as The Turing Way, and the importance of community building, inclusive research practices, and the ethical considerations of AI. They also discuss the integration of CHAOSS metrics in their work and explore future projects and initiatives at The Alan Turing Institute. Press download now to hear more!


    [00:02:58] Kirstie gives an overview of The Turing Institute’s strategic vision and explains the three missions.


    [00:06:22] Aida talks about the importance of communicating with organizations to align on a shared mission and the impact and value for money of publicly funded projects.


    [00:08:38] Malvika brings in the stakeholder perspective, ensuring that users, communities, and patients have a say in AI development and empowering educators to incorporate AI. She also talks about working across different projects, like Data Science Without Borders and BridgeAI, to accelerate AI’s impact on health and SMEs.


    [00:11:02] The conversation switches to embracing ethical AI usage and encouraging others to do the same. Kirstie details the ethical components of AI using the SAFE-D approach: Safety and sustainability, Accountability, Fairness and non-discrimination, Explainability and transparency, and Data quality, integrity, protection, and privacy.


    [00:17:17] Malvika talks about the importance of considering the societal impact of research at The Turing Institute. She highlights the differences between the EU AI Act and the open source community approach and emphasizes that users should know their rights regarding data collection and sharing.


    [00:19:49] Aida tells us about a case study on A/B Street, an open source street planning tool. They partnered with Bristol City Council and used this tool to facilitate community involvement in urban planning decisions.


    [00:23:52] Aida mentions that conversations she’s been in at Turing have focused on democratizing technology to reach a broader set of end users.


    [00:24:14] Dawn loves Turing’s collaborative approach and acknowledges the challenges in making AI and data science intuitive for everyone.


    [00:24:54] Kirstie discusses the difficulty of meaningful stakeholder engagement. She talks about the importance of being willing to pivot project goals based on community feedback.


    [00:26:51] Alice brings up CHAOSS metrics and inquires how they fit into The Turing Institute’s work. Malvika explains that CHAOSS metrics are among the only metrics that help them understand equity, diversity, and inclusion (EDI) in community health.


    [00:31:00] Dawn highlights the need to combine quantitative metrics with qualitative research. Kirstie shares that data scientists often don’t see their work as part of open source or community-led projects. Aida comments on using CHAOSS metrics to justify the impact of open source research funded by taxpayer money.


    [00:36:05] Dawn asks about the future focus areas for The Turing Institute. Kirstie mentions the BridgeAI Initiative to support SMEs in the UK in leveraging data and the expansion of The Turing Way Practitioner Hub to support experts in organizations and foster global knowledge exchange.


    [00:38:28] Aida shares her excitement about a potential incubator at Turing focused on pathways to impact for research. Malvika shares her excitement for the professionalization and recognition of various data science roles.

    • 44 min
    Episode 85: Introducing CHAOSS Practitioner Guides: #1 Responsiveness


    Thank you to the folks at Sustain for providing the hosting account for CHAOSSCast!


    CHAOSScast – Episode 85


    In this episode of CHAOSScast, host Alice Sowerby is joined by Dawn Foster and special guest Luis Cañas-Díaz from Bitergia. Today, they delve into the Practitioner Guide series created by CHAOSS, particularly focusing on the Responsiveness Guide authored by Dawn. The conversation highlights the challenges people face in interpreting data and metrics within their projects and how the guides aim to provide actionable insights for improvement. Additionally, they touch on the potential risks of misinterpreting metrics and stress the importance of context and direct involvement from project teams to effectively address responsiveness issues. The episode also covers future directions for the guide series and ways the community can contribute and provide feedback. Press download to hear more!


    [00:02:08] Alice asks Dawn to explain the newly launched Practitioner Guide series by CHAOSS. Dawn elaborates on the Practitioner Guides, addressing the community’s struggle with data interpretation and the initiative to provide guidance on metric usage for project improvements.


    [00:05:02] Luis comments on the utility of the Practitioner Guides, emphasizing the need to focus on goals over metrics to avoid data overload.


    [00:05:54] Dawn mentions the feedback received on the guides, particularly from Luis and others in various OSPO working groups.


    [00:07:11] The discussion shifts to the Guide on Responsiveness, with Dawn identifying key metrics like time to first response, time to close, and change request closure ratio.


    [00:08:37] Luis shares the significance of responsiveness metrics in community growth and ensuring fair treatment across organizational contributors.


    [00:09:54] Dawn details how the guides suggest making improvements, noting the importance of understanding context, such as seasonal variations or event-related disruptions, in evaluating responsiveness.


    [00:11:01] We hear some practical tips from Dawn on improving responsiveness, like using templates for contributions to reduce maintainers’ review times and discussing time allocation with maintainers to offload non-critical tasks.


    [00:13:47] Luis emphasizes that metrics highlight things that are happening but require deeper investigation to understand the underlying issues.


    [00:15:05] Dawn discusses strategies to improve project responsiveness, such as recruiting more maintainers and contributors. She warns against simply pressuring existing maintainers to increase responsiveness, which can lead to burnout and does not address the root cause of delays.


    [00:17:33] Luis shares experiences from conversations with managers about the pressures of responding to community needs. He warns against using metrics to measure productivity, as it can lead people to manipulate their behavior to look good on metrics rather than genuinely improving their work. He also tells us about a book he read and liked, “The Tyranny of Metrics.”


    [00:19:42] Luis explains the critical role of responsiveness in onboarding and retaining new community members, emphasizing the importance of prompt feedback to make newcomers feel valued.


    [00:20:26] Dawn stresses the impact of responsiveness on new contributors, noting that delays or lack of feedback can permanently discourage them from participating in the project.


    [00:21:38] Dawn advises patience and persistence in improving responsiveness, emphasizing that it is a long-term effort.


    [00:22:50] Alice inquires about the future directions for the Practitioner Guides series, and Dawn reveals plans for additional guides on topics like software development practices and community activity and encourages community involvement in creating new guidelines. She discusses possibilities for customizing guides for specific organizational needs, such as what Comcast has done.


    [00:26:32] Luis suggests ex

    • 31 min
    Episode 84: Community Viability - how Verizon thinks about OSS risk


    Thank you to the folks at Sustain for providing the hosting account for CHAOSSCast!


    CHAOSScast – Episode 84


    In this episode of CHAOSScast, Dawn Foster, Matt Germonprez, Alice Sowerby, and guest Gary White, Principal Engineer at Verizon’s OSPO office, delve into the world of viability metrics models developed for assessing the risks associated with using open source software components. Gary explains the creation process of these models, their application within Verizon for software evaluation, and the significance of engaging with the open source community to enhance project viability. The conversations also explore the challenges and considerations in deploying these metrics within organizations, emphasizing the blend of policy enforcement and cultural influence to manage open source software dependencies effectively. Press download now to hear more!


    [00:02:30] Dawn asks Gary to elaborate on Verizon’s choice of the viability metrics models. He explains the creation of the first four metrics models for assessing risks in open source software components, and the development of a fifth model to simplify the original four. Also, he explains the importance of being quantitative about software library choices, influenced by a research paper from Carnegie Mellon and existing CHAOSS metrics.


    [00:05:16] Gary mentions using Augur for metrics collection at Verizon and the benefits of tracking with CHAOSS tools.


    [00:06:27] Matt asks Gary to provide an example of a metric used in the governance model, and he talks about the Libyears metric, which helps understand the total years behind all dependencies of a component, reflecting the risk associated with aging dependencies.


    [00:07:50] Alice wonders about the “happy region” for the Libyears metric and its implications on risk assessment.


    [00:09:25] Dawn asks Gary to discuss how these metrics are utilized at Verizon. He describes using these metrics to evaluate the viability of software at Verizon, including different use cases and dependency risks.


    [00:11:39] Alice explores how Gary considers the context in which components are used when calculating risk.


    [00:13:24] Matt asks about the process of engaging with the metrics models within the organization. Gary explains that the approach depends on several factors such as severity of finding, buy-in from the organization, and the organizational structure of the OSPO, and details the use of specific resources like the “endoflife.date.”


    [00:18:07] Gary outlines how Verizon integrates risk management frameworks with organizational tools like dashboards to disseminate collected data and foster buy-in for automated systems.


    [00:21:16] Alice asks Gary for advice on engaging with open source communities when viability metrics indicate potential issues. Gary highlights the importance of community and governance metrics in driving organizational support for critical open source projects.


    [00:22:43] Gary shares his experience in the CHAOSS group, emphasizing the value of diverse opinions in developing and validating viability metrics models.


    [00:24:33] Dawn highlights the significance of the discussions on viability and risk in the OSPO working group, emphasizing how these are critical concerns for OSPO leaders.


    [00:25:24] Dawn inquires about how Verizon uses CHAOSS metrics beyond viability assessment, particularly in open source management. Gary discusses leveraging CHAOSS metrics across various teams to judge component use and risk profiles and explains Verizon’s approach to using metrics involving both an educational component and a policy component.


    [00:27:33] Gary talks about ongoing efforts to integrate and optimize the Augur system at Verizon, acknowledges Sean Goggins for his assistance, expresses a desire to contribute back to the community, and mentions exploring new metrics to trace and predict significant events in the open source ecosystem.


    Value Adds (Picks) of the week:

    • 34 min
    Episode 83: Metrics for Organizational and Digital Infrastructure with Edward Vielmetti


    Thank you to the folks at Sustain for providing the hosting account for CHAOSSCast!


    CHAOSScast – Episode 83


    In this episode of CHAOSScast, Georg and Dawn chat with guest Edward Vielmetti, Developer Partner Manager at Equinix, where he oversees the Open Source Partner Program. Today, they delve into the significance of measuring open source community health using CHAOSS metrics. Edward discusses the importance of providing infrastructure support to open source projects and how Equinix uses CHAOSS metrics to evaluate project health and manage resources efficiently. The discussion also covers the challenges of maintaining open source project health, including governance, code quality, and resources, with insights into predictive metrics and the impact of corporate involvement in open source communities. Press download now to hear more!


    [00:01:36] Edward introduces himself, tells us what he does, provides a background on Equinix, and talks about their dedicated cloud offering and support for open source projects. He discusses the absence of formal CHAOSS metrics at Equinix but mentions they compare them with internal considerations to ensure project health.


    [00:06:24] Edward talks about external factors like internal conflicts or external shocks to the system and the importance of being a stabilizing force.


    [00:09:59] Georg outlines three categories of project health: community activity, code quality, and resources.


    [00:10:58] Edward talks about using spend as a top-line metric for resource adequacy and the importance of rapid build and test cycles for software projects.


    [00:15:33] Georg acknowledges Edward’s comprehensive view, noting the need for specialized infrastructure beyond what hosting platforms like GitHub and GitLab offer. Edward emphasizes that developing certain kinds of software requires direct access to hardware rather than virtualized environments.


    [00:19:06] Dawn brings the conversation back to CHAOSS, mentioning context working groups and Edward’s active participation in the corporate OSPO working group. Edward talks about the challenges at Equinix in forming a formal OSPO and the value of sharing and learning from peers through CHAOSS.


    [00:22:33] Dawn appreciated the diversity of companies in the CHAOSS OSPO working group and the broad exchange of ideas. Edward reflects on his long history with open source, noting the evolution and professionalization of the industry.


    [00:25:32] Georg asks about the future of open source and CHAOSS’s potential role, and Edward mentions the trend of open source projects changing control for financial gain and discusses how CHAOSS could help predict or quickly identify such changes. He proposes the collection of certain metrics, such as the number of legal notices a project receives, as indicators of the project’s environment.


    [00:29:44] Edward shares a story, without taking sides, about Terraform relicensing by HashiCorp and the subsequent forks of Terraform, focusing on the OpenTofu fork and the licensing issues around patching from differently licensed software.


    [00:34:05] Georg discusses observing early risk indicators in projects, such as when a single company’s influence increases, potentially raising the risk of unilateral changes, and he expresses a desire for a predictive model for open source project trajectories.


    [00:35:44] Dawn calls such predictive modeling difficult due to the rarity of events and stresses the importance of community participation for early detection of issues.


    [00:37:53] Georg brings up the Linkerd project’s approach to engaging with the vendor ecosystem and the changes in their release strategy to encourage commercial support, and Edward compares this with CentOS’s transition to CentOS Stream.


    [00:41:48] Georg reiterates the value of participation in open source to be aware of and potentially influence project developments.


    Value Adds (Picks) of the week:



    [00:42:29] Georg’s pick is

    • 45 min
    Episode 82: The AI Conundrum: Implications for OSPOs


    Thank you to the folks at Sustain for providing the hosting account for CHAOSSCast!


    CHAOSScast – Episode 82


    In this episode of CHAOSScast, host Dawn Foster brings together Matt Germonprez, Brian Proffitt, and Ashley Wolf to discuss the implications of Artificial Intelligence (AI) on Open Source Program Offices (OSPOs), including policy considerations, the potential for AI-driven contributions to create workload for maintainers, and the quality of contributions. They also touch on the use of AI internally within companies versus contributing back to the open source community, the importance of distinguishing between human and AI contributions, and the potential benefits and challenges AI introduces to open source project health and community metrics. The conversation strikes a balance between optimism for AI’s benefits and caution for its governance, leaving us to ponder the future of open source in an AI-integrated world. Press download to hear more!


    [00:03:20] The discussion begins on the role of OSPOs in AI policy making, and Ashley emphasizes the importance of OSPOs in providing guidance on generative AI tools usage and contributions within their organizations.


    [00:05:17] Brian observes a conservative reflex towards AI in OSPOs, noting issues around copyright, trust, and the status of AI as not truly open source.


    [00:07:10] Matt inquires about aligning different policies from various organizations, like GitHub and Red Hat, with those from the Linux Foundation and Apache Software Foundation regarding generative AI. Brian speaks about Red Hat’s approach to first figure out their policies before seeking alignment with others.


    [00:06:45] Ashley appreciates the publicly available AI policies from the Apache and Linux Foundations, noting that GitHub’s policies have been informed by long-term thinking and community feedback.


    [00:10:34] Dawn asks about potential internal conflict for GitHub employees given different AI policies at GitHub and other organizations like CNCF and Apache.


    [00:12:32] Ashley and Brian talk about what they see as the benefits of AI for OSPOs, and how AI can help scale OSPO support and act as a sounding board for new ideas.


    [00:15:32] Matt proposes a scenario where generative AI might increase individual contributions to high-profile projects like Kubernetes for personal gain, potentially burdening maintainers.


    [00:18:45] Dawn mentions Daniel Stenberg of cURL, who has seen an influx of low-quality issues from AI models. Ashley points out the problem of “drive-by contributions” and spam, particularly during events like Hacktoberfest, and emphasizes the role of OSPOs in educating about responsible contributions. Brian discusses potential issues with AI contributions leading to homogenization and the increased risk of widespread security vulnerabilities.


    [00:22:33] Matt raises another scenario questioning if companies might use generative AI internally as an alternative to open source for smaller issues without contributing back to the community. Ashley states 92% of developers are using AI code generation tools and cautions against creating code in a vacuum, and Brian talks about Red Hat’s approach.


    [00:27:18] Dawn discusses the impact of generative AI on companies that are primarily consumers of open source, rarely contributing back, questioning if they might start using AI to make changes instead of contributing. Brian suggests there might be a mixed impact and Ashley optimistically hopes the time saved using AI tools will be redirected to contribute back to open source.


    [00:29:49] Brian discusses the state of open source AI, highlighting the lack of a formal definition and ongoing efforts by the OSI and other groups to establish one, and recommends a fascinating article he read from Knowing Machines. Ashley emphasizes the importance of not misusing the term open source for AI until a formal definition is established.


    [00:32:42] Matt inquires how me

    • 39 min
    Episode 81: Managing Federal CHAOSS at CMS.gov


    Thank you to the folks at Sustain for providing the hosting account for CHAOSSCast!


    CHAOSScast – Episode 81


    On today’s episode of CHAOSScast, we focus on the experiences and initiatives of the Open Source Program Office at the U.S. Centers for Medicare and Medicaid Services (CMS). Host Dawn Foster is joined by Sean Goggins along with guests Remy DeCausemaker, Natalia Luzuriaga, Isaac Milarsky, and Aayat Ali, all from various backgrounds within the CMS, who share insights into their efforts in maintaining and promoting an open source culture within federal services. Key discussion points include the launch of the CMS’s first open source program office, the development of a maturity model framework to evaluate open source projects, the creation of tools such as Repo Scaffolder and Duplifier to support open source practices, and efforts towards open source software security. This episode emphasizes the distinct aspects of open source work in government settings compared to the private sector and highlights upcoming presentations at conferences. Download this episode now to hear more!


    [00:02:21] Dawn asks about the team’s work at the U.S. Centers for Medicare and Medicaid Services. We start with Remy, who explains the launch of the first open source program office at a federal agency in the U.S. and details CMS’s mission to improve healthcare experience for over 150 million people and the role of the digital service within CMS.


    [00:05:36] Natalia discusses the maturity model framework developed to assess the open source maturity level of projects. She describes a “Repo Scaffolder” tool created in collaboration with U.S. Digital Response to help projects align with the maturity model, and she speaks about additional features for public repositories to aid in development.


    [00:10:51] Isaac takes over, explaining how they use Augur metrics and “Nadia labeling” to categorize projects and encourage the adoption of their maturity model. He details a metrics website that provides visual representations of project health and activity and introduces “Duplifier,” a deduplication tool for healthcare data, which uses an open source library called Splink.


    [00:15:14] Sean inquires how they actualize user needs in metrics visualization and about the process that informs the creation of these visual metrics. Isaac addresses front-end design aspects of metric visualization and the importance of making the metrics understandable at a glance. Natalia emphasizes designing for both technical and non-technical stakeholders, ensuring metrics are clear and understandable.


    [00:17:44] Aayat discusses her role in strategy development and the creation of a CMS OSPO guide. She emphasizes advocacy within CMS for open source and plans to conduct workshops and usability testing to determine which metrics are most valuable to stakeholders.


    [00:19:23] Remy talks about consulting with the chief information security officer and the chief information officer for internal metric priorities and engaging with an external OSPO metrics working group convened by CHAOSS for broader insights.


    [00:20:47] Dawn asks Remy for more details on the differences between government engagement in open source and corporate environments. Remy describes the early journey of OSPOs at the federal level and contrasts it with his private sector experience.


    [00:25:18] Sean asks what success would look like a year from now for the OSPO group’s work. Remy acknowledges the limited four-year term for digital service members, emphasizing the urgency to execute and make an impact within the next year. He highlights the transformative impact of Isaac and Natalia’s entrance into the program and the successful shipping of the metrics website, a deduplication tool, and other repositories.


    [00:27:50] Isaac envisions success as propagating maturity models and open source standards throughout the government, demonstrating value

    • 40 min
