Knowledge Graph Insights

Larry Swanson

Interviews with experts on semantic technology, ontology design and engineering, linked data, and the semantic web.

Episodes

  1. August 20

    Casey Hart: The Philosophical Foundations of Ontology Practice – Episode 38

    Casey Hart

    Ontology engineering has its roots in the idea of ontology as defined by classical philosophers. Casey Hart sees many other connections between professional ontology practice and the academic discipline of philosophy and shows how concepts like epistemology, metaphysics, and rhetoric are relevant to both knowledge graphs and AI technology in general.

    We talked about:

    - his work as a lead ontologist at Ford and as an ontology consultant
    - his academic background in philosophy
    - the variety of pathways into ontology practice
    - the philosophical principles like metaphysics, epistemology, and logic that inform the practice of ontology
    - his history with the Cyc project and employment at Cycorp
    - how he re-uses classes like "category" and similar concepts from upper ontologies like gist (see the sketch after this episode's transcript)
    - his definition of "AI" - including his assertion that we should use the term to talk about a practice, not a particular technology
    - his reminder that ontologies are models and, like all models, can oversimplify reality

    Casey's bio

    Casey Hart is the lead ontologist for Ford, runs an ontology consultancy, and pilots a growing YouTube channel. He is enthusiastic about philosophy and ontology evangelism. After earning his PhD in philosophy from the University of Wisconsin-Madison (specializing in epistemology and the philosophy of science), he found himself in the private sector at Cycorp. Over his professional career, he has worked in several domains: healthcare, oil & gas, automotive, climate science, agriculture, and retail, among others. Casey believes strongly that ontology should be fun and accessible, should resemble what is being modelled, and should be just as complex as it needs to be. He lives in the Pacific Northwest with his wife and three daughters and a few farm animals.

    Connect with Casey online

    - LinkedIn
    - ontologyexplained at gmail dot com
    - Ontology Explained YouTube channel

    Video

    Here’s the video version of our conversation: https://youtu.be/siqwNncPPBw

    Podcast intro transcript

    This is the Knowledge Graph Insights podcast, episode number 38. When the subject of philosophy comes up in relation to ontology practice, it's typically cited as the origin of the term, and then the subject is dropped. Casey Hart sees many other connections between ontology practice and its philosophical roots. In addition to logic as the foundation of OWL, he shows how philosophy concepts like epistemology, metaphysics, and rhetoric are relevant to both knowledge graphs and AI technology in general.

    Interview transcript

    Larry: Hi, everyone. Welcome to episode number 38 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Casey Hart. Casey has a really cool YouTube channel on the philosophy behind ontology engineering and ontology practice. Casey is currently an ontologist at Ford, the motor car company. So welcome, Casey, tell the folks a little bit more about what you're up to these days.

    Casey: Hi. Thanks, Larry. I'm super excited to be here. I've listened to the podcast, and man, your intro sounds so smooth. I was like, "I wonder how many edits that takes." No, you just fire them off, that's beautiful.

    Casey: Yeah, so like you said, these days I'm the ontologist at Ford, so building out data models for sensor data and vehicle information, all those sorts of fun things. I am also working as a consultant. I've got a couple of different startup healthcare companies and some cybersecurity stuff, little things around the edge. I love evangelizing ontology, talking about it and thinking about it. And as you mentioned for the YouTube channel, that's been my creative outlet. My background is in philosophy and I was interested in, I got my PhD in philosophy, I was going to teach it. You write lots of papers, those sorts of things, and I miss that to some extent getting out into industry, and that's been my way back in to, all right, come up with an idea,
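    A rough illustration of the upper-ontology reuse Casey describes: the sketch below uses Python and rdflib to declare a domain class as a specialization of gist's Category class. The vehicle-domain names are hypothetical, and the gist namespace URI may differ by gist version; this is an illustration of the pattern, not Casey's actual model.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Assumed namespaces: the gist URI follows the w3id pattern (check your gist
# version); the vehicle namespace is purely hypothetical.
GIST = Namespace("https://w3id.org/semanticarts/ns/gist/")
EX = Namespace("https://example.org/vehicle/")

g = Graph()
g.bind("gist", GIST)
g.bind("ex", EX)

# Declare a domain class as a subclass of an upper-ontology class instead of
# inventing a parallel hierarchy from scratch.
g.add((EX.SensorReadingCategory, RDF.type, OWL.Class))
g.add((EX.SensorReadingCategory, RDFS.subClassOf, GIST.Category))
g.add((EX.SensorReadingCategory, RDFS.label, Literal("Sensor Reading Category")))

print(g.serialize(format="turtle"))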

    39 min
  2. August 4

    Chris Mungall: Collaborative Knowledge Graphs in the Life Sciences – Episode 37

    Chris Mungall

    Capturing knowledge in the life sciences is a huge undertaking. The scope of the field extends from the atomic level up to planetary-scale ecosystems, and a wide variety of disciplines collaborate on the research. Chris Mungall and his colleagues at the Berkeley Lab tackle this knowledge-management challenge with well-honed collaborative methods and AI-augmented computational tooling that streamlines the organization of these precious scientific discoveries.

    We talked about:

    - his biosciences and genetics work at the Berkeley Lab
    - how the complexity and the volume of biological data he works with led to his use of knowledge graphs
    - his early background in AI
    - his contributions to the gene ontology
    - the unique role of bio-curators, non-semantic-tech biologists, in the biological ontology community
    - the diverse range of collaborators involved in building knowledge graphs in the life sciences
    - the variety of collaborative working styles that groups of bio-curators and ontologists have created
    - some key lessons learned in his long history of working on large-scale, collaborative ontologies, key among them meeting people where they are
    - some of the facilitation methods used in his work, tools like GitHub, for example
    - his group's decision early on to commit to version tracking, making change-tracking an entity in their technical infrastructure (see the sketch after this episode's transcript)
    - how he surfaces and manages the tacit assumptions that diverse collaborators bring to ontology projects
    - how he's using AI and agentic technology in his ontology practice
    - how their decision to adopt versioning early on has enabled them to more easily develop benchmarks and evaluations
    - some of the successes he's had using AI in his knowledge graph work, for example, code refactoring, provenance tracking, and repairing broken links

    Chris's bio

    Chris Mungall is Department Head of Biosystems Data Science at Lawrence Berkeley National Laboratory. His research interests center around the capture, computational integration, and dissemination of biological research data, and the development of methods for using this data to elucidate biological mechanisms underpinning the health of humans and of the planet. He is particularly interested in developing and applying knowledge-based AI methods, especially Knowledge Graphs (KGs), as an approach for integrating and reasoning over multiple types of data. Dr. Mungall and his team have led the creation of key biological ontologies for the integration of resources covering gene function, anatomy, phenotypes, and the environment. He is a principal investigator on major projects such as the Gene Ontology (GO) Consortium, the Monarch Initiative, the NCATS Biomedical Data Translator, and the National Microbiome Data Collaborative project.

    Connect with Chris online

    - LinkedIn
    - Berkeley Lab

    Video

    Here’s the video version of our conversation: https://youtu.be/HMXKFQgjo5E

    Podcast intro transcript

    This is the Knowledge Graph Insights podcast, episode number 37. The span of the life sciences extends from the atomic level up to planetary ecosystems. Combine this scale and complexity with the variety of collaborators who manage information about the field, and you end up with a huge knowledge-management challenge. Chris Mungall and his colleagues have developed collaborative methods and computational tooling that enable the construction of ontologies and knowledge graphs that capture this crucial scientific knowledge.

    Interview transcript

    Larry: Hi everyone. Welcome to episode number 37 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Chris Mungall. Chris is a computational scientist working in the biosciences at the Lawrence Berkeley National Laboratory. Many people just call it the Berkeley Lab. He's the principal investigator in a group there, has his own lab working on a bunch of interesting stuff, which we're going to talk about today.
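    One way to picture Chris's point about making change-tracking an entity in the infrastructure: the sketch below models an ontology edit as a first-class node using the W3C PROV-O vocabulary. This is an assumed illustration, not the GO consortium's actual schema; the edit ID, curator, and example.org namespace are hypothetical, and only GO_0008150 (biological_process) is a real Gene Ontology term.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import PROV, RDF, RDFS, XSD

EX = Namespace("https://example.org/ontology-edits/")  # hypothetical namespace
OBO = Namespace("http://purl.obolibrary.org/obo/")

g = Graph()
g.bind("prov", PROV)

# Model an edit to an ontology term as an entity in its own right, so the
# change itself can be queried, benchmarked, and audited later.
edit = EX.edit_2024_001                                   # hypothetical edit ID
g.add((edit, RDF.type, PROV.Activity))
g.add((edit, PROV.used, OBO.GO_0008150))                  # term affected: biological_process
g.add((edit, PROV.wasAssociatedWith, EX.curator_jane))    # hypothetical curator
g.add((edit, PROV.startedAtTime,
       Literal("2024-06-01T12:00:00Z", datatype=XSD.dateTime)))
g.add((edit, RDFS.comment, Literal("Relabeled term per community review")))

print(g.serialize(format="turtle"))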

    33 min
  3. July 21

    Emeka Okoye: Exploring the Semantic Web with the Model Context Protocol – Episode 36

    Emeka Okoye

    Semantic technologies permit powerful connections among a variety of linked data resources across the web. Until recently, developers had to learn the RDF language to discover and use these resources. Leveraging the new Model Context Protocol (MCP) and LLM-powered natural-language interfaces, Emeka Okoye has created the RDF Explorer, an MCP service that lets any developer surf the semantic web without having to learn its specialized language.

    We talked about:

    - his long history in knowledge engineering and AI agents
    - his deep involvement in the business and technology communities in Nigeria, including founding the country's first internet startup
    - how he was building knowledge graphs before Google coined the term
    - an overview of MCP, the Model Context Protocol, and its benefits
    - the RDF Explorer MCP server he has developed
    - how the MCP protocol helps ease some of the challenges that semantic web developers have traditionally faced
    - the capabilities of his RDF Explorer:
      - facilitating communication between AI applications, language models, and RDF data
      - enabling graph exploration and graph data analysis via SPARQL queries (see the sketch after this episode's transcript)
      - browsing, accessing, and evaluating linked-open-data RDF resources
    - the origins of RDF Explorer in his attempt to improve ontology engineering tooling
    - his objections to "vibe ontology" creation
    - the ability of RDF Explorer to let non-RDF developers access knowledge graph data
    - how accessing knowledge graph data addresses the problem of the static nature of the data in language models
    - the natural connections he sees between neural network AI and symbolic AI like knowledge graphs, and the tech tribalism he sees in the broader AI world that prevents others from seeing them
    - how the ability of LLMs to predict likely language isn't true intelligence or actual knowledge
    - some of the lessons he learned by building the RDF Explorer, e.g., how the MCP protocol removes a lot of the complexity in building hybrid AI solutions
    - how MCP helps him validate the ontologies he creates

    Emeka's bio

    Emeka is a Knowledge Engineer, Semantic Architect, and Generative AI Engineer with over two decades of expertise in ontology and knowledge engineering and software development. He architects, develops, and deploys innovative, data-centric AI products and intelligent cognitive systems that help organizations on their Digital Transformation journey enhance their data infrastructure, harness their data assets for high-level cognitive tasks and decision-making processes, and drive innovation and efficiency en route to achieving their organizational goals. Emeka's experience embraces a breadth of technologies, with a primary focus on solution design, engineering, and product development, working with a cross section of professionals across various cultures in Africa and Europe to solve complex problems. He can understand and explain technologies at every level, from deep dives under the hood to the value proposition.

    Connect with Emeka online

    - LinkedIn
    - Making Knowledge Graphs Accessible: My Journey with MCP and RDF Explorer
    - RDF Explorer (GitHub)

    Video

    Here’s the video version of our conversation: https://youtu.be/GK4cqtgYRfA

    Podcast intro transcript

    This is the Knowledge Graph Insights podcast, episode number 36. The widespread adoption of semantic technologies has created a variety of linked data resources on the web. Until recently, you had to learn semantic tools to access that data. The arrival of LLMs, with their conversational interfaces and ability to translate natural language into knowledge graph queries, combined with the new Model Context Protocol, has empowered semantic web experts like Emeka Okoye to build tools that let any developer surf the semantic web.

    Interview transcript

    Larry: Hi, everyone. Welcome to episode number 36 of the Knowledge Graph Insights podcast.
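    To make concrete the kind of graph exploration RDF Explorer performs on a developer's behalf, here is a minimal sketch that runs a SPARQL query against DBpedia's public endpoint with the SPARQLWrapper library. It shows the plumbing an MCP server can hide behind a natural-language interface; it is not Emeka's actual implementation.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Query DBpedia's public endpoint -- the kind of call an MCP server like
# RDF Explorer can issue for a developer who has never written SPARQL.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/Semantic_Web> rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["label"]["value"])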

    35 min
  4. July 6

    Tom Plasterer: The Origins of FAIR Data Practices – Episode 35

    Tom Plasterer

    Shortly after the semantic web was introduced, the demand for discoverable and shareable data arose in both research and industry. Tom Plasterer was instrumental in the early conception and creation of the FAIR data principle, the idea that data should be findable, accessible, interoperable, and reusable. From its origins in the semantic web community, scientific research, and the pharmaceutical industry, the FAIR data idea has spread across academia, research, industry, and enterprises of all kinds.

    We talked about:

    - his recent move from a big pharma company to XponentL Data, where he leads the knowledge graph and FAIR data practices
    - the direct line from the original semantic web concept to FAIR data principles
    - the scope of the FAIR acronym, not just four concepts, but actually 15
    - how the accessibility requirement in FAIR distinguishes the standard from open data
    - the role of knowledge graphs in the implementation of a FAIR data program (see the sketch after this episode's transcript)
    - the intentional omission of prescribed implementations in the development of FAIR and the ensuing variety of implementation patterns
    - how the desire for consensus in the biology community smoothed the development of the FAIR standard
    - the role of knowledge graphs in providing a structure for sharing terminology and other information in a scientific community
    - how his interest in omics led him to computer science and then to the people skills crucial to knowledge graph work
    - the origins of the impetus for FAIR in European scientific research and the pharmaceutical industry
    - the growing adoption of FAIR as enterprises mature their web thinking and vendors offer products to help with implementations
    - how both open science and the accessibility needs of industry contributed to the development of FAIR
    - the interesting new space at the intersection of generative AI, FAIR, and knowledge graphs
    - the crucial foundational role of FAIR in AI systems

    Tom's bio

    Dr. Tom Plasterer is a leading expert in data strategy and bioinformatics, specializing in the application of knowledge graphs and FAIR data principles within life sciences and healthcare. With over two decades of experience in both industry and academia, he has significantly contributed to bioinformatics, systems biology, biomarker discovery, and data stewardship. His entrepreneurial ventures include co-founding PanGenX, a Personalized Medicine/Pharmacogenetics Knowledge Base start-up, and directing Project Planning and Data Interpretation at BG Medicine. During his extensive tenure at AstraZeneca, he was instrumental in championing Data Centricity, FAIR Data, and Knowledge Graph initiatives across various IT and scientific business units. Currently, Dr. Plasterer serves as the Managing Director of Knowledge Graph and FAIR Data Capability at XponentL Data, where he defines strategy and implements advanced applications of FAIR data, knowledge graphs, and generative AI for the life science and healthcare industries. He is also a prominent figure in the community, having co-founded the Pistoia Alliance FAIR Data Implementation group and serving on its FAIR data advisory board. Additionally, he co-organizes the Health Care and Life Sciences symposium at the Knowledge Graph Conference and is a member of Elsevier’s Corporate Advisory Board.

    Connect with Tom online

    - LinkedIn

    Video

    Here’s the video version of our conversation: https://youtu.be/Lt9Dc0Jvr4c

    Podcast intro transcript

    This is the Knowledge Graph Insights podcast, episode number 35. With the introduction of semantic web technologies in the early 2000s, the World Wide Web began to look something like a giant database. And with great data comes great responsibility. In response to the needs of data stewards and consumers across science, industry, and technology, the FAIR data principle - F A I R - was introduced. Tom Plasterer was instrumental in the early efforts to make web data findable,
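    As a rough illustration of how knowledge graphs support a FAIR data program, the sketch below describes a dataset with the W3C DCAT vocabulary, tagging one property per FAIR letter. Every identifier and URL here is hypothetical, and, as Tom notes, FAIR prescribes no particular implementation; this is just one possible pattern.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

EX = Namespace("https://example.org/datasets/")  # hypothetical namespace

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

ds = EX.assay_42  # hypothetical dataset
g.add((ds, RDF.type, DCAT.Dataset))
# Findable: a persistent identifier (value invented for illustration)
g.add((ds, DCTERMS.identifier, Literal("doi:10.0000/example")))
# Reusable: an explicit license
g.add((ds, DCTERMS.license,
       URIRef("https://creativecommons.org/licenses/by/4.0/")))
# Interoperable: conformance to a shared community vocabulary
g.add((ds, DCTERMS.conformsTo,
       URIRef("http://purl.obolibrary.org/obo/go.owl")))

# Accessible: a distribution with a retrievable access URL (invented)
dist = EX.assay_42_csv
g.add((ds, DCAT.distribution, dist))
g.add((dist, RDF.type, DCAT.Distribution))
g.add((dist, DCAT.accessURL, URIRef("https://example.org/data/assay_42.csv")))

print(g.serialize(format="turtle"))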

    32 min
  5. June 11

    Mara Inglezakis Owens: A People-Loving Enterprise Architect – Episode 34

    Mara Inglezakis Owens

    Mara Inglezakis Owens brings a human-centered focus to her work as an enterprise architect at a major US airline. Drawing on her background in the humanities and her pragmatic approach to business, she has developed a practice that embodies both "digital anthropology" and product thinking. The result is a knowledge architecture that works for its users and consistently demonstrates its value to key stakeholders.

    We talked about:

    - her role as an enterprise architect at a major US airline
    - how her background as a humanities scholar, and especially as a rhetoric teacher, prepared her for her current work as a trusted business advisor
    - some important mentoring she received early in her career
    - how "digital anthropology" and product thinking fit into her enterprise architecture practice
    - how she demonstrates the financial value of her work to executives and other stakeholders
    - her thoughtful approach to the digitalization process and systems design
    - the importance of documentation in knowledge engineering work
    - how to sort out and document stakeholders' self-reports versus their actual behavior
    - the scope of her knowledge modeling work, not just physical objects in the world, but also processes and procedures
    - two important lessons she's learned over her career: don't be afraid to justify financial investment in your work, and "don't be so attached to an ideal outcome that you miss the best possible"

    Mara's bio

    Mara Inglezakis Owens is an enterprise architect who specializes in digitalization and knowledge management. She has deep experience in end-to-end supply chain as well as in planning, product, and program management. Mara’s background is in epistemology (history and philosophy of science, information science, and literature), which gives a unique, humanistic flavor to her practice. When she is not working, Mara enjoys aviation, creative writing, gardening, and raising her children. She lives in Minneapolis.

    Connect with Mara online

    - LinkedIn
    - email: mara dot inglezakis dot owens at gmail dot com

    Video

    Here’s the video version of our conversation: https://youtu.be/d8JUkq8bMIc

    Podcast intro transcript

    This is the Knowledge Graph Insights podcast, episode number 34. When you think about architecting knowledge systems for a giant business like a global airline, you might picture huge databases and complex spaghetti diagrams of enterprise architectures. These do in fact exist, but the thing that actually makes these systems work is an understanding of the needs of the people who use, manage, and finance them. That's the important, human-focused work that Mara Inglezakis Owens does as an enterprise architect at a major US airline.

    Interview transcript

    Larry: Hi, everyone. Welcome to episode 34 of the Knowledge Graph Insights Podcast. I am really delighted today to welcome to the show, Mara, I'm going to get this right, Inglezakis Owens. She's an enterprise architect at a major US airline. So, welcome, Mara. Tell the folks a little bit more about what you're up to these days.

    Mara: Hi, everybody. My name's Mara. And these days I am achieving my childhood dream of working in aviation, not as a pilot, but that'll happen, but as an enterprise architect. I've been doing EA, also data and information architecture, across the whole scope of supply chain for about 10 years, everything from commodity sourcing to SaaS, software as a service, to now logistics. And a lot of my days, I spend interviewing subject matter experts, convincing business leaders they should do stuff, and on my best days, I get to crawl around on my hands and knees in an airplane hangar.

    Larry: Oh, fun. That is ... Yeah. I didn't know ... I knew that there's that great picture of you sitting in the jet engine, but I didn't realize this was the fulfillment of a childhood dream. That's awesome. But everything you've just said ties in so well to the tagline on your LinkedIn pro...

    31 min
  6. May 22

    Frank van Harmelen: Hybrid Human-Machine Intelligence for the AI Age – Episode 33

    Frank van Harmelen

    Much of the conversation around AI architectures lately is about neuro-symbolic systems that combine neural-network learning tech like LLMs and symbolic AI like knowledge graphs. Frank van Harmelen's research has followed this path, but he puts all of his AI research in the larger context of how these technical systems can best support people. While some in the AI world seek to replace humans with machines, Frank focuses on AI systems that collaborate effectively with people.

    We talked about:

    - his role as a professor of AI at the Vrije Universiteit in Amsterdam
    - how rapid change in the AI world has affected the 10-year, €20-million Hybrid Intelligence Centre research he oversees
    - the focus of his research on the hybrid combination of human and machine intelligence
    - how the introduction of conversational interfaces has advanced AI-human collaboration
    - a few of the benefits of hybrid human-AI collaboration
    - the importance of a shared worldview in any collaborative effort
    - the role of the psychological concept of "theory of mind" in hybrid human-AI systems
    - the emergence of neuro-symbolic solutions
    - how he helps his students see the differences between system 1 and system 2 thinking and their relevance to AI systems
    - his role in establishing the foundations of the semantic web
    - the challenges of running a program that spans seven universities and employs dozens of faculty and PhD students
    - some examples of use cases for hybrid AI-human systems
    - his take on agentic AI, and the importance of humans in agent systems
    - some classic research on multi-agent computer systems
    - the four research challenges - collaboration, adaptation, responsibility, and explainability - they are tackling in their hybrid intelligence research
    - his take on the different approaches to AI in Europe, the US, and China
    - the matrix structure he uses to allocate people and resources to three key research areas: problems, solutions, and evaluation
    - his belief that "AI is there to collaborate with people and not to replace us"

    Frank's bio

    Since 2000, Frank van Harmelen has played a leading role in the development of the Semantic Web. He is a co-designer of the Web Ontology Language OWL, which has become a worldwide standard. He co-authored the first academic textbook of the field and was one of the architects of Sesame, an RDF storage and retrieval engine in wide academic and industrial use. This work received the 10-year impact award at the International Semantic Web Conference. Linked Open Data and Knowledge Graphs are important spin-offs from this work. Since 2020, Frank has been scientific director of the Hybrid Intelligence Centre, where 50 PhD students and as many faculty members from 7 Dutch universities investigate AI systems that collaborate with people instead of replacing them. The large scale of modern knowledge graphs that contain hundreds of millions of entities and relationships (made possible partly by the work of Van Harmelen and his team) opened the door to combining these symbolic knowledge representations with machine learning. Since 2018, Frank has pivoted his research group from purely symbolic Knowledge Representation to Neuro-Symbolic forms of AI.

    Connect with Frank online

    - Hybrid Intelligence Centre

    Video

    Here’s the video version of our conversation: https://youtu.be/ox20_l67R7I

    Podcast intro transcript

    This is the Knowledge Graph Insights podcast, episode number 33. As the AI landscape has evolved over the past few years, hybrid architectures that combine LLMs, knowledge graphs, and other AI technology have become the norm. Frank van Harmelen argues that the ultimate hybrid system must also include humans. He's running a 10-year, €20 million research program in the Netherlands to explore exactly this. His Hybrid Intelligence Centre investigates AI systems that collaborate with people instead of replacing them.

    Interview transcript

    Larry: Hi,

    30 min
  7. May 7

    Denny Vrandečić: Connecting the World’s Knowledge with Abstract Wikipedia – Episode 32

    Denny Vrandečić

    As the founder of Wikidata, Denny Vrandečić has thought a lot about how to better connect the world's knowledge. His current project is Abstract Wikipedia, an initiative that aims to let anyone anywhere on the planet contribute to, and benefit from, the world's collective knowledge, in their native language. It's an ambitious goal, but - inspired by the success of other contributor-driven Wikimedia Foundation projects - Denny is confident that community can make it happen.

    We talked about:

    - his work as Head of Special Projects at the Wikimedia Foundation and his current projects: Wikifunctions and Abstract Wikipedia
    - the origin story of his first project at Wikimedia - Wikidata
    - a precursor project that informed Wikidata - Semantic MediaWiki
    - the resounding success of the Wikidata project, the most edited wiki in the world, with half a million contributors
    - how the need for more expressivity than Wikidata offers led to the idea for Abstract Wikipedia
    - an overview of the Abstract Wikipedia project
    - the abstract language-independent notation that underlies Abstract Wikipedia
    - how Abstract Wikipedia will permit almost instant updating of Wikipedia pages with the facts it provides
    - the capability of Abstract Wikipedia to permit both editing and use of knowledge in an author's native language
    - their exploration of using LLMs to turn natural language into structured representations of knowledge
    - how the design of Abstract Wikipedia encourages and facilitates contributions to the project
    - the Wikifunctions project, a necessary precondition to Abstract Wikipedia
    - the role of Wikidata as the Rosetta Stone of the web (see the sketch after this episode's transcript)
    - some background on the Wikifunctions project
    - the community outreach work that the Wikimedia Foundation does and the role of the community in the development of Abstract Wikipedia and Wikifunctions
    - the technical foundations for contributing to Wikimedia Foundation projects
    - his goal to remove language barriers to allow all people to work together in a shared knowledge space
    - a reminder that Tim Berners-Lee's original web browser included an editing function

    Denny's bio

    Denny Vrandečić is Head of Special Projects at the Wikimedia Foundation, leading the development of Wikifunctions and Abstract Wikipedia. He is the founder of Wikidata, co-creator of Semantic MediaWiki, and a former elected member of the Wikimedia Foundation Board of Trustees. He worked for Google on the Google Knowledge Graph. He has a PhD in Semantic Web and Knowledge Representation from the Karlsruhe Institute of Technology.

    Connect with Denny online

    - user Denny at Wikimedia
    - Wikidata profile
    - Mastodon
    - LinkedIn
    - email: denny at wikimedia dot org

    Resources mentioned in this interview

    - Wikimedia Foundation
    - Wikidata
    - Semantic MediaWiki
    - Wikidata: The Making Of
    - Wikifunctions
    - Abstract Wikipedia
    - Meta-Wiki

    Video

    Here’s the video version of our conversation: https://youtu.be/iB6luu0w_Jk

    Podcast intro transcript

    This is the Knowledge Graph Insights podcast, episode number 32. The original plan for the World Wide Web was that it would be a two-way street, with opportunities to both discover and share knowledge. That promise was lost early on - and then restored a few years later when Wikipedia added an "edit" button to the internet. Denny Vrandečić is working to make that edit function even more powerful with Abstract Wikipedia, an innovative platform that lets web citizens both create and consume the world's knowledge, in their own language.

    Interview transcript

    Larry: Hi, everyone. Welcome to episode number 32 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Denny Vrandečić. Denny is best known as the founder of Wikidata, which we'll talk about more in just a minute. He's currently the Head of Special Projects at the Wikimedia Foundation. He's also a visiting professor at King's College Lo...
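    Denny's description of Wikidata as the Rosetta Stone of the web rests on its separation of language-independent items from per-language labels, the same separation Abstract Wikipedia builds on. Here is a minimal sketch using Wikidata's public API; Q42 is the item for Douglas Adams, and the language selection is arbitrary.

```python
import requests

# Fetch multilingual labels for one language-independent Wikidata item.
resp = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={
        "action": "wbgetentities",
        "ids": "Q42",              # Q42 = Douglas Adams
        "props": "labels",
        "languages": "en|de|yo",   # English, German, Yoruba
        "format": "json",
    },
    timeout=30,
)
labels = resp.json()["entities"]["Q42"]["labels"]
for lang, label in labels.items():
    print(lang, "->", label["value"])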

    33 min
  8. April 30

    Charles Ivie: The Rousing Success of the Semantic Web “Failure” – Episode 31

    Charles Ivie

    Since the semantic web was introduced almost 25 years ago, many have dismissed it as a failure. Charles Ivie shows that the RDF standard and the knowledge-representation technology built on it have actually been quite successful. More than half of the world's web pages now share semantic annotations, and the widespread adoption of knowledge graphs in enterprises and media companies is only growing as enterprise AI architectures mature.

    We talked about:

    - his long work history in the knowledge graph world
    - his observation that the semantic web is "the most catastrophically successful thing which people have called a failure"
    - some of the measures of the success of the semantic web: ubiquitous RDF annotations in web pages (see the sketch after this episode's transcript), numerous knowledge graph deployments in big enterprises and media companies, etc.
    - the long history of knowledge representation
    - the role of RDF as a Rosetta Stone between human knowledge and computing capabilities
    - how the abstraction that RDF permits helps connect different views of knowledge within a domain
    - the need to scope any ontology in a specific domain
    - the role of upper ontologies
    - his transition from computer science and software engineering to semantic web technologies
    - the fundamental role of knowledge representation tech - to help humans communicate information, to innovate, and to solve problems
    - how semantic modeling's focus on humans working things out leads to better solutions than tech-driven approaches
    - his desire to start a conversation around the fundamental upper principles of ontology design and semantic modeling, and his hypothesis that it might look something like a network of taxonomies

    Charles' bio

    Charles Ivie is a Senior Graph Architect with the Amazon Neptune team at Amazon Web Services (AWS). With over 15 years of experience in the knowledge graph community, he has been instrumental in designing, leading, and implementing graph solutions across various industries.

    Connect with Charles online

    - LinkedIn

    Video

    Here’s the video version of our conversation: https://youtu.be/1ANaFs-4hE4

    Podcast intro transcript

    This is the Knowledge Graph Insights podcast, episode number 31. Since the concept of the semantic web was introduced almost 25 years ago, many have dismissed it as a failure. Charles Ivie points out that it's actually been a rousing success. From the ubiquitous presence of RDF annotations in web pages to the mass adoption of knowledge graphs in enterprises and media companies, the semantic web has been here all along and only continues to grow as more companies discover the benefits of knowledge-representation technology.

    Interview transcript

    Larry: Hi everyone. Welcome to episode number 31 of the Knowledge Graph Insights Podcast. I am really happy today to welcome to the show Charles Ivie. Charles is currently a senior graph architect at Amazon's Neptune department. He's been in the graph community for years, worked at the BBC, ran his own consultancies, worked at places like The Telegraph and The Financial Times and places you've heard of. So welcome Charles. Tell the folks a little bit more about what you're up to these days.

    Charles: Sure. Thanks. Thanks, Larry. Very grateful to be invited on, so thank you for that. And what have I been up to? Yeah, I've been about in the graph industry for about 14 years or something like that now. And these days I am working with the Amazon Neptune team doing everything I can to help people become more successful with their graph implementations and with their projects. And I like to talk at conferences and join things like this and write as much as I can. And occasionally they let me loose on some code too. So that's kind of what I'm up to these days.

    Larry: Nice. Because you have a background as a software engineer and we will talk more about that later because I think that's really relevant to a lot of what we'll talk about.
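    The ubiquitous RDF annotations Charles cites are typically schema.org JSON-LD snippets embedded in ordinary web pages. Here is a minimal sketch of parsing one into RDF triples with rdflib; the annotation values are invented, and the example assumes rdflib 6+ (which bundles JSON-LD support) plus network access to resolve the schema.org context.

```python
from rdflib import Graph

# A schema.org annotation of the kind embedded in a web page inside a
# <script type="application/ld+json"> tag. The values are invented.
jsonld = """
{
  "@context": "https://schema.org",
  "@type": "PodcastEpisode",
  "name": "Episode 31",
  "partOfSeries": {"@type": "PodcastSeries", "name": "Knowledge Graph Insights"}
}
"""

g = Graph()
g.parse(data=jsonld, format="json-ld")

# Each JSON-LD key/value pair becomes an RDF triple.
for s, p, o in g:
    print(s, p, o)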

    34 min
  9. April 24

    Andrea Gioia: Human-Centered Modeling for Data Products – Episode 30

    Andrea Gioia

    In recent years, data products have emerged as a solution to the enterprise problem of siloed data and knowledge. Andrea Gioia helps his clients build composable, reusable data products so they can capitalize on the value in their data assets. Built around collaboratively developed ontologies, these data products evolve into something that might also be called a knowledge product.

    We talked about:

    - his work as CTO at Quantyca, a data and metadata management consultancy
    - his description of data products and their lifecycle
    - how the lack of reusability in most data products inspired his current approach to modular, composable data products - and brought him into the world of ontology
    - how focusing on specific data assets facilitates the creation of reusable data products
    - his take on the role of data as a valuable enterprise asset
    - how he accounts for technical metadata and conceptual metadata in his modeling work
    - his preference for a federated model in the development of enterprise ontologies
    - the evolution of his data architecture thinking from a central-governance model to a federated model
    - the importance of including the right variety of business stakeholders in the design of the ontology for a knowledge product
    - his observation that semantic modeling is mostly about people, and working with them to come to agreements about how they each see their domain

    Andrea's bio

    Andrea Gioia is a Partner and CTO at Quantyca, a consulting company specializing in data management. He is also a co-founder of blindata.io, a SaaS platform focused on data governance and compliance. With over two decades of experience in the field, Andrea has led cross-functional teams in the successful execution of complex data projects across diverse market sectors, ranging from banking and utilities to retail and industry. In his current role as CTO at Quantyca, Andrea primarily focuses on advisory, helping clients define and execute their data strategy with a strong emphasis on organizational and change management issues. Actively involved in the data community, Andrea is a regular speaker, writer, and the author of 'Managing Data as a Product'. Currently, he is the main organizer of the Data Engineering Italian Meetup and leads the Open Data Mesh Initiative. Within this initiative, Andrea has published the data product descriptor open specification and is guiding the development of the open-source ODM Platform to support the automation of the data product lifecycle. Andrea is an active member of DAMA and, since 2023, has been part of the scientific committee of the DAMA Italian Chapter.

    Connect with Andrea online

    - LinkedIn (#TheDataJoy)
    - Github

    Video

    Here’s the video version of our conversation: https://www.youtube.com/watch?v=g34K_kJGZMc

    Podcast intro transcript

    This is the Knowledge Graph Insights podcast, episode number 30. In the world of enterprise architectures, data products are emerging as a solution to the problem of siloed data and knowledge. As a data and metadata management consultant, Andrea Gioia helps his clients realize the value in their data assets by assembling them into composable, reusable data products. Built around collaboratively developed ontologies, these data products evolve into something that might also be called a knowledge product.

    Interview transcript

    Larry: Hi, everyone. Welcome to episode number 30 of the Knowledge Graph Insights podcast. I'm really happy today to welcome to the show Andrea Gioia. Andrea's, he does a lot of stuff. He's a busy guy. He's a partner and the chief technical officer at Quantyca, a consulting firm that works on data and metadata management. He's the founder of Blindata, a SaaS product that goes with his consultancy. I'll let him talk a little bit more about that. He's the author of the book Managing Data as a Product, and he's also, he comes out of the data heritage but he's now one of these knowledge people like us.

    33 min
  10. April 16

    Dave McComb: Semantic Modeling for the Data-Centric Enterprise – Episode 29

    Dave McComb

    During the course of his 25-year consulting career, Dave McComb has discovered both a foundational problem in enterprise architectures and the solution to it. The problem lies in application-focused software engineering that results in an inefficient explosion of redundant solutions that draw on overlapping data sources. The solution that Dave has introduced is a data-centric architecture approach that treats data like the precious business asset that it is.

    We talked about:

    - his work as the CEO of Semantic Arts, a prominent semantic technology and knowledge graph consultancy based in the US
    - the application-centric quagmire that most modern enterprises find themselves trapped in
    - data centricity, the antidote to application centricity
    - his early work in semantic modeling
    - how the discovery of the "core model" in an enterprise facilitates modeling and building data-centric enterprise systems
    - the importance of "baby step" approaches and working with actual customer data in enterprise data projects
    - how building to "enduring business themes" rather than to the needs of individual applications creates a more solid foundation for enterprise architectures
    - his current interest in developing a semantic model for the accounting field, drawing on his history in the field and on Semantic Arts' gist upper ontology
    - the importance of the concept of a "commitment" in an accounting model
    - how his approach to financial modeling permits near-real-time reporting
    - his Data-Centric Architecture Forum, a practitioner-focused event held each June in Ft. Collins, Colorado

    Dave's bio

    Dave McComb is the CEO of Semantic Arts. In 2000 he co-founded Semantic Arts with the aim of bringing semantic technology to Enterprises. From 2000 to 2010 Semantic Arts focused on ways to improve enterprise architecture through ontology modeling and design. Around 2010 Semantic Arts began helping clients more directly with implementation, which led to the use of Knowledge Graphs in Enterprises. Semantic Arts has conducted over 100 successful projects with a number of well-known firms, including Morgan Stanley, Electronic Arts, Amgen, Standard & Poor's, Schneider-Electric, MD Anderson, the International Monetary Fund, Procter & Gamble, and Goldman Sachs, as well as a number of government agencies. Dave is the author of Semantics in Business Systems (2003), which made the case for using Semantics to improve the design of information systems; Software Wasteland (2018), which points out how application-centric thinking has led to the deplorable state of enterprise systems; and The Data-Centric Revolution (2019), which outlines an alternative to the application-centric quagmire. Prior to founding Semantic Arts he was VP of Engineering for Velocity Healthcare, a dot com startup that pioneered the model-driven approach to software development. He was granted three patents on the architecture developed at Velocity. Prior to that he was with a small consulting firm: First Principles Consulting. Prior to that he was part of the problem.

    Connect with Dave online

    - LinkedIn
    - email: mccomb at semanticarts dot com
    - Semantic Arts

    Resources mentioned in this interview

    - Dave's books: The Data-Centric Revolution: Restoring Sanity to Enterprise Information Systems; Software Wasteland: How the Application-Centric Quagmire is Hobbling Our Enterprises; Semantics in Business Systems: The Savvy Manager's Guide
    - gist ontology
    - Data-Centric Architecture Forum

    Video

    Here’s the video version of our conversation: https://youtu.be/X_hZG7cFOCE

    Podcast intro transcript

    This is the Knowledge Graph Insights podcast, episode number 29. Every modern enterprise wrestles with its data, trying to get the most out of it. The smartest businesses have figured out that it isn't just "the new oil" - data is the very bedrock of their enterprise architecture. For the past 25 years, Dave McComb has helped companies understand the...

    34 min

Ratings & Reviews

5
(out of 5)
5 Ratings

