Quentin Reul

The complementary nature of knowledge graphs and LLMs has become clear, and long-time knowledge engineering professionals like Quentin Reul now routinely combine them in hybrid neuro-symbolic AI systems. While it's tempting to get caught up in the details of rapidly advancing AI technology, Quentin emphasizes the importance of always staying focused on the business problems your systems are solving.

We talked about:

- his extensive background in semantic technologies, dating back to the early 2000s
- his contribution to the SKOS standard
- an overview of the strengths and weaknesses of LLMs
- the importance of entity resolution, especially when working with the general information that LLMs are trained on
- how LLMs accelerate knowledge graph creation and population
- his take on the scope of symbolic AI, in which he includes expert systems and rule-based systems
- his approach to architecting neuro-symbolic systems, which always starts with, and stays focused on, the business problem he's trying to solve
- his advice to avoid the temptation to start projects with technology, and instead always focus on the problems you're solving
- the importance of staying abreast of technology developments so that you're always able to craft the most efficient solutions

Quentin's bio

Dr. Quentin Reul is an AI Strategy & Innovation Executive who bridges the gap between high-level business goals and deep technical implementation. As a Director of AI Strategy & Solutions at expert.ai, he specializes in the convergence of Generative AI, Knowledge Graphs, and Agentic Workflows. His focus is moving companies beyond "PoC Purgatory" into production-grade systems that deliver measurable ROI. Unlike traditional strategists, he remains deeply hands-on, continuously prototyping with emerging AI research to stress-test its real-world impact.
He doesn't just advocate for AI; he builds the technical roadmaps that translate the latest lab breakthroughs into safe, scalable, and high-value enterprise solutions.

Connect with Quentin online

- LinkedIn
- BlueSky
- YouTube
- Medium

Video

Here's the video version of our conversation: https://youtu.be/J8fgIezoNxE

Podcast intro transcript

This is the Knowledge Graph Insights podcast, episode number 44. We're far enough along now in the development of both generative AI learning models and symbolic AI technology like knowledge graphs to see the strengths and weaknesses of each. Quentin Reul has worked with both technologies, and the technologies that preceded them, for many years. He now builds systems that combine the best of both types of AI to deliver solutions that make it easier for people to discover and explore the knowledge and information that they need.

Interview transcript

Larry: Hi, everyone. Welcome to episode number 44 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Quentin Reul. Quentin is the Director of AI Strategy and Solutions at expert.ai, based in Chicago in the US. So welcome, Quentin. Tell the folks a little bit more about what you're up to these days.

Quentin: Hi, thank you, Larry, for having me on your podcast. So my name is Quentin Reul. I've actually been around RDF and knowledge graphs since before they were cool, in the early 2000s. And today, what I'm helping people in news, media, and entertainment do is see how they can leverage all of the unstructured data that they have, structure it, and make their content more findable and discoverable as part of what they're offering to their customers.

Larry: Nice. And I love that you've been doing this forever. One of the things we talked about before we went on the air was your early involvement in the SKOS standard. Can you talk a little bit about your contribution to that project?
Quentin: Yeah. So for context, SKOS stands for Simple Knowledge Organization System. It's a standard that was created by the W3C around 2005. And being at the University of Aberdeen in Scotland, we had a lot of involvement with the W3C on the Web Ontology Language (OWL) and SKOS.

Quentin: For SKOS, I was actually working on my PhD, and the idea of my PhD was to look at two ontologies and try to map entities from one ontology to the entities in the other one. A lot of the approaches taken at the time leveraged philosophical kinds of representation, and there was not really much work looking at linguistics. So the approach we were taking was to use the structure of WordNet and map it to the linguistic information, that is, the labels associated with nodes in the taxonomy.

Quentin: But to do that, we needed a structure that was transitive. At the time, SKOS only had broader and narrower, and those didn't have the transitive property. So my contribution was to push for the SKOS standard to include skos:broaderTransitive and skos:narrowerTransitive, so that if A broaderTransitive B and B broaderTransitive C, then A broaderTransitive C also holds, with a description logic structure that enables that inference.

Larry: Well, that's so cool. I love that your ideas are ensconced in this 20-year-old standard now. But hey, what I wanted to talk about and really focus on today: I was excited to get you on the show because you're doing a lot of work in the area of neuro-symbolic AI, the idea of integrating LLMs and other machine learning technologies with knowledge graphs and other symbolic AI tools.

Larry: It's one of those things that everybody's talking about, but I haven't had the chance to talk on the podcast with many people who are actually doing it.
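The transitive inference Quentin describes (if A is broader than B, and B is broader than C, then A broaderTransitive C) amounts to computing the transitive closure of skos:broader. A minimal Python sketch, with hypothetical concept names rather than any real vocabulary:

```python
# Compute skos:broaderTransitive as the transitive closure of skos:broader.
# The direct skos:broader assertions below are made-up examples.
broader = {
    "Poodle": {"Dog"},
    "Dog": {"Mammal"},
    "Mammal": {"Animal"},
}

def broader_transitive(concept, broader):
    """All concepts reachable via skos:broader, i.e. skos:broaderTransitive."""
    seen = set()
    stack = [concept]
    while stack:
        for parent in broader.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(broader_transitive("Poodle", broader))
# "Poodle" reaches not just its direct parent "Dog" but also "Mammal" and "Animal"
```

This is exactly the property the PhD work needed: mapping algorithms could then rely on ancestry queries over the whole hierarchy, not just one level of broader/narrower.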
So I'm hoping that you can help the listeners take the leap from this conceptual understanding of their natural complementary nature to actually putting them together in an enterprise architecture. Maybe start with the strengths and weaknesses of each of the kinds of AI that we're talking about here.

Quentin: Yeah. So if we look at the history of AI, symbolic AI came up in the '70s and led to the first AI winter, and the second AI winter for that matter. But where symbolic systems were very good was in structure and explainability. If you had a very well-defined set of rules or a predictive task, they would handle it consistently and repeatably, and all of that type of thing.

Quentin: Now, when you tried to adapt a rule-based system to new data, it would fall over, because it had never seen that data, or a new set of rules or a new set of business requirements, and it just couldn't handle that. And that's where machine learning really helped in making the transition to where we are today.

Quentin: And LLMs contributed further to that. Machine learning was pretty good at dealing with new patterns, as long as they were similar to the data you were training with. One thing where LLMs have really shone is in the way they're able to surface things that you were not predicting from the data.

Quentin: One thing that I think we could have predicted or seen from the data, if we had had LLMs back in 2020, is the topic of COVID emerging a bit earlier than it did. And the reason is that they're very good at surfacing things they've never seen before. They're able to interpret and analyze language and its structure. And by the sentence structure, they can understand that things are very similar, even when you use different words for them, and still interpret them.
Quentin: So if we think about information retrieval in the '90s, 2000s, and even the 2010s, the way we did a lot of these things was using controlled vocabularies, thesauri, or other dictionaries, and they were used to do query expansion. So you had a keyword, you looked it up in the dictionary, the dictionary did an expansion, and then you added something else.

Quentin: Well, now with LLMs, that kind of expansion is intuitive to the model itself, because it has seen so many different aspects and so many occurrences of text that it can actually predict and see how these different terms are associated with a holistic concept.

Quentin: Now, that's the good side. On the bad side, LLMs have a cutoff point, a knowledge cutoff point, which means that when they are trained, they are trained on information that is in the past. So they're not always that great at predicting, especially current events or information about things that are happening today. They're not very good at that.

Quentin: I think if I look at the data, the gap between the release of a new model and the recency of its data, the cutoff point, is generally about six months to a year. That gap is getting a bit shorter now, but you have to remember the time it takes to train these models: we're talking about days, weeks, and sometimes months, as opposed to hours for classic machine learning models. So they're expensive as well from that perspective.

Quentin: Another thing that LLMs don't have is a knowledge base, to take it one level up from a knowledge graph. So they're not able to disambiguate information across a large corpus. They're very good at doing entity linking within the context of one document.
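The dictionary-driven query expansion Quentin describes can be sketched in a few lines; the thesaurus entries here are hypothetical, not from any real controlled vocabulary:

```python
# Classic query expansion with a controlled vocabulary / thesaurus,
# as used in pre-LLM information retrieval. Synonym entries are made up.
thesaurus = {
    "car": ["automobile", "vehicle"],
    "film": ["movie", "motion picture"],
}

def expand_query(query):
    """Expand each query keyword with its thesaurus synonyms."""
    terms = []
    for keyword in query.lower().split():
        terms.append(keyword)
        terms.extend(thesaurus.get(keyword, []))
    return terms

print(expand_query("car film"))
# ['car', 'automobile', 'vehicle', 'film', 'movie', 'motion picture']
```

The contrast with LLMs is that this mapping is hand-curated and brittle, whereas an LLM has absorbed the associations between surface terms and concepts from its training text.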
Quentin: So if you pass it one document, let's say a financial document, and it refers to Acme as an enterprise, and Acme is mentioned several times in the document, it will infer that there is only one entity, and that entity is Acme.

Quentin: But now, imagine that you have a group of financial reports, and these financial reports refer to Acme, a bakery in Illinois, and Acme, a construction company in Maryland.
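The cross-document ambiguity Quentin is setting up, two distinct companies sharing the name Acme, is the core entity resolution problem. A minimal sketch of one common approach, matching a document's context words against a profile of each candidate entity in a knowledge base (all entity names and context terms here are hypothetical):

```python
# Resolve an ambiguous mention ("Acme") by comparing the document's context
# against distinguishing terms stored for each candidate entity.
# Entities and their context profiles are made-up examples.
knowledge_base = {
    "Acme (bakery, Illinois)": {"bread", "bakery", "illinois", "pastry"},
    "Acme (construction, Maryland)": {"construction", "maryland", "contractor", "concrete"},
}

def resolve(mention_context, knowledge_base):
    """Pick the candidate whose profile overlaps most with the context words."""
    words = set(mention_context.lower().split())
    return max(knowledge_base, key=lambda entity: len(knowledge_base[entity] & words))

doc = "Acme reported record bread sales at its Illinois bakery"
print(resolve(doc, knowledge_base))
# 'Acme (bakery, Illinois)'
```

An LLM alone has no such shared registry of entity identities across documents, which is why pairing it with a knowledge graph or knowledge base helps here.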