The genius of the metaphor is that it helps us understand complex models we can’t otherwise articulate. When Hindu supremacists say “Bharat Mata”, for example, they are invoking a well-understood cognitive model of nourishment and loyalty and using it to explain nationalism. When they say “Gau Mata”, that same idea of nourishment, and our loyalty to it, is transferred to the cow.
In many ways the metaphor epitomises the kind of intelligence computers struggle with. Reasoning by metaphor is flexible and nimble, unfixed yet resilient. It uses ideas from outside a system to explain that system, mapping a cognitive model from a familiar context onto an unfamiliar one. We, human beings, can do this because a number of things came together for us: we have sophisticated abilities to speak, we recognise patterns, we understand the unspoken, we can imagine what isn’t in front of us, we have a pool of shared human experience (another metaphor!), and we can see concepts and the relationships between them.
So, what does human intelligence do that machine intelligence can’t? And what can neither of them do?
In this episode, we explore the limits of artificial intelligence. We speak to Shubham Bindlish, who runs an AI firm that scrapes cricket data to help make predictions for fantasy cricket, and to Joseph Paul Cohen, who has built AI that can diagnose diseases from chest x-rays. They shed light on how their AI systems draw inferences from data and the limitations that come with that.
See acast.com/privacy for privacy and opt-out information.