AI Artificial Intelligence Learning and Reading Human Symbols Part 1 TCAST

People have been working on Artificial Intelligence for years. No, not to create HAL 9000 or Skynet. Well, hopefully not. The goal is to create programs that are better at analyzing data, helping us to make better decisions. 
One of the primary obstacles to that goal is recognizing the meaning of symbols. Why should that be so hard? Program various symbols and their meanings into the algorithm and everything should be fine. Right? Wrong. Some symbols seem like they should be very easy to handle, such as a STOP sign. Program in the meaning of the word ‘stop’ along with the color and shape of the sign, and your automated car will stop when it is supposed to. Sounds simple, doesn’t it? You’d think it would be.
Yet STOP signs are also used as décor, included in storefront displays, or put up to say something other than ‘stop at the intersection’. For an automated car trying to navigate busy city streets, this is a daunting problem. It has to not just recognize the symbol but recognize its context: where the sign is located, how big it is, and the other factors that shape the immediate situation. If the vehicle’s AI can’t sort out that context and correctly judge whether the sign means ‘stop the vehicle’ or ‘wash hands before returning to work’, then it isn’t all that great.
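To make that concrete, here’s a minimal sketch in Python of what context-gated decision logic might look like. Everything in it, the detection fields, the thresholds, the ‘regulation’ sign size, is invented for illustration; no actual self-driving stack works this way. The point is only that recognizing the symbol and acting on it are separated by a layer of context checks.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str          # what the vision model saw, e.g. "stop_sign"
        confidence: float   # detector's score, 0.0 to 1.0
        height_m: float     # estimated physical height of the sign
        at_roadside: bool   # mounted at the road edge, not on a storefront
        faces_lane: bool    # oriented toward this lane of travel

    def should_stop(d: Detection) -> bool:
        """Decide whether a detected STOP symbol actually commands a stop."""
        if d.label != "stop_sign" or d.confidence < 0.9:
            return False
        # Context checks: a window sticker or novelty sign fails these.
        plausible_size = 0.5 <= d.height_m <= 1.2   # real signs are ~0.75 m tall
        return plausible_size and d.at_roadside and d.faces_lane

    # Right symbol, wrong context (a storefront decoration): keep driving.
    print(should_stop(Detection("stop_sign", 0.97, 0.2, False, False)))  # False
    # A regulation sign at the intersection: stop.
    print(should_stop(Detection("stop_sign", 0.97, 0.75, True, True)))   # True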
Imagine another example. If I give someone the middle finger, it could be interpreted in a number of ways. One is the obvious ‘go away, I don’t like you’. Another is that it’s meant as a joke. Another could be simply that the finger in question hurts and is being held up to display a bruise or cut. We intuit the context of the situation and interpret the gesture accordingly. A machine that misses even one piece of that context will land on a different interpretation, with potentially dangerous results.
Building programs capable of making even these very simple kinds of distinctions is more difficult than it might sound. This is because you can’t literally program every single variable into the software. At some point, your AI software will have to be able to truly function on its own. And to get there, it has to train.
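One way to see the difference is to contrast hand-written rules with learned ones. The toy Python sketch below, with synthetic numbers, invented features, and a generic scikit-learn classifier, none of it drawn from any real driving system, fits a model from labeled examples instead of trying to enumerate every variable by hand.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Each row is one sighting: [sign_height_m, distance_from_road_m].
    # Label 1 = real roadside STOP sign, label 0 = decorative lookalike.
    real  = np.column_stack([rng.normal(0.75, 0.05, 200), rng.normal(1.0, 0.3, 200)])
    decor = np.column_stack([rng.normal(0.25, 0.10, 200), rng.normal(8.0, 2.0, 200)])
    X = np.vstack([real, decor])
    y = np.array([1] * 200 + [0] * 200)

    # No one writes a rule for every case; the model infers the boundary.
    model = LogisticRegression().fit(X, y)

    print(model.predict([[0.8, 1.2]]))  # regulation-sized, roadside -> [1]
    print(model.predict([[0.2, 9.0]]))  # small and far from the road -> [0]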
Think of training a dog. When you teach a dog to sit, does it hear the word ‘sit’, understand its meaning, and act accordingly? No. The dog recognizes the word, but it also reads the context around the command: the tone of voice used, a light push toward sitting, even facial expressions. All of that factors into understanding the simple meaning of a simple word.
If it is that hard to explain how a dog responds to the command to sit, and there is that much to consider in a simple, common hand gesture, how much harder will it be to get an AI to explain the layers of symbolism in Dante? The answer: virtually impossible.
Fortunately, we don’t need these programs to do the impossible; we just need them to do a little better than the dog. The truth is, even that will be hard enough. Hard, but doable. The AI will need to be shown many different symbols, over and over, before it finally ‘learns’ how to recognize them and no longer needs to be trained.
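As a rough picture of ‘train until it learns’, here’s a sketch of a training loop that stops once the model stops improving on symbols it has never seen before. Again, the data is synthetic and the model generic; this mimics the shape of the process, not any particular system.

    import warnings
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    warnings.filterwarnings("ignore")   # each one-iteration fit warns; ignore it

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 10))      # 500 fake "symbols", 10 features each
    y = X[:, :4].argmax(axis=1)         # 4 symbol classes, derived from features

    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=1)

    # warm_start=True makes each .fit() call continue from the previous weights.
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1,
                          warm_start=True, random_state=1)
    best, stale = 0.0, 0
    for epoch in range(200):
        model.fit(X_tr, y_tr)               # one more pass over the training set
        acc = model.score(X_val, y_val)     # accuracy on symbols it hasn't seen
        if acc > best:
            best, stale = acc, 0
        else:
            stale += 1
        if stale >= 10:                     # no longer improving: it has "learned"
            break

    print(f"stopped after epoch {epoch}, held-out accuracy {best:.2f}")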
New methods of doing this very thing are being tested right now. What we hope is that the programmers involved understand the complexity of these systems. Whether they keep that complexity in mind is the difference between teaching these programs to control us by making decisions for us, and programming them to learn and to teach so that they help us make better decisions for ourselves.
What’s your data worth? www.tartle.co
