Bill Buchanan - Does AI Lie? ASecuritySite Podcast

We are human and, like it or not, we lie. Why? Because we might not want to admit to some truth, or because we want to seem knowledgeable. It is a human attribute, and it defines us. Overall, our intelligence weighs up the cost and reward and decides whether we should tell the truth or not. Ask a child who ate a biscuit, and there’s a chance they will lie, because they do not want the punishment or do not want to tell tales on their friend. And so, as we go through our lives, we all lie; sometimes it gets us into trouble, sometimes it saves us from punishment, and sometimes it makes us look smart.
Overall, lying is a weakness of our character, but, at other times, it is our intelligence showing through and making good guesses. At the core of this is often trust: someone who lies too much becomes untrustworthy, and someone who lies maliciously about another person taints their own character. Indeed, one of the least-liked human attributes is lying about someone else. But what about machines: can they lie?
A machine lying is a little like you being asked, “Who won the match between Manchester United and Grimsby Town?” If you don’t know the answer but want to look smart, you might “lie” and say that Manchester United won, as they are the most likely winner. If they didn’t win, you might be called a liar, but in most cases, you will seem knowledgeable.
And so, there’s a dilemma in the use of LLMs (Large Language Models): what happens when the AI doesn’t know the answer to something because it has never learnt it? While it may know the capital of Germany, it is unlikely to know the town you visited last Tuesday. Faced with this, an LLM simply takes a guess based on probabilities. If I know that a person lives in Edinburgh, the most probable city for them to travel to is Glasgow, with London next, as the data will show that, for travel, Edinburgh is most linked to Glasgow and then to London.
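To make that concrete, here is a minimal sketch (in Python, with an invented, toy probability table; a real LLM works over tokens and vastly larger distributions) of how a purely probabilistic guess picks the most likely answer, whether or not it is true:

```python
# A toy "most probable continuation" guess. The probabilities below are
# invented for illustration; a real LLM derives them from its training data.
travel_probs = {
    "Glasgow": 0.55,    # assumed: Edinburgh is most strongly linked to Glasgow
    "London": 0.30,     # ... and then to London
    "Aberdeen": 0.10,
    "Inverness": 0.05,
}

def guess_city(probs: dict) -> str:
    """Return the highest-probability answer: a confident guess, not a fact."""
    return max(probs, key=probs.get)

print(guess_city(travel_probs))  # Glasgow (plausible, but never verified)
```

The point is that the model never checks its answer against reality; it only ranks plausibility, which is exactly the “Manchester United probably won” guess above.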
In a previous article, I outlined how ChatGPT provided some false statements about me, including that I invented the hypervisor and that I was a Fellow of the Royal Society of Edinburgh (RSE).
In human terms, we would define an untruth as a lie. But a machine is just weighing up probabilities, and it has little concept of the veracity of the data it has received. For my RSE award, it perhaps looked at my profile and computed that there was a high probability that I would hold an RSE Fellowship, based on my being a Professor in Scotland, having an OBE, and having an academic publishing record.
But if a newspaper published false statements about someone, you might consider suing it, or at least asking for an apology. What about machines? What happens when they state an untruth?
And so, ChatGPT, created by OpenAI, could become one of the first pieces of software to stand trial over the way it collects, uses and protects its data. For this, the Washington Post reports that the FTC (Federal Trade Commission) has initiated a wide-ranging set of questions about its LLM (Large Language Model) [here].
