Generative AI: More Human Than Human...? Content, Robots, and Rock & Roll

Generative AI tools like ChatGPT mimic the processes of the human brain, but at a far less sophisticated and dynamic level than the real thing. In this article, we explore why people personify these tools and why doing so may be harmful. Listen to the podcast to find out about:

• How generative AI tools like ChatGPT work (at a very high level; see the toy sketch after this list)

• How pattern recognition in the human mind leads to the personification of AI tools

• The potentially negative implications of personifying digital tools like ChatGPT
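To make that first bullet concrete before you listen: here's a minimal, hypothetical sketch of the core idea behind text generation. It's a toy bigram model with an invented corpus and made-up function names, nothing like ChatGPT's actual neural-network architecture, but it shows the same basic move at miniature scale: learn statistical patterns from text, then continue them.

```python
import random
from collections import defaultdict

# Toy next-word predictor. Real tools like ChatGPT use large neural
# networks trained on vast corpora, but the core move is similar:
# learn which words tend to follow which, then continue the pattern.

corpus = (
    "the band played loud and the crowd sang along "
    "the band played fast and the crowd roared"
).split()

# Record which words follow each word in the corpus (bigram statistics).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Continue a prompt by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no learned pattern for this word; stop generating
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the band played loud and the crowd roared"
```

There's no understanding or feeling anywhere in that loop, only pattern continuation. Yet even at this toy scale the output can sound human, and that's exactly the kind of pattern our brains are primed to seize on.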

Finding Lemmy On My Shed
Several years ago, I noticed something curious on the side of my shed. The pattern of chipped and worn-away paint on the corner of one wall bore a striking resemblance to heavy metal icon Lemmy Kilmister. That's weird, right?

Logically, I knew that my shed, a repurposed 100-year-old chicken coop on my family's farm, didn't really contain the essence of the deceased lead singer of one of the greatest metal bands: Motörhead.

That didn't stop me from giving a nod to ol' Lemmy as I rode by on the lawn mower or walked past on my way to retrieve something from the shed.



When that part of the shed needed maintenance, I gave more than a moment's thought to how I could complete the repairs and still preserve "Shed Lemmy."

Recognizing Familiar Patterns
The human brain has evolved to recognize familiar patterns in random stimuli. The phenomenon is called pareidolia, and it often takes the form of seeing faces in headlights, power outlets, burnt toast, and even the corner of a shed.

How does this phenomenon of pattern recognition apply to ChatGPT and other generative AI tools that are taking the world by storm in 2023? It applies because people are recognizing patterns of humanity in these AI tools. Is that a good thing? Let's explore.

Being Kind to Robots
A recent post I saw on LinkedIn described how the poster uses words like "please" and "thank you" when interacting with generative AI tools. The reason given for this politeness was that decorum and humanity are important in every interaction we have. If we dehumanize our interactions with AI, how will we treat the people we communicate with?

While I understand the intent and the sentiment of treating our fellow humans with dignity and respect, there's one significant flaw in that reasoning: AI tools aren't human, and at this point, they don't even come close.

 

See the complete post with every detail at superspacerobot.com.

 


This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit superspacerobot.substack.com
