Couchfish: The Stochastic Parrot Guide To Luang Prabang


A month or so ago, my friend James sent me a link for a travel guide to Cambodia. James has a particular talent for sending me stuff that ruins my day, and this one, well, it was a doozy.
It was one of those typical “how to” travel blog pieces, with blow-by-blow instructions on how to do something. In this particular case, it explained how one can travel from Siem Reap to Phnom Penh by train. Over three thousand words long, it had everything—it even detailed the restaurant and drinks car.
There was, though, one problem. There is no train—and never has been—from Siem Reap to Phnom Penh.
“You want a what to where? I’m telling you there’s no train man.” Photo: Mark Ord.
I wondered, why would anyone write this? I could see no useful reason to. The more I read it though, the more I thought, was this the pointy end of the Artificial Intelligence (AI) stick? I emailed the site and asked after the reasoning behind the piece, but never heard back. I ran some of the text through GPTZero, a tool (yes, an AI tool) designed to detect AI-written text, and it said:
“Your text is likely to be written entirely by AI.”
AI has been flavour of the month for a while now. It was a few years ago I first saw it in action, and it wasn’t great. Even then though, with improvement inevitable, its potential was clear. I remember writing at the time something about it being the death knell for low-end copywriters. Late last year, ChatGPT opened up its playground for free. It is the brainchild of OpenAI, a startup which includes Elon Musk and Peter Thiel—neither renowned for putting humanity first—among its funders. The opening (now partly rescinded), brought the tool to the masses, and was no doubt the first step towards the funders’ inevitable mega-payoff.
Let’s get ChatGPT to write a travel guide for Luang Prabang.
Social media filled with screenshots of the tool’s efforts to “answer” questions. Many (not only from ChatGPT) veered from bad and iffy to awful and offensive, yet others were eerily good—including poetry, though not poems about athlete’s foot. To get decent answers, you needed the right questions, asked in the right way, with the right degree of post-output massaging. In no short time stories ran by the dozen—sometimes of dubious quality—pointing out that AI often displayed a racist and/or sexist tilt, with a tendency to surface hate speech (which OpenAI pays Kenyans $2 per hour to address) and disinformation. This is all true, but you could argue that if it aims to mimic humanity, this is sadly it doing its job.
Let’s return to the train example for a sec.
With the right collection of keywords, a bit of programming skill, and a dose of patience, AI could write a travel guide. I’ve tried to illustrate this with the (unedited) screenshots in this story. As you can see, AI can already manage a good bit of the process. The finished product isn’t great—it is often boilerplate bare bones, with a few factual errors—but as a collection of lists, it could be far worse. With this text generated, all I need do is dump it into WordPress and I’m done. I’ll do another city over lunch. In that time, AI may well have improved yet again.
Note the different results when I complicate the question about Saffron Hostel.
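To make that “bit of programming skill” concrete, here is a rough sketch of what such a churn-it-out pipeline might look like. The prompt template and function names are my own invention, not anything from the screenshots; the only real piece is OpenAI’s public chat completions endpoint, which needs a paid API key.

```python
import os

import requests


def build_prompt(city):
    """Assemble a generic travel-guide prompt (a hypothetical template)."""
    return (
        f"Write a travel guide for {city}. Include sections on where to "
        "stay, where to eat, and the top sights, as bulleted lists."
    )


def generate_guide(city, api_key):
    """Send the prompt to OpenAI's chat completions API and return the text."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": build_prompt(city)}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        # One call per city; looping over a list of cities is the
        # "another city over lunch" part.
        print(generate_guide("Luang Prabang", key))
```

Swap the city name in a loop and you have a “guidebook” series by dinner—whether any of it is true, as with the Siem Reap train, is another matter.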
How will it improve? To answer this, you need to look at how AI works. Boiled down, it crawls a bazillion websites then regurgitates the information it distills. This means it is only as good as the information it has devoured. If it has never crawled information about the mating of the Mongolian Butterfly, it can’t tell you anything about it. As far as I know, there is no such thing as this specifically named butterfly.
Now for the important bit. This crawling of information sources took place without any consent from the creators. Nobody asked Lonely Planet, Getty, or other creators for permission, and it should come as no surprise that courthouse queues are growing. It boils down to consent.
