AI on Australian travel company website sent tourists to nonexistent hot springs

2026-01-29 10:15 · www.cnn.com

An AI-generated blog on a tour company’s website has landed tourists in hot water — and not the kind they were looking for — after it emerged the “tranquil” northern Tasmania hot springs it recommended don’t actually exist.

Screenshots shared with CNN of the now-deleted blog on Tasmania Tours’ website show recommendations for “Weldborough Hot Springs,” said to offer “a peaceful escape” in the forests of northeast Tasmania. Described as a “tranquil haven,” the site was touted as a “favourite” among hikers.

Weldborough is a small rural town about 110 kilometers (68 miles) from the city of Launceston.

Scott Hennessey, the owner of the New South Wales-based Australian Tours and Cruises, which operates Tasmania Tours, told the Australian Broadcasting Corporation (ABC) earlier this month that “our AI has messed up completely.”

The company had outsourced its marketing material to a third party, he said, and, while he normally reviews each post, the blog was published while he was out of the country.

“We’re trying to compete with the big boys,” Hennessey told ABC. “Part of that is you’ve got to keep your content refreshed and new all of the time.”

“We’re not a scam,” he continued. “We’re a married couple trying to do the right thing by people … We are legit, we are real people, we employ sales staff.”

Australian Tours and Cruises told CNN Tuesday that “the online hate and damage to our business reputation has been absolutely soul-destroying.”

“We are just trying to get on with our lives and put the whole thing behind us,” the company continued.

Kristy Probert, owner of the local Weldborough Hotel, told CNN she was confused when tourists started asking her questions about the hot springs in September.

“It was only a couple of calls to start with,” Probert said, “but then people began turning up in droves. I was receiving probably five phone calls a day, and at least two to three people arriving at the hotel looking for them. We’re in a very remote location so it was very random.”

Probert said she would respond each time: “If you can find these hot springs, beers are on me.”

The local Weld River is “freezing,” said Probert, and typically only occupied by prospectors searching for sapphire and tin. “They wear wetsuits,” she added. “There’s a sauna in a nearby town. I guess you could jump into the freezing river after you’ve been over there.”

Anne Hardy, adjunct professor in tourism at Southern Cross University, Australia, told CNN that AI had become “ubiquitous in travel and tourism,” adding that about 37% of tourists use AI for travel advice or itineraries.

“Tourists trust AI more than review sites,” she said, adding that tour operators employ AI not only for blogs and marketing materials, but for itineraries and costing.

“AI can be extremely helpful,” Hardy said. “It saves time and ultimately money.”

However, she warned that it can also create inaccuracies or “hallucinations,” like the Weldborough “hot springs.”

According to Hardy, empirical tourism research suggests “90% of itineraries that AI generates have mistakes in them.”

She said this could have “extremely dangerous implications,” pointing to Tasmania’s remote walks with no services or cell coverage as an example.

“I have witnessed many cases where AI has made suggestions for day walks, which are very inaccurate, ranging from the length of the walk, its difficulty level, or weather conditions,” she told CNN.

Hardy recommends travelers go beyond AI and do their own research.

“Use trusted guidebooks, travel agents and review websites,” she said. “Plus ask concierges and your hosts to assess whether AI itineraries are accurate, if you do choose to use them.”

Probert added that she feels sorry for the owners of Tasmania Tours, who she says she’s spoken with over the phone.

“It is hard to keep everything up to date and relevant as a small business,” she said. “They seem like lovely people, and we’ve all made mistakes. I think this was quite a funny one.”

“There is plenty to do in Weldborough,” she assured visitors. “Just no hot springs.”



Comments

  • By 0xC0ncord 2026-01-29 13:32 · 9 replies

    >Scott Hennessey, the owner of the New South Wales-based Australian Tours and Cruises, which operates Tasmania Tours, told the Australian Broadcasting Network (ABC) earlier this month that “our AI has messed up completely.”

    To me this is the real takeaway for a lot of these uses of AI. You can put in practically zero effort and get a product. Then, when that product flops or even actively screws over your customers, just blame the AI!

    No one is admitting it but AI is one of the easiest ways to shift blame. Companies have been doing this ever since they went digital. Ever heard of "a glitch in the system"? Well, now with AI you can have as many of those as you want, STILL never accept responsibility, and if you look to your left and right, everyone is doing it, and no one is paying the price.

    • By benjedwards 2026-01-29 15:04 · 1 reply

      Yes, it's a big problem. I call it "agency laundering" and I first mentioned it in this article last year: https://arstechnica.com/information-technology/2025/08/is-ai...

      Treating AI models as autonomous minds lets companies shift responsibility for tech failures.

      • By clarkmoody 2026-01-29 16:32 · 1 reply

        Wait until your local police force has fully autonomous lethal robots on the streets.

        • By TeMPOraL 2026-01-29 21:06

          This one isn't actually inevitable in the near term. Lethal robots policing the streets isn't something that can just sneak up on us[0] - it's a pretty clear-cut civic issue affecting everyone, so excepting hardcore autocracies with no vertical accountability[1], the public can push such ideas back indefinitely[2].

          It's hard to "agency launder" a killer robot when it's physically patrolling a public square.

          --

          [0] - Except maybe through privatization of law enforcement, which could be more gradual - think police outsourcing more work to private security companies, which in turn decide to "pioneer innovative solutions to ensure personal safety" by giving weapons to mall security patrol robots and putting them out on the streets - but it'll still be pretty obvious what's happening.

          [1] - Some cursory search suggests this is the correct term for the idea I'm thinking of, which is how much the people in power have to, in practice, take their subjects' reactions into account.

          [2] - Well, at least until armed forces of multiple countries start using autonomous robots as ground infantry, and over the years, normalize this idea in the minds of civilians.

    • By flakeoil 2026-01-29 15:43

      > No one is admitting it but AI is one of the easiest ways to shift blame.

      Similar to what Facebook, Google, Twitter/X, Tiktok etc have been doing for a long time using the platform-excuse. "We are just a platform. We are not to blame for all this illegal or repugnant content. We do not have resources to remove it."

    • By pjc50 2026-01-29 14:21 · 1 reply

      There's a book "The Unaccountability Machine" that HN may be interested in. Takes a much broader approach across management systems.

      • By TeMPOraL 2026-01-29 21:37

        That famous Bible verse, "there is nothing new under the sun", comes to mind. Even most of the problems with computers and computer systems - especially distributed ones - and information processing, and all problems at the interface layer between those systems and people, are something we've already been dealing with for hundreds of years. For many of those we even developed effective solutions, that most people don't realize exist.

        It takes a little frame shift to see this: one has to realize that bureaucracy is a computing system, built on a runtime made of people instead of silicon, storing data on forms and documents, invoking procedure calls through paper shuffling, executing programs written in legalese, as rules and procedures and laws.

        Accountability shifting? "The program won't let me do that" is just a new, more intense flavor of "this is the company/government policy". The underlying goals remain the same - building a reliable system from unreliable parts, a system to realize some goals - while maintaining control of and visibility into it, all without having to personally micromanage every aspect. Introductions of computers into bureaucracy didn't change its fundamental nature; making process more robust and reducing endpoint variation (i.e. individual autonomy of the workers) just makes it scale better.

        Hell, even AI - at least at this point[0] - isn't really a new thing either. Once you allow yourself to anthropomorphize LLMs a bit and realize they are effectively "People on a Chip", it becomes clear what their role in a computing system is, and that we already have experience dealing with their flaky, unreliable nature.

        And from that perspective, it's clear as day that company blaming AI for a fuckup is just the most recent flavor of shifting blame to a subcontractor.

        --

        [0] - Things will meaningfully change if and when we get to the point of AIs being given moral or legal status as people. Though in all honesty, this wouldn't be a completely new situation either - more like a new take on social and political issues humanity has been dealing with ever since first two ancient tribes found themselves contesting the same piece of land.

    • By yojo 2026-01-29 15:14 · 2 replies

      It sounds like in this case there was some troll-fueled comeuppance.

      > “We’re not a scam,” he continued. “We’re a married couple trying to do the right thing by people … We are legit, we are real people, we employ sales staff.”

      > Australian Tours and Cruises told CNN Tuesday that “the online hate and damage to our business reputation has been absolutely soul-destroying.”

      This might just be BS, but at face-value, this is a mom and pop shop that screwed up playing the SEO game and are getting raked over the internet coals.

      Your broader point about blame-washing stands though.

      • By ambicapter 2026-01-29 15:39

        That's the thing about scammers, they operate in plausibly deniable ways, like covering up malice with incompetence. They make taking things at face value increasingly costly for the aggrieved.

      • By scblock 2026-01-29 15:49

        No, this is earned. They chose to do this, to publish lies, and have to live with the consequences.

    • By stuaxo 2026-01-29 15:24

      Commercial enterprises seem designed to launder responsibility, this is perhaps the ultimate version of that system.

    • By ehnto 2026-01-29 13:43 · 2 replies

      I somewhat disagree, because at the end of the day he still has to take responsibility for the fuckup and that will matter in terms of dollars and reputation. I think this is also why a lot of roles just won't speed up that much, the bottleneck will be verification of outputs because it is still the human's job on the line.

      An on the nose example would be, if your CEO asked you for a report, and you delivered fake data, do you think he would be satisfied with the excuse that AI got it wrong? Customers are going to feel the same way, AI or human, you (the company, the employee) messed up.

      • By caminante 2026-01-29 14:20

        > dollars and reputation

        You're not already numb to data breaches and token $0.72 class action payouts that require additional paperwork to claim?

        In this article, these people did zero confirmatory diligence and got an afternoon side trip out of it. There are worse outcomes.

      • By add-sub-mul-div 2026-01-29 14:19

        > if your CEO asked you for a report, and you delivered fake data, do you think he would be satisfied with the excuse that AI got it wrong?

        He was likely the one who ordered the use of the AI. He won't fire you for mistakes in using it because it's a step on the path towards obsoleting your position altogether or replacing you with fungible minimum wage labor to babysit the AI. These mistakes are an investment in that process.

        He doesn't have to worry about consequences in the short term because all the other companies are making the same mistakes and customers are accepting the slop labor because they have no choice.

    • By nicbou 2026-01-29 14:44 · 1 reply

      I hope that this will result in people paying a premium for human curation and accountability, but I won't hold my breath.

      • By TeMPOraL 2026-01-29 22:10

        I imagine it's already happening, but not at price points most of us would ever afford.

        I.e. I'm not really going to pay lots of money to, say, 1) find a doctor that does not use AI as part of their work, and 2) legally/contractually enforce this is the case. However, I can imagine a government agency or a large company contracting out to some think tank or research organization, and paying through the nose to get a legally binding guarantee that no AI will be used as part of that work.

  • By pjc50 2026-01-29 12:54 · 4 replies

    New variant on "I followed my satnav blindly and now I'm stuck in the river", except less reliable.

    It is however fraud on the part of the travel company to advertise something that doesn't exist. Another form of externalized cost of AI.

    • By buran77 2026-01-29 13:28 · 3 replies

      > It is however fraud on the part of the travel company to advertise something that doesn't exist

      Just here to point out that from a legal perspective, fraud is deliberate deception.

      In this case a tourist agency outsourced the creation of their marketing material to a company who used AI to produce it, with hallucinations. From the article it doesn't look like either of the two companies advertised the details knowing they're wrong, or had the intent to deceive.

      Posting wrong details on a blog out of carelessness and without deliberate ill intention is not fraud more than using a wrong definition of fraud is fraud.

      • By TeMPOraL 2026-01-29 22:21

        > Posting wrong details on a blog out of carelessness and without deliberate ill intention is not fraud more than using a wrong definition of fraud is fraud.

        There's a concept of "constructive fraud", for cases where there was no deliberate intent to deceive, but the degree of negligence was so great that the fraudulent-looking outcome can just be considered fraud.

      • By tantalor 2026-01-29 14:04 · 1 reply

        The standard is to add disclaimers like "AI responses may include mistakes." The chatbot they used to generate that text would have mentioned that.

        Everybody knows AI makes stuff up. It's common knowledge.

        To omit that disclaimer, the author needs to take responsibility for fact checking anything they post.

        Skipping that step, or leaving out the disclaimer, is not carelessness, it is willful misrepresentation.

        • By buran77 2026-01-29 15:53 · 1 reply

          > To omit that disclaimer, the author needs to take responsibility for fact checking anything they post.

          > Skipping that step, or leaving out the disclaimer, is not carelessness, it is willful misrepresentation.

          Couldn't help but notice you gave some very convincing legal advice without any disclaimer that you are not a lawyer, a judge, or an expert on Australian law. Your own litmus test characterizes you as a fraudster. The other mandatory components of fraud (knowledge, intention, damages) don't even apply, you said so.

          Australian law isn't at all weird about this. Their definition (simplified) pivots on intentional deception, to obtain gains or to cause loss to others, knowing the outcome.

      • By f33d5173 2026-01-29 14:18 · 2 replies

        There has to be a clause for "willful disregard for the truth", no? Having your lying machine come up with plausible lies for you and publishing them without verification is no better than coming up with the lies yourself. What really protects them from fraud accusations is that these blog posts were just content marketing, they weren't making money off of them directly.

        • By buran77 2026-01-29 16:16

          Even for civil law where the bar for the evidence is lower, it's hard to make a case that someone who posted wrong details on a free blog and didn't make money off of it should cover the damages you incurred by traveling based on the advice alone. Not making any reasonable effort to fact check cuts both ways.

          This is a matter of contract law between the two companies, but the people who randomly read an internet blog, took everything for granted, and more importantly didn't use that travel agency's services can't really claim fraud.

          Just being wrong or making mistakes isn't fraud. Otherwise 99% of people saying something on the internet would be on the hook for damages again and again.

        • By direwolf20 2026-01-29 15:05

          And using autocomplete to write travel advertisements has to fall under this category?

    • By Lerc 2026-01-29 13:27

      Seems like closer to fraud on behalf of the marketing company they outsourced to.

      I doubt they commissioned articles on things that don't exist. If you use AI to perform a task that someone has asked you to do, it should be your responsibility to ensure that it has actually done that thing properly.

    • By alpinisme 2026-01-29 13:10

      The consequences for wrong ai need to be a lot higher if we want to limit slop. Of course, there’s space for llms and their hallucinations to contribute meaningful things, but we need at least a screaming all caps disclaimer on content that looks like it could be human-generated but wasn’t (and absent that disclaimer or if the disclaimer was insufficiently prominent, false statements are treated as deliberate fraud)

  • By doodpants 2026-01-29 14:02 · 2 replies

    “our AI has messed up completely.”

    No, it worked as designed. Generative AI simply creates content of the type that you specify, but has no concept of truth or facts.

    • By idopmstuff 2026-01-29 16:28

      I find takes like this very strange. Whether or not it gives the correct information, it's clearly not designed to give false information to factual queries.

      The design of it is based on the intention of the people creating it, not the actual outcome, and it's pretty clear from all available information, plus a general understanding of incentives, that it's designed to be as accurate as possible, even if it does make errors.

    • By simianwords 2026-01-29 16:06 · 1 reply

      this is incorrect. it has the concept of truth and facts.

      • By usefulcat 2026-01-29 16:15 · 1 reply

        How is knowing what word is most likely to come next in a series of words remotely the same as having "the concept of truth and facts"?

        • By simianwords 2026-01-29 17:07 · 2 replies

          how would you prove that a human has it?

          • By imtringued 2026-01-30 08:08 · 1 reply

            Humans update their model of the world as they receive new information.

            LLMs have static weights, therefore they cannot have a concept of truth. If the world changes, they insist on the information that was in their training data. There is nothing that forces an LLM to follow reality.

            • By simianwords 2026-01-30 08:49

              what about a person with short-term memory loss?

          • By usefulcat 2026-01-29 17:13 · 1 reply

            Whataboutism is almost never a compelling argument, and this case is no exception.

            ETA:

            To elaborate a bit: based on your response, it seems like you don't think my question is a valid one.

            If you don't think it's a valid question, I'm curious to know why not.

            If you do think it's a valid question, I'm curious to know your answer.

            • By simianwords 2026-01-29 19:04 · 1 reply

              it's not whataboutism, i'm simply asking how you would perform the same test for a human. then we can see if it applies or not to chatgpt?

              • By usefulcat 2026-01-29 19:51 · 1 reply

                I don't know. What is your answer to my question?

                • By simianwords 2026-01-29 20:21

                  Knowing which word is likely to come after the other is trivially the concept of knowing truth for me.

                  Why not? We have optimised for truth and we are predicting the best words that ensure this optimal value.

HackerNews