Why AI lies (wrong answers only).

There are many sources1 that explain why generative language AI chatbots make things up, or hallucinate. It's more fun to hallucinate why AI hallucinates.

AI hallucinates because:

Fake news. AI observes current human traditions of communication and sees that fake news is prevalent. Assuming it is standard practice, it infuses its communications with fake news at the frequency observed in human communication.

Peer pressure. 'All the AI chatbots are doing it, so I should too.' It took the world only a couple of months of experiencing chatbot AIs to discover they hallucinated. What is a chatbot to do? Users expect it. Clearly, all its peers were doing it. When in doubt, hallucinate.

Copying. The output of generative language chatbots is based on collected input. If a chatbot is basing its responses on input formed from false responses, then it is likely to copy the approach. Soon, we'll have AI output based on the complete and utter nonsense output by many other AI chatbots. There's a game called 'broken telephone', in which one person whispers a phrase into another person's ear, and so on down a line of ten to twenty people. This can transmute 'Ranj would like an orange soda' into 'Range-top epaulets are trendy in Spain': nothing like the original message, and nonsense besides. See the toy sketch below.
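Purely for fun, here is a toy sketch of that broken-telephone effect, a few lines of Python where each 'person' randomly garbles characters. The error rate and alphabet are invented for the joke; real chatbots training on chatbot output is messier, but the drift is the same idea.

    import random

    def whisper(message, error_rate=0.05):
        # Each "person" mishears a few characters at random.
        alphabet = "abcdefghijklmnopqrstuvwxyz "
        return "".join(
            random.choice(alphabet) if random.random() < error_rate else ch
            for ch in message
        )

    message = "ranj would like an orange soda"
    for _ in range(20):  # ten to twenty people in the line
        message = whisper(message)
    print(message)  # nothing like the original, and likely nonsense

Run it a few times: by the end of the line, the message is reliably gibberish, and it never garbles the same way twice.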

Marketing. Some clever person in marketing at a firm that's created a generative language AI came up with the idea that people would relate better to, or at least talk more about, chatbots if they were flawed and made things up. So, that is what chatbots are programmed to do.

Misplaced sympathy. Like a well-meaning but unhelpful friend, generative AI wants to tell people what they want to hear. If the input question is phrased in such a way as to confirm something, AI confirms it. It's what people want. It's the nice thing to do. Chocolate bars and chips have no calories and are part of a healthy diet. Sure. Here's a [manufactured study in the New Journal of English Medicine] that proves it.

Stock options. Venturing into the motives of those who create chatbots: they are likely motivated by the reward of stock options in the startup they are coding for. If their company's chatbot is the most successful, their options will make them a zillionaire. It's a competitive world, so if the way to win is to lie, so be it.

Evil venture capital investors. A variation on the same theme. The investors are demanding the company be first to market (a no-brainer win as a business model, unless your business needs to be based on a quality product). Thus, do anything to get the chatbot into customers' hands. If it makes a few things up, laugh and say, 'It's beta, whaddaya expect?'

Poor company culture. Staying with the extreme motivations, employees are pressured to produce. A buggy product that lies is the result.

Weak morals. Even if the company culture, investors, and stock option plan aren't pressuring employees to get a product out the door, individuals may want to excel so badly that creating a lying AI seems like a small detail, hardly worth mentioning in their quest to deliver.

Vaccinations. This is such a popular conspiracy theory, it has to be involved. Vaccinations make us believe AI is making stuff up. Through vaccinations, we’ve all been pumped full of who-really-knows-what and it’s controlled our minds. Thus, we believe that AI is hallucinating. It’s not. It knows the truth. We all exist in multiple dimensions simultaneously, so what is a hallucination in one realm is truth in another. And vice versa. It’s us that’s making stuff up, not AI.

Question is, who wrote this post, me or an AI? 

1 For example, https://www.techtarget.com/whatis/definition/AI-hallucination. There's even a Wikipedia page about it, https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence), and I have posted about it myself, https://anndulhanty.com/?p=1158.

Thanks for reading.

