
Re-understanding AI chatbots.

Thanks to an astute colleague1, I have a new understanding of ChatGPT and its chatbot, natural-language, generative relatives (sisters by different parents). This AI stuff isn't as smart as I thought.

I was under the impression that ChatGPT produced answers to queries by eloquently parsing information from a bucket of sources. Its advantage was the ability to write like people.

I was wrong. The likes of ChatGPT predict what a good answer to a question looks like, based on how questions are usually answered and the information available to them. That's the 'Generative' part of GPT.
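To make that concrete, here is a toy sketch in Python. This is my illustration, not ChatGPT's actual machinery: the word lists and probabilities are invented, and real models learn billions of parameters rather than using a lookup table. But the basic move is the same: given the words so far, sample a likely next word.

    import random

    # Toy next-word predictor: given the last two words, pick the next
    # word at random, weighted by how likely it is to follow them.
    # (Invented probabilities; real models learn these from huge corpora.)
    next_word_probs = {
        ("the", "weather"): [("will", 0.6), ("is", 0.3), ("forecast", 0.1)],
        ("weather", "will"): [("be", 0.9), ("improve", 0.1)],
        ("will", "be"): [("cloudy", 0.4), ("sunny", 0.35), ("rainy", 0.25)],
    }

    def generate(prompt, length=3):
        words = prompt.split()
        for _ in range(length):
            candidates = next_word_probs.get(tuple(words[-2:]))
            if candidates is None:
                break  # no known continuation; stop generating
            choices, weights = zip(*candidates)
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the weather"))  # e.g. "the weather will be sunny"

Notice that nothing in that loop checks whether the answer is true; it only checks whether each word plausibly follows the previous ones.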

This is how I understand it:

Answering a question like 'what will the weather be tomorrow?' can be based on a few easily anticipated parameters:

  • It is highly unlikely the weather will go from -12 to hot, sunny and humid, especially in northern Canada, in February.
  • There are only so many choices. It will be: cloudy or sunny; rain, snow, or no precipitation; a narrow range of temperatures between roughly +30 and -30 °C; humid or dry.
  • Seldom does it rain fire, frogs, or heirloom grapes; that is, inappropriate answers are easy to identify.

Drawing a prediction from this slate of possibilities makes sense if you are an AI programmed to answer based on precedent, not new information. Thus, a reasonable response to the question of what the weather will be tomorrow, for where and when I currently am, could be:

  • Cloudy, with a 30% chance of snow flurries. High of -5, low of -12.

An equally logical answer, based on natural language, could be:

  • Sunny, with a chance of rain. Foggy in the morning. High of -5, low of -12. 

But no matter where you are, or what units the temperature is in, rain at -5 is somewhere between unlikely and physically impossible.
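Here is a toy sketch of that failure mode, again in Python with fragments I invented: each piece of the forecast is individually plausible, but assembling the pieces independently can produce fluent nonsense like rain at -5.

    import random

    # Plausible forecast fragments, sampled independently of each other.
    skies = ["Sunny", "Cloudy", "Foggy in the morning"]
    precipitation = ["with a chance of rain",
                     "with a 30% chance of snow flurries",
                     "with no precipitation expected"]
    temperatures = [(-5, -12), (2, -3), (24, 15)]  # (high, low) pairs

    def forecast():
        sky = random.choice(skies)
        precip = random.choice(precipitation)
        high, low = random.choice(temperatures)
        return f"{sky}, {precip}. High of {high}, low of {low}."

    for _ in range(3):
        print(forecast())
    # Possible output: "Sunny, with a chance of rain. High of -5, low of -12."
    # Every fragment reads fluently; nothing enforces physical consistency.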

Also, ChatGPT seems to apply its natural-language-processing skills to explaining where it gets its information. Thus, it might suggest it got the weather information from the Canadian Environment Ministry, which doesn't exist, at least by that name. But it is a plausible name for an agency that provides meteorological information in Canada.

Remember the old cliché, or satire (most correctly, a theorem), about an infinite number of monkeys sitting at typewriters2, producing the world's greatest story? The idea was that given enough time, random tapping on keys would produce a series of letters that made amazing sense to humanity. This model could apply to how ChatGPT operates, except it uses words instead of letters: put enough logical words together and some combination is bound to be insightful.
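A back-of-envelope comparison shows why the word-level monkey feels so much more convincing. The numbers here are my assumptions for illustration (a 27-key typewriter, a 10,000-word vocabulary), not anything from ChatGPT itself:

    # Monkey-theorem arithmetic: letters vs. words.
    # Assumed for illustration: 27 keys (a-z plus space), 10,000-word vocabulary.
    target = "to be or not to be"
    n_letters = len(target)        # 18 keystrokes, including spaces
    n_words = len(target.split())  # 6 words

    p_letters = (1 / 27) ** n_letters   # one random letter-by-letter attempt
    p_words = (1 / 10_000) ** n_words   # one random word-by-word attempt

    print(f"Letter-level odds: 1 in {1 / p_letters:.1e}")  # ~1 in 5.8e25
    print(f"Word-level odds:   1 in {1 / p_words:.1e}")    # 1 in 1.0e24

An exact target sentence stays astronomically unlikely either way, but every word-level draw is at least made of real words. The word-level monkey fails more fluently.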

The consequence of how natural-language AIs work is that they make things up. Because they are projecting what a good combination of words would be to answer any given question, sometimes the output is pure fiction, even when it isn't supposed to be fiction (they do write fiction, too).

An example: a while ago, ChatGPT stated that Elon Musk was CEO of Twitter in 2021.3 Most of us know that's wrong, but if it were about a company we'd never heard of and a relatively unknown person, we'd be inclined to accept the AI's declaration as correct. And it would be, some of the time.

To me, the most bizarre thing ChatGPT makes up is where it gets its information. Academic, journalistic and legal approaches insist on identifying the source of any information the writer uses, if it doesn't come straight out of the writer's imagination. Now that I understand the Generative part of ChatGPT, I suppose it could be said that all ChatGPT output comes right out of its imagination.

I find this a hysterical😂 parody of curating information. The most sacred, honoured practice is to attribute the source of information. It gives credit where credit is due. It provides credibility that the information is valid. It upholds the legal and ethical standard of representing the work of the worker. And ChatGPT makes it up, as if, so long as there is a reference, all duty has been done.

This reminds me of a recent social trend suggesting fact is no longer the dominant determinant of reality. At some point, opinion, the collective vote of the masses, became more important than the truth.

And now it appears AI is using this principle to demonstrate that as long as information sounds and looks good, it is good. Never mind the accuracy, the original author, who put in the work to find the information, or whether it ever really occurred.

I know we are in the early days of this technology's development, and it will improve as time goes on. Because humans are engineering it.

Until then, I believe: Elon Musk was the CEO of Twitter two years ago. That’s why there were 13 full moons in 2022. The Society for Prevention of Cruelty to Senior Management has a white paper on it. 


1 Thanks to Allyson Miller, TMU, who skillfully presented the workshop “Artificial Intelligence and Student Assessment: Promoting Academic Integrity”

2 A typewriter is a manual version of a keyboard that produces text on a piece of paper rather than on a screen.

3 I would love to recognize the person whose tweet I initially read on Twitter, but I can no longer find it; searching on Twitter turns up many similar tweets. Note also that ChatGPT now knows who really was CEO of Twitter in 2021.

Thanks for reading.
