Why chatbots sometimes act weird and spout nonsense

(Photo: Andrea De Santis / Unsplash)
Microsoft released a new version of its Bing search engine last week, and unlike an ordinary search engine it includes a chatbot that can answer questions in clear, concise prose.

Since then, people have noticed that some of what the Bing chatbot generates is inaccurate, misleading, and downright weird, prompting fears that it has become sentient, or aware of the world around it.

That is not the case. And to understand why, it is important to know how chatbots really work.

Is the chatbot alive?
No. Let us say that again: No!

In June, a Google engineer, Blake Lemoine, claimed that similar chatbot technology being tested inside Google was sentient. That is false. Chatbots are not conscious and are not intelligent — at least not in the way humans are intelligent.

Why does it seem alive then?
Let us step back. The Bing chatbot is powered by a kind of artificial intelligence called a neural network. That may sound like a computerized brain, but the term is misleading.

A neural network is just a mathematical system that learns skills by analyzing vast amounts of digital data. As a neural network examines thousands of cat photos, for instance, it can learn to recognize a cat.
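
For readers curious to see that idea in code, here is a deliberately tiny Python sketch, with made-up numbers, of a single artificial "neuron" adjusting itself to fit labeled examples. Real neural networks chain together millions or billions of these adjustable pieces, but the learn-by-nudging principle is the same.

```python
import math

# Invented toy data: two numeric "features" per example and a label
# (1 = "cat", 0 = "not cat"). Real systems learn from raw pixels.
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1),
            ((0.1, 0.2), 0), ((0.2, 0.1), 0)]

w1, w2, b = 0.0, 0.0, 0.0   # the neuron's adjustable weights
lr = 0.5                    # how big each nudge is

def predict(x1, x2):
    """Squash a weighted sum into a score between 0 and 1."""
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

for _ in range(2000):                     # look at the data many times
    for (x1, x2), label in examples:
        error = predict(x1, x2) - label   # how wrong was the guess?
        w1 -= lr * error * x1             # nudge each weight so the
        w2 -= lr * error * x2             # error shrinks slightly
        b -= lr * error

print(round(predict(0.85, 0.9), 2))  # close to 1: it now "recognizes" a cat
```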

Most people use neural networks every day. It is the technology that identifies people, pets and other objects in images posted to internet services like Google Photos. It allows Siri and Alexa, the talking voice assistants from Apple and Amazon, to recognize the words you speak. And it is what translates between English and Spanish on services like Google Translate.

Neural networks are very good at mimicking the way humans use language. And that can mislead us into thinking the technology is more powerful than it really is.

How exactly do neural networks mimic human language?
About five years ago, researchers at companies like Google and OpenAI, a San Francisco startup that recently released the popular ChatGPT chatbot, began building neural networks that learned from enormous amounts of digital text, including books, Wikipedia articles, chat logs and all sorts of other stuff posted to the internet.

These neural networks are known as large language models. They are able to use those mounds of data to build what you might call a mathematical map of human language. Using this map, the neural networks can perform many tasks, like writing their own tweets, composing speeches, generating computer programs and, yes, having a conversation.
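
A very rough way to picture that "map" in code: the Python sketch below, using an invented sample sentence, simply counts which word tends to follow which. Large language models learn enormously richer patterns than this, but the underlying move of predicting likely next words from statistics is similar.

```python
from collections import Counter, defaultdict

# Invented sample text; a real model trains on trillions of words.
text = "the cat sat on the mat and the cat slept on the rug".split()

# For each word, count what comes next: a crude "map" of the text.
following = defaultdict(Counter)
for current_word, next_word in zip(text, text[1:]):
    following[current_word][next_word] += 1

# Ask the toy model what most often follows "the".
print(following["the"].most_common(1))   # [('cat', 2)]
```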

These large language models have proved useful. Microsoft offers a tool, Copilot, which is built on a large language model and can suggest the next line of code as computer programmers build software apps, in much the way that autocomplete tools suggest the next word as you type texts or emails.

Other companies offer similar technology that can generate marketing materials, emails and other text. This kind of technology is also known as generative AI.

Now companies are rolling out versions of this that you can chat with?
Exactly. In November, OpenAI released ChatGPT, the first time that the general public got a taste of this. People were amazed — and rightly so.

These chatbots do not chat exactly like a human, but they often seem to. They can also write term papers and poetry and riff on almost any subject thrown their way.

Why do they get stuff wrong?
Because they learn from the internet. Think about how much misinformation and other garbage is on the web.

These systems also do not repeat what is on the internet word for word. Drawing on what they have learned, they produce new text on their own, and when that text strays from the facts, AI researchers call it a “hallucination.”

This is why the chatbots may give you different answers if you ask the same question twice. They will say anything, whether it is based on reality or not.
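
One way to picture why answers vary: at each step the model effectively rolls weighted dice over many candidate next words rather than always picking one fixed answer. Here is a toy Python sketch with invented probabilities; run it twice and the output can differ.

```python
import random

# Invented probabilities for what word might come next; a real model
# computes these for tens of thousands of candidate words at every step.
next_word_probs = {"Paris": 0.6, "France": 0.25, "Europe": 0.1, "Mars": 0.05}

def pick_next_word():
    """Sample one word according to its probability, like weighted dice."""
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return random.choices(words, weights=weights)[0]

# Five samples: mostly sensible choices, but unlikely (and wrong) ones
# such as "Mars" can still appear now and then.
print([pick_next_word() for _ in range(5)])
```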

If chatbots ‘hallucinate,’ does that not make them sentient?
AI researchers love to use terms that make these systems seem human. But hallucinate is just a catchy term for “they make stuff up.”

That sounds creepy and dangerous, but it does not mean the technology is somehow alive or aware of its surroundings. It is just generating text using patterns that it found on the internet. In many cases, it mixes and matches patterns in surprising and disturbing ways. But it is not aware of what it is doing. It cannot reason like humans can.

Can companies not stop the chatbots from acting strange?
They are trying.

With ChatGPT, OpenAI tried controlling the technology’s behavior. As a small group of people privately tested the system, OpenAI asked them to rate its responses. Were they useful? Were they truthful? Then OpenAI used these ratings to hone the system and more carefully define what it would and would not do.
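
The sketch below is a heavily simplified stand-in for that rating step, written in Python with invented responses and scores. Real fine-tuning adjusts the model's internal behavior based on human feedback; this toy version only tallies thumbs-up and thumbs-down ratings and reports which style of answer people preferred.

```python
# Invented tester feedback: +1 means a rater found the reply useful and
# truthful, -1 means they did not.
feedback = [
    ("confident but made-up answer", -1),
    ("careful, sourced answer", +1),
    ("careful, sourced answer", +1),
    ("confident but made-up answer", -1),
]

scores = {}
for response_style, rating in feedback:
    scores[response_style] = scores.get(response_style, 0) + rating

# The style human raters preferred; a real system would be nudged to
# produce more answers like this one.
print(max(scores, key=scores.get))   # 'careful, sourced answer'
```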

But such techniques are not perfect. Scientists today do not know how to build systems that are completely truthful. They can limit the inaccuracies and the weirdness, but they cannot stop them. One way to rein in the odd behavior is to keep the chats short.
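
As a concrete illustration of that last point, here is a minimal Python sketch of capping a conversation's length. The limit and the function name are made up, but a cap along these lines is one way a service might keep its chats short.

```python
# Hypothetical cap on how many back-and-forth exchanges one chat allows.
MAX_EXCHANGES = 6

def can_continue(exchange_count):
    """Return True while the conversation is still under the cap."""
    return exchange_count <= MAX_EXCHANGES

for turn in range(1, 9):
    if can_continue(turn):
        print(f"turn {turn}: the bot answers as usual")
    else:
        print(f"turn {turn}: the bot asks the user to start a new chat")
```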

But chatbots will still spew things that are not true. And as other companies begin deploying these kinds of bots, not everyone will be good about controlling what they can and cannot do.

The bottom line: Do not believe everything a chatbot tells you.


Jordan News