It wasn’t long ago that most people thought of AI as something that existed only in works of fiction. The technology has advanced at a remarkable rate in recent years, and we appear to be on the cusp of an AI-powered revolution that will change the face of countless industries forever.
However, that’s not to say AI is without its problems. Some of the AI systems that have gained enormous popularity in recent months have demonstrated an ability to ‘hallucinate’ and make up facts out of thin air. Why does this happen? Is there any way to stop AI from telling us lies? Let’s find out.
Why Does AI Lie?
When we talk about an AI producing false claims, we’re usually discussing a large language model (LLM), such as OpenAI’s ChatGPT or Google Bard.
AI can lie or hallucinate in different ways. Sometimes, it produces inaccurate or contradictory statements. For example, if you asked an AI to name three European countries, it might say “France, Germany, and China”. Another example would be an AI responding with “Green” when you ask what colour the sky is.
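To make the first example concrete, here is a minimal Python sketch of how a developer might catch this kind of slip automatically by checking a model’s answer against a trusted reference list. Everything here is illustrative: get_model_answer is a hypothetical stand-in for a real LLM API call, and the reference set is deliberately abbreviated.

```python
# Minimal sketch: flag hallucinated items in an LLM's answer by
# checking each one against a trusted reference set.

EUROPEAN_COUNTRIES = {"France", "Germany", "Spain", "Italy", "Poland"}  # abbreviated for the example

def get_model_answer(prompt: str) -> list[str]:
    """Hypothetical stand-in for a real LLM API call."""
    # A hallucinating model might return this for "Name three European countries":
    return ["France", "Germany", "China"]

def flag_hallucinations(items: list[str], reference: set[str]) -> list[str]:
    """Return any items that do not appear in the reference set."""
    return [item for item in items if item not in reference]

answer = get_model_answer("Name three European countries.")
suspect = flag_hallucinations(answer, EUROPEAN_COUNTRIES)
if suspect:
    print(f"Possible hallucination(s): {suspect}")  # -> Possible hallucination(s): ['China']
```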
A famous real-world example came when Google Bard falsely claimed that the James Webb Space Telescope took the very first pictures of a planet outside our solar system – embarrassingly, during a public demonstration of the new software.
Why does this happen? Who is responsible when AI lies? These are difficult questions to answer, and much depends on the particular type of falsehood an AI is producing. Often, a falsehood stems from the wording of a prompt, which may have confused the system. Other times, it stems from how the system was built, whether through human error on the part of the designers or bias built into the system.
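One practical way to test whether a prompt’s wording is the culprit is to ask the same question in several phrasings and check whether the answers agree. The sketch below assumes a hypothetical ask_llm helper standing in for whichever LLM API you use; it illustrates the idea rather than providing a production tool.

```python
# Sketch: probe prompt sensitivity by asking the same question several
# ways and checking whether the model's answers agree. Frequent
# disagreement suggests the wording, not the knowledge, is the problem.

from collections import Counter

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return "Blue"  # dummy response so the sketch runs end to end

PARAPHRASES = [
    "What colour is a clear daytime sky?",
    "During the day, with no clouds, what colour is the sky?",
    "Name the colour of the sky on a clear day.",
]

answers = Counter(ask_llm(p).strip().lower() for p in PARAPHRASES)
top_answer, count = answers.most_common(1)[0]
if count < len(PARAPHRASES):
    print(f"Inconsistent answers across paraphrases: {dict(answers)}")
else:
    print(f"Consistent answer: {top_answer}")
```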
What are the Implications?
For those using AI casually or for fun, these falsehoods and hallucinations can be a source of amusement. Chatbots sometimes respond in bizarre, unexpectedly humorous ways, making them endlessly entertaining for people experimenting with the software in their spare time.
However, AI is increasingly being used in professional contexts, and it looks set to completely change processes across a wide range of industries, from law to medicine and from marketing to journalism.
In these situations, falsehoods can be extremely disruptive, even dangerous. We are putting our trust in this technology, so it’s vital that we can be confident the information AI platforms give us is accurate and truthful.
What is the Solution?
If we want to see true mainstream adoption of AI systems, these falsehoods and hallucinations must be eradicated. How can this be done?
AI can learn and be trained. ChatGPT creator OpenAI has outlined plans to change how it trains its systems. Rather than rewarding the model only for reaching a correct final answer (known as outcome supervision), it will reward each correct step of reasoning taken along the way (known as process supervision). This should help optimise the system’s reasoning process and help stop falsehoods from occurring in the future.
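To see why rewarding each step differs from rewarding only the answer, consider the toy calculation below. It is a deliberately simplified illustration of the idea, not OpenAI’s actual training code: a solution containing one flawed reasoning step still earns full reward under outcome-only scoring, but a lower reward under per-step scoring.

```python
# Toy illustration of outcome supervision versus process supervision
# when scoring a model's worked solution. Each reasoning step is
# marked correct (True) or incorrect (False).

steps = [True, True, False, True]  # step 3 contains a mistake
final_answer_correct = True        # the model still lands on the right answer

# Outcome supervision: one reward based only on the final answer.
outcome_reward = 1.0 if final_answer_correct else 0.0

# Process supervision: reward each correct step, so flawed reasoning
# is penalised even when the final answer happens to be right.
process_reward = sum(1.0 for ok in steps if ok) / len(steps)

print(f"Outcome reward: {outcome_reward}")  # 1.0 -- the mistake goes unpunished
print(f"Process reward: {process_reward}")  # 0.75 -- the bad step lowers the score
```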
Conclusion
AI is one of the most exciting technologies in the world today. However, if we want it to truly transform our lives, developers must find a way to stop AI systems from hallucinating.