
Innovative Algorithm Detects Generative AI Hallucinations


Generative AI, known for producing impressive outputs, has a persistent weakness: it can confidently provide incorrect answers. This issue, commonly referred to as "hallucination," poses serious risks, particularly when users rely on AI for critical information such as medical advice or legal precedents.


A recent investigation by Wired highlighted that AI-powered search engines, such as Perplexity, frequently generate inaccurate answers. This revelation underscores the urgent need for reliable methods to detect and mitigate these errors.


Researchers at the University of Oxford have developed an innovative method to identify "arbitrary and incorrect answers," which they term confabulations. Unlike traditional methods that focus on word similarity, this approach evaluates the semantic meaning of responses to determine their reliability. It works in three steps:


1. Multiple Queries: The chatbot is asked the same question multiple times, e.g., "Where is the Eiffel Tower?"

2. Response Grouping: A separate large language model (LLM) groups the responses by meaning; for example, "Paris" and "France's capital Paris" fall into one group, while "Berlin" forms another.

3. Semantic Entropy Calculation: The algorithm computes the "semantic entropy" of the grouped responses, measuring how much their meanings vary. High semantic entropy (many disagreeing groups) indicates a higher likelihood of confabulation; a minimal sketch of the calculation follows below.
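To make the calculation concrete, here is a minimal Python sketch. It assumes the responses have already been sampled from the chatbot, and it substitutes a simple string normalisation for the LLM-based meaning grouping the researchers actually use; the function names (`cluster_by_meaning`, `semantic_entropy`) are illustrative, not from the paper.

```python
import math


def cluster_by_meaning(answers):
    """Group answers that convey the same meaning.

    The Oxford method uses a second LLM (entailment checks) for this step;
    here a toy normalisation (lowercasing, stripping punctuation) stands in
    purely for illustration.
    """
    def normalise(text):
        return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

    clusters = {}
    for answer in answers:
        clusters.setdefault(normalise(answer), []).append(answer)
    return list(clusters.values())


def semantic_entropy(answers):
    """Shannon entropy over meaning clusters; higher values suggest confabulation."""
    clusters = cluster_by_meaning(answers)
    total = len(answers)
    probabilities = [len(cluster) / total for cluster in clusters]
    return -sum(p * math.log(p) for p in probabilities)


# Ten sampled responses to "Where is the Eiffel Tower?"
consistent = ["Paris"] * 9 + ["paris"]
scattered = ["Paris", "Berlin", "Rome", "Paris", "Madrid",
             "Lyon", "Paris", "Vienna", "Berlin", "Prague"]

print(f"consistent answers -> entropy {semantic_entropy(consistent):.2f}")  # 0.00
print(f"scattered answers  -> entropy {semantic_entropy(scattered):.2f}")   # ~1.83
```

When the sampled answers all mean the same thing, they collapse into a single group and the entropy is zero; when the answers scatter across many meanings, the entropy climbs, signalling a likely confabulation.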


This method detects confabulations with 79% accuracy, outperforming traditional word-based detection methods.


While promising, this approach has limitations. It only catches errors that vary between samples, so a mistake the model makes consistently will slip through, and it requires significantly more computing power, roughly five to ten times that of a typical chatbot interaction. It also cannot address biases or errors introduced by training on flawed data.


Experts emphasize the need for AI systems to communicate their uncertainty effectively. By acknowledging the limitations and uncertainties in their responses, AI systems can foster user trust and reduce overreliance. For instance, expressions like "I'm not sure, but..." can help users gauge the reliability of AI-generated information.
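A detector like the one sketched above could drive this kind of hedging directly. The snippet below continues the earlier toy example; the entropy threshold is an arbitrary illustrative value, not a figure from the research.

```python
def hedged_reply(answers, entropy_threshold=0.7):
    """Prepend an uncertainty phrase when the sampled answers disagree too much.

    Reuses cluster_by_meaning() and semantic_entropy() from the sketch above;
    the threshold is illustrative only.
    """
    entropy = semantic_entropy(answers)
    # Report a representative answer from the largest meaning cluster.
    best_cluster = max(cluster_by_meaning(answers), key=len)
    answer = best_cluster[0]
    if entropy > entropy_threshold:
        return f"I'm not sure, but my best guess is: {answer}"
    return answer


print(hedged_reply(["Paris"] * 9 + ["Berlin"]))           # confident: "Paris"
print(hedged_reply(["Paris", "Berlin", "Rome", "Lyon"]))  # hedged answer
```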


The most advanced chatbots, including those from OpenAI, Meta, and Google, still exhibit hallucination rates of between 2.5% and 5% when summarizing documents. Although improvements are being made, ensuring AI accuracy remains a moving target. Enhancing training data can improve accuracy for specific queries, but AI's broader application often stretches beyond its training, leading to potential errors.


Addressing AI hallucinations requires a multifaceted approach, combining advanced algorithms with transparent uncertainty communication. As AI continues integrating into high-stakes areas, ensuring its reliability and accuracy is paramount. This new algorithm represents a significant step toward building more trustworthy AI systems, but ongoing research and development are essential to keep pace with the evolving demands of AI applications.
