
Reducing hallucinations with ChatGPT

Advances in AI, specifically models like GPT-4 from OpenAI, have given rise to powerful tools capable of generating human-like text responses. These models are invaluable in myriad contexts, from customer service and support systems to educational tools and content generators. However, these capabilities also present unique challenges, including the tendency to “hallucinate.” In AI, hallucinations refer to instances in which the model provides information that, although plausible, is not based on fact.

This article outlines strategies for mitigating hallucinations when interacting with GPT-4, ensuring the outputs are grounded in fact and provide reliable information.

Implementing the “I do not know” prompt

Hallucinations generally occur when the AI model attempts to generate a response, regardless of whether it has the necessary knowledge. To tackle this, programming the model to produce an “I do not know” output when uncertain can be a practical solution.

Take an example from a customer service setting where the AI model might be asked about a product feature it does not know. Instead of the AI creating a false, ‘hallucinated’ feature, programming a threshold of uncertainty can lead the model to respond with “I do not know.” This could prompt the user to provide more context or ask another question the model can answer accurately.
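The approach above can be sketched in code. This is a minimal illustration, assuming a chat-completion-style message format; the system prompt wording and helper names such as `build_messages` and `is_refusal` are my own, not an official API.

```python
# Sketch: instruct the model to admit uncertainty rather than invent an answer.
# The prompt text and function names are illustrative assumptions.

SYSTEM_PROMPT = (
    "You are a customer service assistant. Answer only from the product "
    "documentation provided. If the documentation does not cover the "
    'question, reply exactly: "I do not know."'
)

def build_messages(documentation: str, question: str) -> list[dict]:
    """Assemble a chat-style message list that carries the guardrail prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Documentation:\n{documentation}\n\nQuestion: {question}"},
    ]

def is_refusal(answer: str) -> bool:
    """Detect the agreed-upon fallback so the UI can ask the user for more context."""
    return "i do not know" in answer.lower()

messages = build_messages(
    "The X100 supports Wi-Fi and Bluetooth.",
    "Does the X100 support Zigbee?",
)
print(is_refusal("I do not know."))  # True
```

In practice, the `is_refusal` check lets the application branch: instead of displaying a fabricated feature, it can prompt the user to rephrase or escalate to a human agent.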

Requesting references

Furthermore, encouraging the model to provide references for its outputs adds another layer of reliability. For instance, if an AI model is used in an educational setting to teach history, it might provide information about a specific event. However, without a reference, it’s hard to judge if the information is a hallucination or factual. By asking for sources or references, users can cross-verify the facts themselves.
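One lightweight way to operationalize this is to ask for citations in the prompt and then verify that the reply actually contains a references section before trusting it. The instruction text and the `has_references` helper below are illustrative assumptions, not a guarantee that the cited sources are real; human cross-checking is still required.

```python
import re

# Sketch: request citations, then do a naive structural check on the reply.
# This only confirms a References section exists; it does not validate sources.

CITATION_INSTRUCTION = (
    "Answer the question, then list your sources under a heading "
    "'References:'. If you cannot cite a source for a claim, omit the claim."
)

def has_references(answer: str) -> bool:
    """Return True if the reply contains a non-empty 'References:' section."""
    match = re.search(r"References:\s*(.+)", answer, re.DOTALL)
    return bool(match and match.group(1).strip())

reply = (
    "The Peace of Westphalia was concluded in 1648.\n"
    "References:\n"
    "1. Encyclopaedia Britannica, 'Peace of Westphalia'."
)
print(has_references(reply))  # True
```

A reply that passes this check is not automatically factual, but a reply that fails it can be rejected or regenerated before it ever reaches the user.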

Adding a layer of third-party tooling to this approach can provide additional security. Verification platforms equipped with features that improve the reliability of AI models, in particular the ability to surface references for the information an AI generates, are instrumental in combating hallucinations.

Consider a scenario where an AI is used to draft content on a complex topic like quantum physics. By integrating such a tool, users can verify the technical information provided by the AI and have a set of resources to delve deeper into the subject.


While AI language models like GPT-4 are shaping up as potent tools, mitigating potential issues such as hallucinations is crucial. Implementing an “I do not know” prompt, asking for references, and using verification tools to check those references are strategies that can significantly improve the accuracy and reliability of these models.

As we step further into an AI-driven era, integrating such strategies will enhance the efficacy of these tools and foster a sense of trust among the users, making these models a reliable resource for future applications.

Harvey Castro is a physician, health care consultant, and serial entrepreneur with extensive experience in the health care industry. He can be reached on his website, Twitter @HarveycastroMD, Facebook, Instagram, and YouTube. He is the author of Bing Copilot and Other LLM: Revolutionizing Healthcare With AI, Solving Infamous Cases with Artificial Intelligence, The AI-Driven Entrepreneur: Unlocking Entrepreneurial Success with Artificial Intelligence Strategies and Insights, ChatGPT and Healthcare: The Key To The New Future of Medicine, ChatGPT and Healthcare: Unlocking The Potential Of Patient Empowerment, Revolutionize Your Health and Fitness with ChatGPT’s Modern Weight Loss Hacks, and Success Reinvention.
