AI agents
Hallucination
When an LLM makes up an answer that isn't grounded in real data.
A hallucination is a confidently wrong answer from an LLM: invented facts, fake citations, made-up phone numbers. The cure is grounding (RAG): hand the model your real data and tell it "answer only from this". GIGI on a portal is grounded in your portal data, which is why it is far less likely to hallucinate than a general chatbot.
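As a rough sketch of what grounding looks like in practice, the snippet below pulls the best-matching snippets from a small document store and builds a prompt that tells the model to answer only from that context. The sample documents, the toy keyword retriever, and the exact prompt wording are illustrative assumptions, not GIGI's actual pipeline.

```python
# Minimal sketch of grounding (RAG): retrieve relevant snippets from your own
# data, then constrain the model to answer only from them.
# PORTAL_DOCS and the retriever below are made-up examples for illustration.

PORTAL_DOCS = {
    "hours.txt": "Support is available Monday to Friday, 9am to 5pm EST.",
    "contact.txt": "Reach the portal team at support@example.org.",
}

def retrieve(question: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def grounded_prompt(question: str, docs: dict[str, str]) -> str:
    """Build a prompt that tells the model to answer only from retrieved context."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(question, docs))
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    print(grounded_prompt("What are the support hours?", PORTAL_DOCS))
```

The key design point is the instruction to refuse when the context is silent: a grounded agent should say "I don't know" rather than invent an answer.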
See also
- RAG (Retrieval-Augmented Generation)
Feeding the agent your real documents and portal data so its answers are grounded in your truth.
- LLM
A Large Language Model — the model powering chat, voice, and agent reasoning.
- GIGI
The Global Interactive GEO Interface — the AI assistant + interface that powers hashtag.org search, voice, and video.