Hallucination

When an LLM makes up an answer that isn't grounded in real data.

A hallucination is a confidently wrong answer from an LLM: invented facts, fake citations, made-up phone numbers. The standard defense is grounding (retrieval-augmented generation, or RAG): hand the model your real data and tell it "answer only from this". GIGI on a portal is grounded in your portal data, which is why it is far less likely to hallucinate than a general-purpose chatbot.
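
As a rough sketch of how grounding works in practice: retrieve the passages of your data most relevant to the question, then build a prompt that confines the model to them. The Python below is illustrative only; the names (`retrieve`, `grounded_prompt`, `portal_docs`) are made up for this example, and the keyword-overlap retrieval stands in for the embedding search a real RAG pipeline would use.

```python
# Minimal grounding (RAG) sketch. All names here are illustrative,
# not GIGI's actual implementation. Retrieval is naive keyword
# overlap; real systems typically use embedding search.

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question; keep the top k."""
    words = set(question.lower().split())
    return sorted(
        documents,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )[:k]

def grounded_prompt(question: str, documents: list[str]) -> str:
    """Build a prompt that restricts the model to the retrieved context."""
    context = "\n---\n".join(retrieve(question, documents))
    return (
        "Answer ONLY from the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

portal_docs = [
    "Support is open Monday to Friday, 09:00-17:00.",
    "Passwords can be reset from the account settings page.",
    "Invoices are emailed on the first day of each month.",
]
print(grounded_prompt("When is support open?", portal_docs))
```

The final step, sending the prompt to a model, is omitted here; the point is that the model only ever sees your data plus an instruction to stay inside it, which is what makes grounded answers checkable.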
