Truth-O-Meter: Handling Multiple Inconsistent Sources Repairing LLM Hallucinations
Large Language Models (LLMs) often produce text containing incorrect facts and hallucinations. To address this issue, we developed Truth-O-Meter, a fact-checking system that verifies LLM output against the Internet and other sources of information to detect wrong claims and facts and to propose corrections for them. NLP and reasoning techniques such as Abstract Meaning Representation and syntactic alignment are applied to match hallucinated sentences with truthful ones. To handle inconsistent sources during fact-checking, we rely on argumentation analysis in the form of defeasible logic programming, selecting the most authoritative source. Our evaluation shows that LLM content can be substantially improved in factual correctness and meaningfulness at industrial scale.
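The repair loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the actual system uses Abstract Meaning Representation and syntactic alignment to match sentences, and defeasible logic programming to resolve source conflicts; here a simple token-overlap score stands in for alignment, and a numeric authority rank stands in for the argumentation analysis. All function names, the threshold, and the sample data are assumptions for illustration.

```python
def align_score(claim: str, candidate: str) -> float:
    """Crude stand-in for AMR/syntactic alignment: token-overlap ratio."""
    a, b = set(claim.lower().split()), set(candidate.lower().split())
    return len(a & b) / max(len(a | b), 1)

def repair_claim(claim: str, sources: list[tuple[int, str]]) -> str:
    """Repair a possibly hallucinated claim against candidate truthful
    sentences. `sources` holds (authority_rank, sentence) pairs, where a
    lower rank means a more authoritative source. Returns the best-aligned
    correction, or the original claim if nothing aligns well enough."""
    best = None
    for rank, sentence in sources:
        if align_score(claim, sentence) > 0.3:  # alignment threshold (assumed)
            # Defeasible-style resolution: a more authoritative source
            # defeats a less authoritative one when both align with the claim.
            if best is None or rank < best[0]:
                best = (rank, sentence)
    return best[1] if best else claim

# Inconsistent sources with different authority ranks (illustrative data).
sources = [
    (2, "The Eiffel Tower is 330 metres tall."),
    (1, "The Eiffel Tower is 330 metres tall including antennas."),
    (3, "The Eiffel Tower was completed in 1889."),
]
print(repair_claim("The Eiffel Tower is 300 metres tall.", sources))
```

The key design point sketched here is the separation of matching (finding which truthful sentence a hallucinated claim corresponds to) from conflict resolution (deciding which of several aligned but mutually inconsistent sources wins).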