A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions

[Submitted on 9 Nov 2023 (v1), last revised 19 Nov 2024 (this version, v2)]

Authors: Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu

Abstract: The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP), fueling a paradigm shift in information acquisition. Nevertheless, LLMs are prone to hallucination, generating plausible…