Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/183925
Title: Trustworthy of large language models
Authors: Yang, Xiaoyue
Keywords: Computer and Information Science
Issue Date: 2025
Publisher: Nanyang Technological University
Source: Yang, X. (2025). Trustworthy of large language models. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/183925
Project: CCDS24-0739 
Abstract: Large language models (LLMs) are now widely used across many applications. Despite their usefulness, concerns are growing about the integrity of the information they generate. Hallucination, the phenomenon in which LLMs produce content that contradicts factual knowledge, has been observed in many real-world scenarios. This paper investigates the hallucination problem and explores Retrieval-Augmented Generation (RAG) and Prompt Engineering (PE) as mitigation strategies. Experimental results indicate that RAG alone effectively reduces hallucinations in fact-based question-answering (QA) tasks. For long-form texts, however, more advanced methods are required for the model to develop a deeper semantic understanding of the retrieved content.
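The RAG approach the abstract describes can be sketched minimally: retrieve the passage most relevant to a question, then condition the model's prompt on it so answers are grounded in retrieved text. The corpus, overlap-based scoring, and prompt template below are illustrative assumptions, not the thesis's actual pipeline.

```python
# Toy RAG flow: retrieval step + grounded prompt construction.
# (Hypothetical example data; a real system would use dense embeddings
# and an actual LLM call, which are omitted here.)

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by simple token overlap with the query (assumption:
    a stand-in for a real vector-similarity retriever)."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_tokens & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context,
    which is the mechanism by which RAG curbs hallucination."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below; reply 'unknown' if the "
        "answer is not in it.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

corpus = [
    "The final report is hosted by Nanyang Technological University.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
]
question = "What does retrieval-augmented generation do?"
passages = retrieve(question, corpus)
prompt = build_grounded_prompt(question, passages)
print(prompt)
```

For fact-based QA this grounding is often enough; for long-form generation, as the abstract notes, the model additionally needs a deeper semantic understanding of the retrieved content.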
URI: https://hdl.handle.net/10356/183925
Schools: College of Computing and Data Science 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: CCDS Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Yang_Xiaoyue_CCDS24-0739.pdf (Restricted Access)
Size: 1.54 MB
Format: Adobe PDF

Page view(s): 18 (updated on May 7, 2025)
Download(s): 1 (updated on May 7, 2025)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.