
NTRS - NASA Technical Reports Server

Trust-Informed Large Language Models via Word Embedding-Knowledge Graph Alignment

A major weakness of a Large Language Model (LLM) is its tendency to accept information at face value, often admitting erroneous information and increasing the probability of hallucinating non-existent information. While Retrieval Augmented Generation (RAG) uses external knowledge sources to ground an LLM in established truth, this work explores methods to endow an LLM with an intrinsic capability to evaluate an input's believability without relying on external knowledge sources. We investigate unifying an LLM with a Knowledge Graph (KG), using the KG to reinforce the LLM's internal word embeddings while also maintaining belief metrics along the edges of the KG.
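The abstract does not give implementation details, but the idea of reinforcing word embeddings from a KG whose edges carry belief metrics can be sketched in the spirit of retrofitting: each embedding is pulled toward its KG neighbours in proportion to edge belief, so high-belief relations tighten the embedding space while low-belief (dubious) assertions have little effect. All names, edges, and belief scores below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy embeddings (dimension 4) for a few concepts. In the paper's setting
# these would come from the LLM's own embedding table; here they are
# random placeholders.
rng = np.random.default_rng(0)
vocab = ["engine", "turbine", "rotor", "unicorn"]
emb = {w: rng.normal(size=4) for w in vocab}

# Hypothetical KG edges with belief metrics in [0, 1]: (head, tail, belief).
edges = [
    ("engine", "turbine", 0.9),
    ("turbine", "rotor", 0.8),
    ("engine", "unicorn", 0.05),  # low-belief (dubious) assertion
]

def retrofit(emb, edges, alpha=1.0, iters=10):
    """Belief-weighted retrofitting: pull each embedding toward its KG
    neighbours in proportion to edge belief, while staying anchored to
    its original vector with weight alpha."""
    orig = {w: v.copy() for w, v in emb.items()}
    neigh = {w: [] for w in emb}
    for h, t, b in edges:
        neigh[h].append((t, b))
        neigh[t].append((h, b))
    for _ in range(iters):
        for w, nbrs in neigh.items():
            if not nbrs:
                continue
            total = alpha * orig[w] + sum(b * emb[t] for t, b in nbrs)
            emb[w] = total / (alpha + sum(b for _, b in nbrs))
    return emb

# High-belief neighbours move closer together after retrofitting.
d_before = np.linalg.norm(emb["engine"] - emb["turbine"])
emb = retrofit(emb, edges)
d_after = np.linalg.norm(emb["engine"] - emb["turbine"])
```

Under this sketch, "engine" and "turbine" (belief 0.9) converge, while "unicorn" barely moves because its only edge carries a belief of 0.05 — one plausible way a belief metric could modulate how strongly the KG reshapes the embedding space.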
Document ID
20240015650
Acquisition Source
Langley Research Center
Document Type
Conference Paper
Authors
James E. Ecker
(Langley Research Center Hampton, United States)
Bonnie Danette Allen
(National Aeronautics and Space Administration Washington, United States)
Date Acquired
December 6, 2024
Subject Category
Cybernetics, Artificial Intelligence and Robotics
Meeting Information
Meeting: AIAA SciTech Forum
Location: Orlando, FL
Country: US
Start Date: January 6, 2025
End Date: January 10, 2025
Sponsors: American Institute of Aeronautics and Astronautics
Funding Number(s)
WBS: 981698.03.04.23.20.01.12
Distribution Limits
Public
Copyright
Portions of document may include copyright protected material.
Technical Review
NASA Technical Management
Keywords
Large Language Models
Knowledge Graphs
Word Embeddings
Natural Language Processing
Knowledge Management