Computer Science > Computation and Language

Title: Prompt-Time Symbolic Knowledge Capture with Large Language Models

Abstract: Augmenting large language models (LLMs) with user-specific knowledge is crucial for real-world applications, such as personal AI assistants. However, LLMs inherently lack mechanisms for prompt-driven knowledge capture. This paper investigates utilizing existing LLM capabilities to enable prompt-driven knowledge capture, with a particular emphasis on knowledge graphs. We address this challenge by focusing on prompt-to-triple (P2T) generation. We explore three methods: zero-shot prompting, few-shot prompting, and fine-tuning, and assess their performance on a specialized synthetic dataset. Our code and datasets are publicly available at this https URL
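The abstract's prompt-to-triple (P2T) task can be illustrated with a minimal sketch of the zero-shot variant. The prompt wording and the line-per-triple output format below are illustrative assumptions, not the paper's actual templates, and the model response is mocked rather than produced by an LLM:

```python
# Illustrative sketch of zero-shot prompt-to-triple (P2T) generation.
# The instruction text and output format are assumptions for illustration.

def build_zero_shot_prompt(user_utterance: str) -> str:
    """Wrap a user utterance in a zero-shot instruction asking an LLM
    to emit knowledge-graph triples."""
    return (
        "Extract knowledge from the following statement as "
        "(subject, predicate, object) triples, one per line:\n"
        f"Statement: {user_utterance}\n"
        "Triples:"
    )

def parse_triples(llm_output: str) -> list[tuple[str, str, str]]:
    """Parse lines of the form '(s, p, o)' from a model response."""
    triples = []
    for line in llm_output.strip().splitlines():
        line = line.strip().strip("()")
        parts = [p.strip() for p in line.split(",")]
        if len(parts) == 3:
            triples.append((parts[0], parts[1], parts[2]))
    return triples

# A mocked response such an LLM might return for
# "My sister Alice lives in Berlin."
mock_response = "(Alice, sibling_of, user)\n(Alice, lives_in, Berlin)"
print(parse_triples(mock_response))
```

Few-shot prompting would prepend worked examples to the same template, and fine-tuning would train the model on prompt/triple pairs so no instruction is needed at inference time.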
Comments: 8 pages, 5 figures, 1 table. Preprint, under review
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
ACM classes: I.2.7
Cite as: arXiv:2402.00414 [cs.CL]
  (or arXiv:2402.00414v1 [cs.CL] for this version)

Submission history

From: Tolga Çöplü [view email]
[v1] Thu, 1 Feb 2024 08:15:28 GMT (4303kb,D)
