Computer Science > Computation and Language
Title: Self-Prompting Large Language Models for Zero-Shot Open-Domain QA
(Submitted on 16 Dec 2022 (v1), revised 16 May 2023 (this version, v2), latest version 28 Mar 2024 (v3))
Abstract: Open-Domain Question Answering (ODQA) aims to answer factoid questions without explicitly providing specific background documents. In a zero-shot setting, this task is more challenging since no data is available to train customized models like Retriever-Readers. Recently, Large Language Models (LLMs) like GPT-3 have shown their power in zero-shot ODQA with direct prompting methods, but these methods invoke LLMs' knowledge only implicitly and fall short of exploiting their full potential. In this paper, we propose a Self-Prompting framework to explicitly utilize the massive knowledge stored in the parameters of LLMs and their strong instruction-understanding abilities. Concretely, we prompt LLMs step by step to generate multiple pseudo QA pairs with background passages and explanations from scratch, and then use those generated elements for in-context learning. Experimental results show our method surpasses previous SOTA methods significantly on three widely-used ODQA datasets, and even achieves comparable performance with some Retriever-Reader models fine-tuned on full training data.
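The step-by-step generation and in-context-learning pipeline described in the abstract could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for a real LLM API call (stubbed here with canned text so the control flow is runnable), and all prompt wordings and field names are assumptions.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call, stubbed with canned responses for illustration."""
    canned = [
        ("qa pair", "Q: What is the capital of France? A: Paris"),
        ("explain", "The passage states that Paris is France's capital."),
        ("passage", "Paris is the capital of France."),
    ]
    for keyword, text in canned:
        if keyword in prompt.lower():
            return text
    return "unknown"

def generate_pseudo_example(topic: str) -> dict:
    """Step-by-step self-prompting: passage -> QA pair -> explanation."""
    passage = call_llm(f"Write a short factual passage about {topic}.")
    qa = call_llm(f"Based on this text, write a QA pair:\n{passage}")
    explanation = call_llm(f"Explain why the answer is correct:\n{qa}")
    question, answer = qa.split(" A: ")
    question = question.removeprefix("Q: ")
    return {"passage": passage, "question": question,
            "answer": answer, "explanation": explanation}

def build_icl_prompt(demos: list, test_question: str) -> str:
    """Assemble the generated elements as in-context demonstrations."""
    parts = []
    for d in demos:
        parts.append(f"Passage: {d['passage']}\nQ: {d['question']}\n"
                     f"Explanation: {d['explanation']}\nA: {d['answer']}")
    parts.append(f"Q: {test_question}\nA:")
    return "\n\n".join(parts)

demo = generate_pseudo_example("France")
prompt = build_icl_prompt([demo], "What is the capital of Germany?")
```

In a real setting, many such pseudo QA pairs would be generated and a subset selected as demonstrations before answering each test question; the selection strategy is not shown here.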
Submission history
From: Junlong Li
[v1] Fri, 16 Dec 2022 18:23:43 GMT (7347kb,D)
[v2] Tue, 16 May 2023 11:29:15 GMT (7126kb,D)
[v3] Thu, 28 Mar 2024 06:06:59 GMT (7930kb,D)