
Title: Promptly Predicting Structures: The Return of Inference

Abstract: Prompt-based methods have been used extensively across NLP to build zero- and few-shot label predictors. Many NLP tasks are naturally structured: that is, their outputs consist of multiple labels which constrain each other. Annotating data for such tasks can be cumbersome. Can the promise of the prompt-based paradigm be extended to such structured outputs? In this paper, we present a framework for constructing zero- and few-shot linguistic structure predictors. Our key insight is that we can use structural constraints -- and combinatorial inference derived from them -- to filter out inconsistent structures predicted by large language models. We instantiate this framework on two structured prediction tasks and five datasets. Across all cases, our results show that enforcing consistency not only constructs structurally valid outputs, but also improves performance over the unconstrained variants.
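
The abstract's core idea, scoring candidate labels with a prompted LLM and then running a combinatorial inference step that discards structurally inconsistent outputs, can be illustrated with a small sketch. The snippet below is purely illustrative and is not the paper's implementation: the task (role labeling for a single predicate), the "each core role appears at most once" constraint, the span names, and the scores are all assumptions made for the example, and the prompted LLM's scores are mocked as a dictionary.

# Hypothetical sketch: pick the highest-scoring label assignment that
# satisfies a structural constraint, instead of taking per-span argmaxes.
from itertools import product

# Mocked scores standing in for what a prompted LLM might assign to each
# (span, role) pair for one predicate. "O" means "no role".
SPANS = ["the chef", "a cake", "yesterday"]
ROLES = ["ARG0", "ARG1", "ARGM-TMP", "O"]
SCORES = {
    ("the chef", "ARG0"): 0.90, ("the chef", "ARG1"): 0.40,
    ("the chef", "ARGM-TMP"): 0.00, ("the chef", "O"): 0.10,
    ("a cake", "ARG0"): 0.85, ("a cake", "ARG1"): 0.80,
    ("a cake", "ARGM-TMP"): 0.00, ("a cake", "O"): 0.10,
    ("yesterday", "ARG0"): 0.00, ("yesterday", "ARG1"): 0.10,
    ("yesterday", "ARGM-TMP"): 0.70, ("yesterday", "O"): 0.20,
}

def is_consistent(assignment):
    """Structural constraint (assumed for this example):
    each core role labels at most one span."""
    for role in ("ARG0", "ARG1"):
        if sum(r == role for r in assignment) > 1:
            return False
    return True

def constrained_decode():
    """Exhaustive inference over joint assignments (fine for a tiny example);
    larger structures would call for ILP or dynamic programming."""
    best, best_score = None, float("-inf")
    for assignment in product(ROLES, repeat=len(SPANS)):
        if not is_consistent(assignment):
            continue  # filter out structurally invalid outputs
        score = sum(SCORES[(s, r)] for s, r in zip(SPANS, assignment))
        if score > best_score:
            best, best_score = assignment, score
    return dict(zip(SPANS, best)), best_score

if __name__ == "__main__":
    # Per-span argmax would label both "the chef" and "a cake" as ARG0,
    # violating the constraint; the constrained decoder cannot.
    print(constrained_decode())

With these mocked scores, unconstrained per-span argmax produces two ARG0 arguments, whereas the constrained decoder returns the valid structure {"the chef": "ARG0", "a cake": "ARG1", "yesterday": "ARGM-TMP"}, matching the abstract's point that enforcing consistency yields structurally valid outputs.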
Comments: 19 pages, 13 figures. Accepted to NAACL'2024 (Main)
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2401.06877 [cs.CL]
  (or arXiv:2401.06877v3 [cs.CL] for this version)

Submission history

From: Maitrey Mehta [view email]
[v1] Fri, 12 Jan 2024 20:08:39 GMT (358kb,D)
[v2] Thu, 28 Mar 2024 17:17:17 GMT (426kb,D)
[v3] Fri, 29 Mar 2024 18:27:17 GMT (364kb,D)
