Computer Science > Computation and Language

Title: Large Language Models Are Unconscious of Unreasonability in Math Problems

Abstract: Large language models (LLMs) demonstrate substantial capabilities in solving math problems. However, they tend to produce hallucinations when given questions containing unreasonable errors. In this paper, we study the behavior of LLMs when faced with unreasonable math problems and further explore their potential to address these problems. We construct the Unreasonable Math Problem (UMP) benchmark to examine the error detection ability of LLMs. Experiments show that LLMs are able to detect unreasonable errors, but still fail to generate non-hallucinatory content. To improve their ability to detect and correct errors, we further design a strategic prompt template called Critical Calculation and Conclusion (CCC). With CCC, LLMs can better self-evaluate and detect unreasonable errors in math questions, making them more reliable and safe in practical application scenarios.
Comments: 11 pages, 3 figures
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2403.19346 [cs.CL]
  (or arXiv:2403.19346v2 [cs.CL] for this version)

Submission history

From: Jingyuan Ma [view email]
[v1] Thu, 28 Mar 2024 12:04:28 GMT (3000kb,D)
[v2] Tue, 16 Apr 2024 13:53:47 GMT (3723kb,D)
