Computer Science > Software Engineering

Title: Do Large Language Models Pay Similar Attention Like Human Programmers When Generating Code?

Abstract: Large Language Models (LLMs) have recently been widely used for code generation. Due to the complexity and opacity of LLMs, little is known about how these models generate code. We made the first attempt to bridge this knowledge gap by investigating whether LLMs attend to the same parts of a task description as human programmers during code generation. An analysis of six LLMs, including GPT-4, on two popular code generation benchmarks revealed a consistent misalignment between LLMs' and programmers' attention. We manually analyzed 211 incorrect code snippets and found five attention patterns that can be used to explain many code generation errors. Finally, a user study showed that model attention computed by a perturbation-based method is often favored by human programmers. Our findings highlight the need for human-aligned LLMs for better interpretability and programmer trust.
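
The "perturbation-based method" mentioned in the abstract refers to a family of attribution techniques that estimate which parts of the input a model relied on by perturbing the prompt and measuring the effect on the output. The sketch below illustrates one common occlusion-style variant of this idea, not necessarily the paper's exact algorithm; the score_completion callable is a hypothetical stand-in for any language-model interface that returns the log-probability of a completion given a prompt.

    # Minimal sketch of perturbation-based attention (occlusion-style attribution).
    # Assumption: score_completion(prompt, code) returns log P(code | prompt)
    # under the model being analyzed; it is a hypothetical helper, not a real API.

    from typing import Callable, List, Tuple

    def perturbation_attention(
        prompt_words: List[str],
        generated_code: str,
        score_completion: Callable[[str, str], float],
    ) -> List[Tuple[str, float]]:
        """Score each prompt word by how much removing it reduces the model's
        confidence in the code it originally generated."""
        base = score_completion(" ".join(prompt_words), generated_code)
        scores = []
        for i, word in enumerate(prompt_words):
            # Occlude one word from the task description and re-score.
            perturbed = prompt_words[:i] + prompt_words[i + 1:]
            perturbed_score = score_completion(" ".join(perturbed), generated_code)
            # A larger drop in log-likelihood means the model "attended"
            # more heavily to this word.
            scores.append((word, base - perturbed_score))
        return scores

The resulting per-word scores can then be ranked and compared against the parts of the task description that human programmers mark as important, which is how an attention-alignment analysis of the kind described in the abstract would be carried out.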
Comments: To appear in the 2024 ACM International Conference on the Foundations of Software Engineering (FSE '24)
Subjects: Software Engineering (cs.SE); Human-Computer Interaction (cs.HC); Machine Learning (cs.LG)
DOI: 10.1145/3660807
Cite as: arXiv:2306.01220 [cs.SE]
  (or arXiv:2306.01220v2 [cs.SE] for this version)

Submission history

From: Zhijie Wang
[v1] Fri, 2 Jun 2023 00:57:03 GMT (1993kb,D)
[v2] Thu, 23 May 2024 17:27:12 GMT (362kb,D)
