Computer Science > Software Engineering
Title: Multitask-based Evaluation of Open-Source LLM on Software Vulnerability
(Submitted on 2 Apr 2024 (v1), last revised 26 Apr 2024 (this version, v2))
Abstract: This paper proposes a pipeline for quantitatively evaluating interactive LLMs using publicly available datasets. We carry out an extensive technical evaluation of LLMs using Big-Vul, covering four common software vulnerability tasks. We evaluate the multitask and multilingual aspects of LLMs based on this dataset. We find that existing state-of-the-art methods are generally superior to LLMs in software vulnerability detection. Although LLMs improve accuracy when provided with context information, they still have limitations in accurately predicting severity ratings for certain CWE types. In addition, LLMs demonstrate some ability to locate vulnerabilities for certain CWE types, but their performance varies among different CWE types. Finally, LLMs show uneven performance in generating CVE descriptions for various CWE types, with limited accuracy in a few-shot setting. Overall, though LLMs perform well in some aspects, they still need improvement in understanding the subtle differences in code vulnerabilities and in their ability to describe vulnerabilities in order to fully realize their potential. Our evaluation pipeline provides valuable insights for further enhancing LLMs' software vulnerability handling capabilities.
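The abstract describes a prompt-and-score pipeline over Big-Vul samples across four vulnerability tasks (detection, severity rating, localization, CVE description generation). The sketch below illustrates the general shape of such a multitask evaluation; the model call is stubbed out, and all field names (`func`, `labels`, etc.) and the baseline model are hypothetical assumptions, not the paper's actual schema or method.

```python
# Hypothetical sketch of a multitask vulnerability-evaluation loop in the
# spirit of the abstract. Field names and the stub model are assumptions.
from typing import Callable

# The four task types the abstract mentions.
TASKS = ["detection", "severity", "localization", "description"]

def evaluate(samples, model: Callable[[str, dict], str]):
    """Return per-task exact-match accuracy for a prompt-and-compare run."""
    correct = {t: 0 for t in TASKS}
    total = {t: 0 for t in TASKS}
    for s in samples:
        for task in TASKS:
            prediction = model(task, s)  # in practice: an LLM API call
            total[task] += 1
            if prediction == s["labels"][task]:
                correct[task] += 1
    return {t: correct[t] / total[t] for t in TASKS}

# Stub "model" that always gives fixed answers, as a trivial baseline.
def majority_model(task, sample):
    return {"detection": "vulnerable", "severity": "HIGH",
            "localization": "line 1", "description": "n/a"}[task]

# Two toy Big-Vul-style samples (illustrative only).
samples = [
    {"func": "strcpy(buf, input);",
     "labels": {"detection": "vulnerable", "severity": "HIGH",
                "localization": "line 1", "description": "buffer overflow"}},
    {"func": "strncpy(buf, input, sizeof(buf) - 1);",
     "labels": {"detection": "safe", "severity": "NONE",
                "localization": "none", "description": "n/a"}},
]

scores = evaluate(samples, majority_model)
```

A real run would replace `majority_model` with an LLM prompt per task and likely use fuzzier metrics (e.g. ROUGE for generated descriptions) rather than exact match.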
Submission history
From: Xin Yin
[v1] Tue, 2 Apr 2024 15:52:05 GMT (796kb,D)
[v2] Fri, 26 Apr 2024 03:01:48 GMT (796kb,D)