Title: MedConceptsQA: Open Source Medical Concepts QA Benchmark

Abstract: We present MedConceptsQA, a dedicated open-source benchmark for medical concepts question answering. The benchmark comprises questions about medical concepts drawn from different vocabularies: diagnoses, procedures, and drugs. The questions are categorized into three levels of difficulty: easy, medium, and hard. We evaluated various Large Language Models on the benchmark. Our findings show that pre-trained clinical Large Language Models achieved accuracy levels close to random guessing on this benchmark, despite being pre-trained on medical data. GPT-4, however, achieves an absolute average improvement of 27%-37% over the clinical Large Language Models (27% for zero-shot learning and 37% for few-shot learning). Our benchmark serves as a valuable resource for evaluating the understanding and reasoning of medical concepts by Large Language Models. Our benchmark is available at this https URL
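
The abstract frames the task as multiple-choice question answering over medical-code vocabularies, scored by accuracy against a random-guessing floor. The sketch below illustrates that evaluation loop in Python; the field names (question, options, answer, difficulty), the sample items, and the random_guess baseline are illustrative assumptions for a generic multiple-choice benchmark, not the benchmark's actual schema, which is defined by the release linked above.

```python
# Minimal sketch of scoring a model on a multiple-choice medical-concepts
# benchmark such as MedConceptsQA. The dataset schema and sample questions
# below are hypothetical; the real benchmark defines its own format.
import random

# Hypothetical examples: each item asks for the description of a medical
# code and offers candidate answers; exactly one option is correct.
questions = [
    {
        "question": "What is the description of the ICD-10 code E11.9?",
        "options": [
            "Type 2 diabetes mellitus without complications",
            "Essential (primary) hypertension",
            "Acute pharyngitis, unspecified",
            "Low back pain",
        ],
        "answer": 0,
        "difficulty": "easy",
    },
    {
        "question": "What is the description of the ICD-10 code I10?",
        "options": [
            "Gastro-esophageal reflux disease",
            "Essential (primary) hypertension",
            "Iron deficiency anemia, unspecified",
            "Asthma, unspecified",
        ],
        "answer": 1,
        "difficulty": "easy",
    },
]

def random_guess(q):
    """Baseline: pick an option uniformly at random."""
    return random.randrange(len(q["options"]))

def accuracy(answer_fn, dataset):
    """Fraction of questions for which answer_fn selects the correct index."""
    correct = sum(answer_fn(q) == q["answer"] for q in dataset)
    return correct / len(dataset)

if __name__ == "__main__":
    random.seed(0)
    print(f"random-guess accuracy: {accuracy(random_guess, questions):.2f}")
```

Any model can be scored the same way by wrapping it in a function that maps a question dict to an option index; the zero-shot versus few-shot comparison reported in the abstract then reduces to swapping the prompting strategy inside that function.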
Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2405.07348 [cs.CL]
  (or arXiv:2405.07348v2 [cs.CL] for this version)

Submission history

From: Ofir Ben Shoham [view email]
[v1] Sun, 12 May 2024 17:54:50 GMT (174kb,D)
[v2] Tue, 14 May 2024 16:44:02 GMT (174kb,D)
