Title: Construction of a Japanese Financial Benchmark for Large Language Models

Abstract: With the recent development of large language models (LLMs), the need for models that focus on specific domains and languages has been increasingly discussed. There is also a growing need for benchmarks that evaluate the performance of current LLMs in each domain. Therefore, in this study, we constructed a benchmark comprising multiple tasks specific to the Japanese and financial domains and performed benchmark measurements on several models. The results confirm that GPT-4 currently performs best and that the constructed benchmark functions effectively. Our analysis also shows that, by combining tasks of different difficulties, the benchmark can differentiate scores among models across all performance ranges.
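The evaluation the abstract alludes to can be pictured as scoring a model on several multiple-choice tasks of varying difficulty and combining the per-task accuracies into a single benchmark score. The sketch below illustrates this under stated assumptions: the task names, example questions, the ask_model placeholder, and the macro-averaging scheme are hypothetical and are not the paper's actual tasks or scoring method.

# Minimal sketch of multiple-choice benchmark scoring, assuming a generic
# model interface. Task names, data, and the scoring scheme are illustrative
# placeholders, not the paper's actual benchmark.

from statistics import mean

# Hypothetical tasks of different difficulties; each is a list of
# (question, choices, correct_index) tuples.
TASKS = {
    "easy_terminology": [
        ("Which statement best describes ROE?",
         ["Return on equity", "Rate of exchange", "Risk of error"], 0),
    ],
    "hard_exam_style": [
        ("Which factor most directly raises a bond's duration?",
         ["Higher coupon", "Longer maturity", "Higher yield"], 1),
    ],
}

def ask_model(question: str, choices: list) -> int:
    """Placeholder for an LLM call; returns the index of the chosen answer.

    A real harness would prompt the model with the question and choices
    and parse its reply. Here we always pick the first choice.
    """
    return 0

def task_accuracy(examples) -> float:
    # Fraction of questions the model answers correctly within one task.
    correct = sum(ask_model(q, c) == gold for q, c, gold in examples)
    return correct / len(examples)

def benchmark_score(tasks) -> float:
    # Macro-average over tasks so easy and hard tasks contribute equally.
    return mean(task_accuracy(ex) for ex in tasks.values())

if __name__ == "__main__":
    per_task = {name: task_accuracy(ex) for name, ex in TASKS.items()}
    print("Per-task accuracy:", per_task)
    print("Benchmark score:", benchmark_score(TASKS))

Macro-averaging is one simple way a mixture of difficulties can spread scores between weak and strong models: easy tasks separate the lower performance range, hard tasks the upper.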
Comments: 9 pages; Joint Workshop of the 7th Financial Technology and Natural Language Processing (FinNLP), the 5th Knowledge Discovery from Unstructured Data in Financial Services (KDF), and the 4th Workshop on Economics and Natural Language Processing (ECONLP), in conjunction with LREC-COLING-2024
Subjects: Computational Finance (q-fin.CP); Computation and Language (cs.CL)
Cite as: arXiv:2403.15062 [q-fin.CP]
  (or arXiv:2403.15062v1 [q-fin.CP] for this version)

Submission history

From: Masanori Hirano
[v1] Fri, 22 Mar 2024 09:40:27 GMT (154kb,D)
