Computer Science > Computation and Language

Title: ReproHum #0087-01: Human Evaluation Reproduction Report for Generating Fact Checking Explanations

Abstract: This paper presents a partial reproduction of Generating Fact Checking Explanations by Atanasova et al. (2020) as part of the ReproHum element of the ReproNLP shared task, which aims to reproduce the human evaluation findings of NLP research and thereby investigate the extent to which NLP as a field is becoming more or less reproducible over time. Following the instructions provided by the task organisers and the original authors, we collect relative rankings of 3 fact-checking explanations (comprising a gold standard and the outputs of 2 models) for 40 inputs on the criterion of Coverage. The results of our reproduction and our reanalysis of the original work's raw results lend support to the original findings, with similar patterns seen between the original work and our reproduction. Whilst we observe slight variation from the original results, our findings support the main conclusions drawn by the original authors pertaining to the efficacy of their proposed models.
Comments: Accepted to HumEval at LREC-Coling 2024
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2404.17481 [cs.CL]
  (or arXiv:2404.17481v1 [cs.CL] for this version)

Submission history

From: Tyler Loakman [view email]
[v1] Fri, 26 Apr 2024 15:31:25 GMT (1299kb,D)
