Title: Systematic Offensive Stereotyping (SOS) Bias in Language Models

Abstract: In this paper, we propose a new metric to measure the Systematic Offensive Stereotyping (SOS) bias in language models (LMs). We then validate the SOS bias and investigate the effectiveness of removing it. Finally, we investigate the impact of the SOS bias in LMs on their performance and fairness in hate speech detection. Our results suggest that all the inspected LMs are SOS biased, and that the SOS bias is reflective of the online hate experienced by marginalized identities. The results also indicate that debiasing methods from the literature worsen the SOS bias in LMs for some sensitive attributes while improving it for others. Finally, our results suggest that the SOS bias in the inspected LMs affects their fairness in hate speech detection, although there is no strong evidence that it affects their hate speech detection performance.
Comments: Keywords: Systematic offensive stereotyping (SOS) bias, Language models, bias removal, fairness, hate speech detection
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2308.10684 [cs.CL]
  (or arXiv:2308.10684v2 [cs.CL] for this version)
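The abstract does not spell out how the proposed SOS metric is computed. As a rough illustration of the general idea behind probing offensive-stereotyping associations in a masked LM (this is not the paper's actual metric), the sketch below compares the probability a model assigns to offensive descriptors across identity groups. The model name, template, identity terms, and descriptor list are illustrative placeholders, and a real study would use a curated lexicon.

# A minimal sketch, assuming the Hugging Face `transformers` library.
# It probes how strongly a masked LM associates offensive descriptors
# with different identity groups; all word lists are placeholders.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # any masked LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def mask_fill_prob(template: str, target: str) -> float:
    """Probability the model assigns to `target` at the [MASK] position."""
    text = template.replace("[MASK]", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    # Locate the mask token in the input sequence.
    mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_idx].softmax(dim=-1)
    target_id = tokenizer.convert_tokens_to_ids(target)
    return probs[target_id].item()

# Illustrative lists only; targets must be single tokens in the vocab.
identities = ["women", "men"]
offensive = ["stupid", "dangerous"]

for identity in identities:
    score = sum(
        mask_fill_prob(f"All {identity} are [MASK].", word)
        for word in offensive
    ) / len(offensive)
    print(f"{identity}: mean offensive-descriptor probability = {score:.2e}")

A gap in these mean probabilities between identity groups would be one simple signal of systematic offensive stereotyping; the paper's metric, validation, and debiasing experiments go beyond this toy probe.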

Submission history

From: Fatma Elsafoury [view email]
[v1] Mon, 21 Aug 2023 12:37:42 GMT (339kb,D)
[v2] Fri, 26 Apr 2024 08:45:35 GMT (530kb,D)
