Computer Science > Machine Learning

Title: Formal Specification, Assessment, and Enforcement of Fairness for Generative AIs

Abstract: The risk of reinforcing or exacerbating societal biases and inequalities grows as generative AI increasingly produces content that resembles human output, from text to images and beyond. Here we formally characterize the notion of fairness for generative AI as a basis for monitoring and enforcing fairness. We define two levels of fairness using the concept of infinite words. The first is the fairness demonstrated on the generated sequences, which is evaluated only on the outputs and is agnostic to the prompts and models used. The second is the inherent fairness of the generative AI model, which requires that fairness be manifested when input prompts are neutral, that is, when they do not explicitly instruct the generative AI to produce a particular type of output. We also study relative intersectional fairness, which counteracts the combinatorial explosion that arises when multiple categories are considered together, as well as lazy fairness enforcement. Our implemented specification monitoring and enforcement tool shows interesting results when tested against several generative AI models.
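The output-level notion of fairness can be illustrated with a minimal sketch. This is not the paper's tool: the uniform-share criterion, the `tolerance` parameter, and the function name are illustrative assumptions. The idea is that a monitor observes a finite prefix of the (conceptually infinite) generated sequence, abstracted to sensitive-category labels, and checks whether each category appears with roughly equal frequency, independent of the prompts or model that produced the outputs.

```python
from collections import Counter

def fairness_monitor(outputs, categories, tolerance=0.1):
    """Check empirical fairness on a finite prefix of generated outputs.

    `outputs` is a sequence of category labels, one per generated item.
    The prefix is judged fair if every category's observed frequency is
    within `tolerance` of the uniform share 1/len(categories).
    (Illustrative criterion only; the paper's formal definitions are
    stated over infinite words.)
    """
    counts = Counter(outputs)
    n = len(outputs)
    target = 1.0 / len(categories)
    return all(abs(counts[c] / n - target) <= tolerance
               for c in categories)
```

For example, a balanced prefix such as `["a", "b", "a", "b"]` over categories `["a", "b"]` is judged fair, while a prefix consisting only of `"a"` is not. An enforcement layer could use such a verdict to steer or resample subsequent generations.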
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Logic in Computer Science (cs.LO); Software Engineering (cs.SE)
Cite as: arXiv:2404.16663 [cs.LG]
  (or arXiv:2404.16663v1 [cs.LG] for this version)

Submission history

From: Chih-Hong Cheng [view email]
[v1] Thu, 25 Apr 2024 15:04:27 GMT (3283kb,D)
[v2] Fri, 26 Apr 2024 09:30:25 GMT (3282kb,D)
[v3] Mon, 6 May 2024 06:50:15 GMT (3283kb,D)
