
Title: Fairness Auditing with Multi-Agent Collaboration

Abstract: Existing work in fairness auditing assumes that each audit is performed independently. In this paper, we consider multiple agents working together, each auditing the same platform for different tasks. Agents have two levers: their collaboration strategy, with or without coordination beforehand, and their strategy for sampling appropriate data points. We theoretically analyze the interplay of these levers. Our main findings are that (i) collaboration is generally beneficial for accurate audits, (ii) basic sampling methods often prove effective, and (iii) counter-intuitively, extensive coordination on queries often degrades audit accuracy as the number of agents increases. Experiments on three large datasets confirm our theoretical results. Our findings motivate collaboration during fairness audits of platforms that use ML models for decision-making.
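
The abstract contrasts independent audits with audits in which agents share the platform's answers. As a rough illustration of that contrast only (not the paper's actual protocol), the sketch below lets each agent estimate a demographic-parity gap for its own task, first from its own query budget and then from the pooled answers of all agents. The platform model, the uniform query sampling, and the per-task protected attributes are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def platform(x):
    # Stand-in for the audited black-box model: binary decisions from features.
    return (x @ np.array([1.0, -0.5, 0.3]) > 0).astype(int)

def dp_gap(decisions, group):
    # Demographic-parity gap |P(y=1 | g=1) - P(y=1 | g=0)| estimated from audit samples.
    return abs(decisions[group == 1].mean() - decisions[group == 0].mean())

n_agents, budget, dim = 3, 200, 3

# Each agent samples its own queries to the platform (basic uniform sampling).
queries = [rng.standard_normal((budget, dim)) for _ in range(n_agents)]
answers = [platform(q) for q in queries]

# Agent k audits fairness w.r.t. a different (hypothetical) protected attribute,
# modelled here as the sign of feature k.
def group_of(x, k):
    return (x[:, k] > 0).astype(int)

# Independent audits: each agent estimates its metric from its own budget only.
solo = [dp_gap(answers[k], group_of(queries[k], k)) for k in range(n_agents)]

# Collaborative audits without prior coordination: agents share the platform's
# answers afterwards, so each metric is re-estimated on n_agents * budget samples.
X = np.vstack(queries)
Y = np.concatenate(answers)
collab = [dp_gap(Y, group_of(X, k)) for k in range(n_agents)]

print("independent estimates: ", np.round(solo, 3))
print("collaborative estimates:", np.round(collab, 3))

With the larger pooled sample, each agent's estimate has lower variance, which mirrors the abstract's point (i) that collaboration is generally beneficial for accurate audits.
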
Comments: 13 pages, 6 figures
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2402.08522 [cs.LG]
  (or arXiv:2402.08522v2 [cs.LG] for this version)

Submission history

From: Jade Garcia Bourrée
[v1] Tue, 13 Feb 2024 15:24:46 GMT (521kb,D)
[v2] Fri, 26 Apr 2024 13:44:32 GMT (446kb,D)
