Need for Organizational Performance Metrics to Support Fairness in AI


Workshop paper


Michael A. Madaio, Luke Stark, Jennifer Wortman Vaughan, Hanna Wallach


Abstract
Following the announcements of dozens of AI ethics statements and high-level principles for responsible AI, technologists are beginning to operationalize values such as fairness into metrics, toolkits, and checklists that can shape AI product development. However, while individual AI practitioners may want to use such methods to develop fairer, more responsible AI products, organizational incentives may inhibit them from advocating for and addressing fairness issues. In this workshop paper, we present new findings from an AI fairness checklist co-design research project [6] that suggest directions and open questions for developing organizational performance metrics to support AI fairness efforts, focusing on the challenges of conceptualizing and designing fairness metrics that are both effective and legible to organizations. We intend for this paper to spark discussion in the community around aligning organizational culture to support responsible AI development.
