How Does Perceived Fairness Differ? Exploring Multicultural Landscape for Ethical AI Design


Workshop paper


Sullam Jeoung, Youngsam Chun

Abstract
With the spread of global AI-enabled systems, there is hype that human bias will be removed and fairness will be improved in decision-making processes. However, most AI fairness studies have focused on the US and Western Europe, which raises concerns about another form of discrimination due to the relative underrepresentation of geographic areas such as Africa, South America, and Asia. We believe that the perception and understanding of algorithmic fairness are likely to be influenced profoundly by local contexts, and it is imperative to explore multicultural perceptions of fairness as technologies and decision-making processes become increasingly global. Thus, we propose a multicultural online experiment on how the public perceives algorithmic fairness and how people make decisions when facing ethical dilemmas. In particular, we present the social biases that participants perceive most sensitively in the job-hiring process, and examine how the perceived biases affect decision-making under different cultural settings. Our goals for the experiment are to discuss: (1) whether people in Asian cultures perceive individual fairness and group fairness as equally important as people in Western cultures do; and (2) whether the sensitive attributes are identical across countries. We expect that the values and metrics of fairness may differ across cultures; that is, culture shapes the value placed on algorithmic fairness. This large-scale online experiment provides ethical implications and fair machine learning guidelines for developers who want to globalize decision-support systems to include underrepresented countries.
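The individual- vs group-fairness distinction that question (1) refers to can be made concrete. The following is an illustrative sketch only, not the authors' experimental design: on a toy hiring dataset (all names and numbers are hypothetical), a group-fairness metric (demographic parity gap) compares positive-decision rates across groups, while a simple individual-fairness check asks whether similarly qualified applicants receive the same decision.

```python
# Illustrative sketch (not from the paper): group vs. individual fairness
# on a toy hiring dataset. All data below is invented for demonstration.

def demographic_parity_gap(decisions, groups):
    """Group fairness: gap between the highest and lowest
    positive-decision (hiring) rate across groups."""
    by_group = {}
    for d, g in zip(decisions, groups):
        by_group.setdefault(g, []).append(d)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

def individual_consistency(decisions, scores, tol=0.1):
    """Individual fairness: applicants with qualification scores
    within `tol` of each other should get the same decision.
    Returns the fraction of such similar pairs treated consistently."""
    consistent = total = 0
    n = len(decisions)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(scores[i] - scores[j]) <= tol:
                total += 1
                consistent += decisions[i] == decisions[j]
    return consistent / total if total else 1.0

# Toy data: decision 1 = hired, groups A/B, qualification scores in [0, 1].
decisions = [1, 0, 1, 1, 0, 0]
groups    = ["A", "A", "A", "B", "B", "B"]
scores    = [0.9, 0.4, 0.8, 0.85, 0.45, 0.5]

print(demographic_parity_gap(decisions, groups))   # A hires 2/3, B hires 1/3
print(individual_consistency(decisions, scores))   # similar pairs treated alike
```

A system can satisfy one criterion while violating the other: here every pair of similar applicants is treated consistently, yet group A's hiring rate is twice group B's, which is exactly the kind of tension whose perceived importance may vary across cultures.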
