ACM RESPECT AI Statement

Drafted by Gabriel Medina-Kim

We would like to emphasize our commitment to minoritized communities, given the proliferation of AI research tools. Our position supplements the ACM Policy on Authorship. We strongly discourage the use of AI to produce research, especially for generating findings, analyzing data, identifying related work, and reviewing literature. Given our community’s focus on equity in computing and technology, we are especially sensitive to how AI tools (including contemporary transformer-based models) produce inaccurate and prejudicial information about minoritized peoples. To be abundantly clear, there are additional issues of AI use that are urgent for the RESPECT community (e.g., environmental injustice, debilitation, and colonial occupation). In this statement, however, we advise our community that AI is not suited to produce quality research about equity and minoritized communities.

Although popular headlines describe algorithmic inequality as an issue of glitches and bad actors, we recognize that algorithmic inequality is unremarkably regular (Benjamin, 2019; Broussard, 2023). It is the consequence of how these systems are developed, evaluated, and distributed.

  • Development is usually based on data that misrepresents minoritized people. The issue is greater than “garbage in, garbage out,” where discriminatory AI models are the consequence of discriminatory data (O’Neil, 2017). AI codifies ways of classifying the world in which representable measures are treated as more truthful than the people they describe. This is demonstrated by the assumption that AI can “detect” gender from external characteristics (e.g., faces, names, voices), such that one’s own self-identification is rendered superfluous to an essentialized, binary notion of gender (Keyes, 2018; Scheuerman et al., 2021).
  • The evaluation of these models does not prioritize the well-being of minoritized people. Instead, evaluation perpetuates norms about appropriate and ideal users, especially able-bodied users (Whittaker et al., 2019). And in the cases when we are considered, we are rarely considered proactively (Buolamwini & Gebru, 2018; Noble, 2018; Raji & Buolamwini, 2019).
  • Finally, these platforms are inappropriately distributed as generalist tools that model “all language.” However, there is no one-size-fits-all AI language model because all knowledge is socially situated. (This is why we require positionality statements for our papers.) The RESPECT conference explores subjugated and specialized knowledge, which relies on premises and values that these AI platforms are not equipped for and do not prioritize.

None of these findings are new or theoretical. They build on existing empirical and humanistic research in computer science, the information sciences, archival research, activist research traditions (e.g., ethnic studies, gender studies), and the humanities. To summarize, AI platforms are not suited to produce quality research about equity and minoritized communities because, by default, they are predisposed to reproduce inequitable and inappropriate knowledge for research and policy.

The RESPECT conference continues to accept research into the relationships between artificial intelligence and equity in computing. However, research produced by AI opposes the core values of the RESPECT community and, by extension, is inexcusably low in quality. We strongly discourage using AI research tools to produce knowledge and policy because doing so undermines our commitment to equity in engineering, computing, and technology.

Bibliography

Bailey, M. (2018). On misogynoir: Citation, erasure, and plagiarism. Feminist Media Studies, 18(4), 762–768. https://doi.org/10.1080/14680777.2018.1447395

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.

Broussard, M. (2023). More than a glitch: Confronting race, gender, and ability bias in tech. The MIT Press.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html

Keyes, O. (2018). The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 88:1–88:22. https://doi.org/10.1145/3274357

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

O’Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 429–435. https://doi.org/10.1145/3306618.3314244

Scheuerman, M. K., Pape, M., & Hanna, A. (2021). Auto-essentialization: Gender in automated facial analysis as extended colonial project. Big Data & Society, 8(2), 1–15. https://doi.org/10.1177/20539517211053712

Whittaker, M., Alper, M., Bennett, C. L., Hendren, S., Kaziunas, L., Mills, M., Morris, M. R., Rankin, J., Rogers, E., Salas, M., & West, S. M. (2019). Disability, bias, and AI. AI Now Institute.
