{"id":33,"date":"2026-02-12T20:00:25","date_gmt":"2026-02-12T20:00:25","guid":{"rendered":"https:\/\/respect.acm.org\/2026\/?page_id=33"},"modified":"2026-02-12T20:00:25","modified_gmt":"2026-02-12T20:00:25","slug":"acm-respect-ai-statement","status":"publish","type":"page","link":"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/","title":{"rendered":"ACM Respect AI Statement"},"content":{"rendered":"\n<p>RESPECT AI Statement&nbsp;<\/p>\n\n\n\n<p><em>Drafted by Gabriel Medina-Kim<\/em><\/p>\n\n\n\n<p>We would like to emphasize our commitment to minoritized communities, given the proliferation of AI research tools. Our position supplements the ACM Policy on Authorship. <strong>We strongly discourage the use of AI to produce research, especially when generating findings, analyzing data, identifying related works, and reviewing literature. <\/strong>Given our community\u2019s focus on equity in computing and technology, we are especially sensitive to how AI tools (including contemporary transformer-based models) produce inaccurate and prejudicial information about minoritized peoples. To be abundantly clear, there are additional issues of AI use that are urgent for the RESPECT community (e.g., environmental injustice, debilitation, and colonial occupation). However, in this statement, we advise our community that AI is not suited to produce quality research about equity and minoritized communities.<\/p>\n\n\n\n<p>Although popular headlines describe algorithmic inequality as an issue of glitches and bad actors, we recognize that algorithmic inequality is unremarkably regular (Broussard, 2023; Benjamin, 2019). It is the consequence of how these systems are developed, evaluated, and distributed.&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Development is usually based on data that misrepresents information about minoritized people. 
The issue is greater than \u201cgarbage in, garbage out,\u201d where discriminatory AI models are the consequence of discriminatory data (O\u2019Neil, 2017). AI codifies ways of classifying the world in which representable measures are treated as more truthful than the people they describe. This is demonstrated by the assumption that AI can \u201cdetect\u201d gender from external characteristics (e.g., faces, names, voices), such that one\u2019s own identification with an essential, binary gender becomes superfluous (Scheuerman et al., 2021; Keyes, 2018).<\/li>\n\n\n\n<li>The evaluation of these models does not prioritize the well-being of minoritized people. Instead, evaluation perpetuates norms about appropriate and ideal users, especially able-bodied users (Whittaker et al., 2019). And when we are considered, we are rarely considered proactively (Raji &amp; Buolamwini, 2019; Buolamwini &amp; Gebru, 2018; Noble, 2018).<\/li>\n\n\n\n<li>Finally, these platforms are inappropriately distributed as generalist tools that model \u201call language.\u201d However, there is no one-size-fits-all AI language model because all knowledge is socially situated. (This is why we require positionality statements for our papers.) The RESPECT conference explores subjugated and specialized knowledge, which draws on premises and values that these AI platforms are not equipped for and do not prioritize.<\/li>\n<\/ul>\n\n\n\n<p>None of these findings are new or merely theoretical. They build on existing empirical and humanistic research in computer science, information science, archival research, activist research traditions (e.g., ethnic studies, gender studies), and the humanities. 
To summarize, AI platforms are not suited to produce quality research about equity and minoritized communities because, by default, they are predisposed to reproduce inequitable and inappropriate knowledge for research and policy.<\/p>\n\n\n\n<p>The RESPECT conference continues to accept research into the relationships between artificial intelligence and equity in computing. However, research produced by AI opposes the core values of the RESPECT community and, by extension, yields inexcusably low-quality scholarship. We strongly discourage using AI research tools to produce knowledge and policy because doing so undermines our commitment to equity in engineering, computing, and technology.<\/p>\n\n\n\n<p><strong>Bibliography<\/strong><\/p>\n\n\n\n<p>Bailey, M. (2018). On misogynoir: Citation, erasure, and plagiarism. <em>Feminist Media Studies<\/em>, <em>18<\/em>(4), 762\u2013768. https:\/\/doi.org\/10.1080\/14680777.2018.1447395<\/p>\n\n\n\n<p>Benjamin, R. (2019). <em>Race after technology: Abolitionist tools for the New Jim Code<\/em>. Polity Press.<\/p>\n\n\n\n<p>Broussard, M. (2023). <em>More than a glitch: Confronting race, gender, and ability bias in tech<\/em>. The MIT Press.<\/p>\n\n\n\n<p>Buolamwini, J., &amp; Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. <em>Conference on Fairness, Accountability and Transparency<\/em>, 77\u201391. http:\/\/proceedings.mlr.press\/v81\/buolamwini18a.html<\/p>\n\n\n\n<p>Keyes, O. (2018). The misgendering machines: Trans\/HCI implications of automatic gender recognition. <em>Proceedings of the ACM on Human-Computer Interaction<\/em>, <em>2<\/em>(CSCW), 88:1\u201388:22. https:\/\/doi.org\/10.1145\/3274357<\/p>\n\n\n\n<p>Noble, S. U. (2018). <em>Algorithms of oppression: How search engines reinforce racism<\/em>. New York University Press.<\/p>\n\n\n\n<p>O\u2019Neil, C. (2017). 
<em>Weapons of math destruction: How big data increases inequality and threatens democracy<\/em>. Crown.<\/p>\n\n\n\n<p>Raji, I. D., &amp; Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. <em>Proceedings of the 2019 AAAI\/ACM Conference on AI, Ethics, and Society<\/em>, 429\u2013435. https:\/\/doi.org\/10.1145\/3306618.3314244<\/p>\n\n\n\n<p>Scheuerman, M. K., Pape, M., &amp; Hanna, A. (2021). Auto-essentialization: Gender in automated facial analysis as extended colonial project. <em>Big Data &amp; Society<\/em>, <em>8<\/em>(2), 1\u201315. https:\/\/doi.org\/10.1177\/20539517211053712<\/p>\n\n\n\n<p>Whittaker, M., Alper, M., Bennett, C. L., Hendren, S., Kaziunas, L., Mills, M., Morris, M. R., Rankin, J., Rogers, E., Salas, M., &amp; West, S. M. (2019). <em>Disability, bias, and AI<\/em>. AI Now Institute.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>RESPECT AI Statement&nbsp; Drafted by Gabriel Medina-Kim We would like to emphasize our commitment to minoritized communities, given the proliferation of AI research tools. Our position supplements the ACM Policy on Authorship. We strongly discourage the use of AI to produce research, especially when generating findings, analyzing data, identifying related works, and reviewing literature. 
Given [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"zakra_page_container_layout":"customizer","zakra_page_sidebar_layout":"customizer","zakra_remove_content_margin":false,"zakra_sidebar":"customizer","zakra_transparent_header":"customizer","zakra_logo":0,"zakra_main_header_style":"default","zakra_menu_item_color":"","zakra_menu_item_hover_color":"","zakra_menu_item_active_color":"","zakra_menu_active_style":"","zakra_page_header":true,"footnotes":""},"class_list":["post-33","page","type-page","status-publish","hentry"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.3 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>ACM Respect AI Statement -<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"ACM Respect AI Statement -\" \/>\n<meta property=\"og:description\" content=\"RESPECT AI Statement&nbsp; Drafted by Gabriel Medina-Kim We would like to emphasize our commitment to minoritized communities, given the proliferation of AI research tools. Our position supplements the ACM Policy on Authorship. We strongly discourage the use of AI to produce research, especially when generating findings, analyzing data, identifying related works, and reviewing literature. Given [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/\",\"url\":\"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/\",\"name\":\"ACM Respect AI Statement -\",\"isPartOf\":{\"@id\":\"https:\/\/respect.acm.org\/2026\/#website\"},\"datePublished\":\"2026-02-12T20:00:25+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/respect.acm.org\/2026\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"ACM Respect AI Statement\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/respect.acm.org\/2026\/#website\",\"url\":\"https:\/\/respect.acm.org\/2026\/\",\"name\":\"\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/respect.acm.org\/2026\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"ACM Respect AI Statement -","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/","og_locale":"en_US","og_type":"article","og_title":"ACM Respect AI Statement -","og_description":"RESPECT AI Statement&nbsp; Drafted by Gabriel Medina-Kim We would like to emphasize our commitment to minoritized communities, given the proliferation of AI research tools. Our position supplements the ACM Policy on Authorship. We strongly discourage the use of AI to produce research, especially when generating findings, analyzing data, identifying related works, and reviewing literature. Given [&hellip;]","og_url":"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/","twitter_card":"summary_large_image","twitter_misc":{"Est. reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/","url":"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/","name":"ACM Respect AI Statement -","isPartOf":{"@id":"https:\/\/respect.acm.org\/2026\/#website"},"datePublished":"2026-02-12T20:00:25+00:00","breadcrumb":{"@id":"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/respect.acm.org\/2026\/index.php\/acm-respect-ai-statement\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/respect.acm.org\/2026\/"},{"@type":"ListItem","position":2,"name":"ACM Respect AI 
Statement"}]},{"@type":"WebSite","@id":"https:\/\/respect.acm.org\/2026\/#website","url":"https:\/\/respect.acm.org\/2026\/","name":"","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/respect.acm.org\/2026\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/respect.acm.org\/2026\/index.php\/wp-json\/wp\/v2\/pages\/33","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/respect.acm.org\/2026\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/respect.acm.org\/2026\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/respect.acm.org\/2026\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/respect.acm.org\/2026\/index.php\/wp-json\/wp\/v2\/comments?post=33"}],"version-history":[{"count":2,"href":"https:\/\/respect.acm.org\/2026\/index.php\/wp-json\/wp\/v2\/pages\/33\/revisions"}],"predecessor-version":[{"id":168,"href":"https:\/\/respect.acm.org\/2026\/index.php\/wp-json\/wp\/v2\/pages\/33\/revisions\/168"}],"wp:attachment":[{"href":"https:\/\/respect.acm.org\/2026\/index.php\/wp-json\/wp\/v2\/media?parent=33"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}