
ACM FAccT - 2024 Accepted Tutorial Sessions

class=""><a href='/2021/community'>Community Agreements</a></li> <li class=""><a href='/2021/inclusion'>Diversity and inclusion</a></li> <li class=""><a href='/2021/registration'>Registration</a></li> <li class="divider"></li> <li class=""><a href='/2021/programschedule'>Program Schedule</a></li> <li class=""><a href='/2021/keynotes'>Keynote Speakers</a></li> <li class=""><a href='/2021/acceptedpapers'>Accepted Papers</a></li> <li class=""><a href='/2021/acceptedtuts'>Accepted Tutorials</a></li> <li class=""><a href='/2021/acceptedcraftsessions'>Accepted CRAFT Sessions</a></li> <li class="divider"></li> <li class=""><a href='/2021/cfp'>Call for Papers (Closed)</a></li> <li class=""><a href='/2021/cft'>Call for Tutorials (Closed)</a></li> <li class=""><a href='/2021/cfw'>Call for CRAFT proposals (Closed)</a></li> <li class=""><a href='/2021/callfordc'>Call for Doctoral Consortium (Closed)</a> <li class=""><a href='/2021/callforvolunteers'>Call for Volunteers (Closed)</a></li> <li class="divider"></li> <li class=""><a href='/2021/committees'>Committees</a></li> <li class=""><a href='/2021/sponsorship'>Sponsors and Supporters</a></li> <li class=""><a href='/2021/sponsorship_policy'>Sponsorship Policy</a></li> <li class="divider"></li> <li class=""><a href='/2021/press-release'>ACM Press release</a></li> <li class=""><a href='/2021/racialequityjustice'>Commitment to Racial Equity + Justice</a></li> <li class=""><a href='/2021/2021_online'>Covid-19 Update</a></li> </ul> </li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">2018-2020 <b class="caret"></b></a> <ul class="dropdown-menu"> <li class=""><a href='/2020/'>FAT*2020 Barcelona</a></li> <li class=""><a href='/2019/'>FAT*2019 Atlanta</a></li> <li class=""><a href='/2018/'>FAT*2018 New York</a></li> </ul> </li> </ul> <ul class="nav navbar-nav navbar-right"> <!--<li class=""><a href="/index.html">Home</a></li>--> <li><a href="https://facct-blog.github.io/">Blog</a></li> <li class=""><a href="/network/">Network</a></li> <li class=""><a href='/connect'>Connect</a> </li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">Organization <b class="caret"></b></a> <ul class="dropdown-menu"> <!-- <li class=""><a href="/nominate.html">Open Call for SC Nominations</a></li> --> <li class="divider"></li> <li class=""><a href='/organization'>People &amp; Committees</a></li> <li class=""><a href='/organization-sc'>Steering Committee</a></li> <li class=""><a href='/documents'>Governing Documents</a></li> <li class=""><a href='/harassment'>Anti-Discrimination &amp; Harassment Policy</a></li> <li class=""><a href='/sponsorship'>Sponsorship Policy</a></li> <li class=""><a href='/faq'>FAQ</a></li> <li class="divider"></li> <li class=""><a href='/2023/humanrights'>EC Statement on Technology and Human Rights</a></li> <li class=""><a href='/warfare'>EC Statement on AI Warfare </a></li> </ul> </li> </ul> </div> <!--/.navbar-collapse --> </div> </div> <div class="container"> <div class="page-header"> <h1>ACM FAccT 2024 Tutorial Sessions</h1> </div> </div> <div class="container"> <div class="row"> <div class="col-lg-12"> <p>The goal of tutorials is to broaden the perspective of our interdisciplinary community, addressing practical, technical, policy, regulatory, ethical, or societal issues related to FAccT. 
We solicited three types of tutorials: Translation Tutorials to foster dialogue between disciplines, Implications Tutorials to describe the effects of algorithmic systems in society, and Practice Tutorials focused on a specific tool or framework.

Creating Ethical Charters in AI Development
Practice Tutorial
Margaret Mitchell, Chief Ethics Scientist, Hugging Face; Giada Pistilli, Principal Ethicist, Hugging Face

This tutorial outlines the process of creating ethical charters for AI development projects, focusing on moral value pluralism and collaborative, inclusive practices. We discuss why it is important to identify common core values to guide projects and navigate tensions, drawing inspiration from Confucian ethical traditions that promote harmony. The session includes steps for determining relevant values, building consensus, and implementing these values within specific AI initiatives. We aim to promote values-informed AI development by using previous work as case studies, such as the BigScience project and ethical frameworks from major tech companies. Participants will learn techniques for integrating ethics into their work, ultimately influencing priorities and impacts in AI technology development. Further details can be found at hf.co/spaces/society-ethics/ethical-charter-tutorial.

Should I disclose my dataset? Legal and ethical considerations for researchers dealing with court documents
Raysa Benatti, University of Tübingen

Natural language processing techniques have helped domain experts solve problems in many fields. In the legal realm, the digital availability of court documents increases possibilities for researchers, who can use them as a source for building datasets, whose disclosure is aligned with good reproducibility practices in computational research. Large, digitized court systems, such as Brazil's, are especially amenable to this kind of use. However, personal data protection laws impose restrictions on data exposure and state principles about which researchers should be mindful. Special caution must be taken in cases involving human rights violations, such as gender discrimination, which we elaborate on as an example of interest. In this tutorial, we present legal and ethical considerations on the issue, as well as guidelines for researchers dealing with this kind of data and deciding whether, and to what extent, to disclose it.

Documenting AI's Environmental Impact
Practice Tutorial
Bran Knowles, Lancaster University; David Piorkowski, IBM T.J. Watson Research; John T. Richards, IBM T.J. Watson Research

This tutorial aims to facilitate a much-needed conversation on AI's environmental impacts, and to inspire reflection on how the kinds of documentation that have been developed to support accountability for the myriad social harms of interest to the FAccT community could be expanded to account for environmental harms. Attendees will gain a deeper understanding of the environmental impacts of AI and the limitations of existing approaches to mitigating them. They will also gain insight into emerging environmental accountability practice through real-world examples, including the use of IBM's AI FactSheets to capture energy savings from hardware-aware AI models and the calculation of energy consumption and carbon emissions for IBM's Granite model.
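
As a minimal illustration of the kind of quantity such documentation records (a sketch, not IBM's AI FactSheets tooling), operational emissions are often estimated from measured accelerator energy, data-centre overhead, and the carbon intensity of the local grid; all figures below are assumed placeholders.

    # Illustrative sketch, not IBM's AI FactSheets: a common first-order
    # estimate of operational carbon from measured accelerator energy use.

    def operational_emissions_kg(gpu_hours: float,
                                 avg_power_kw: float,
                                 pue: float = 1.2,
                                 grid_kg_co2_per_kwh: float = 0.4) -> float:
        """Energy drawn by accelerators, scaled by data-centre overhead (PUE),
        multiplied by the carbon intensity of the local grid (assumed values)."""
        energy_kwh = gpu_hours * avg_power_kw * pue
        return energy_kwh * grid_kg_co2_per_kwh

    # Example: 10,000 GPU-hours at an average draw of 0.3 kW per GPU.
    print(f"{operational_emissions_kg(10_000, 0.3):.0f} kg CO2e")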

EDIA Demo: Bias assessment for experts in discrimination, not in computer science
Guido Ivetta, Universidad Nacional de Córdoba, Argentina; Luciana Benotti, CONICET, Argentina; Nair Mazzeo, Fundación Via Libre; Hernán Maina, CONICET, Argentina; Laura Alonso Alemany, Universidad Nacional de Córdoba, Argentina; Beatriz Busaniche, Fundación Via Libre; Alexia Halvorsen, Fundación Via Libre

Methodologies for bias assessment usually require such technical skills that, by design, discrimination experts are left out. In this demo we present EDIA, a graphical interactive tool that enables experts in discrimination to explore social biases in word embeddings and language models. Experts can then characterize those biases so that their presence can be assessed more systematically and actions can be planned to address them. They can work interactively to assess the effects of different characterizations of bias in a given word embedding or language model, which helps turn informal intuitions into concrete resources for systematic testing. This 1.5-hour session will first showcase a demo of the tool, conveying the lessons learned from the diverse hands-on workshops we have carried out. Participants will then be given time to use the tool themselves and try out the examples we present or others they may bring. Finally, an open discussion will clarify nuanced aspects of the problems and methods, share insights, and answer questions. Through this demo session, we aim to provide experts, especially non-technical people, with skills to assess biases in these pervasive language technology constructs: language models and word embeddings. EDIA was designed for non-extractive stereotype data collection situated in a particular cultural context. Our approach is to lower technical barriers so that discrimination experts in any culture can have intuitive access to bias metrics on language models in their own language, interacting with models in a graphical way. We want to share the tool with the community so that they can reuse it for their own cultural contexts, and we will offer EDIA, an open-source tool, to researchers interested in using it for their own work. EDIA is easily adaptable to any language, as the specific language model and word embeddings to be explored are parameters of the tool. Moreover, we hope to gather feedback that will enhance the tool's effectiveness and user-friendliness; participants' insights will play a crucial role in refining EDIA and ensuring its relevance in diverse research contexts. Find the tool at https://huggingface.co/spaces/vialibre/edia.
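
For readers who want a sense of what sits behind such a graphical exploration, here is a hypothetical sketch of a simple embedding-space probe of the kind EDIA makes accessible without code; the function names and the `emb` dictionary are assumptions, not EDIA's actual interface.

    # Hypothetical sketch of the kind of probe EDIA exposes through its GUI
    # (not EDIA's actual API): how close a target word sits to two attribute
    # word lists in an embedding space.
    import numpy as np

    def cosine(u: np.ndarray, v: np.ndarray) -> float:
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def bias_score(target: str, attrs_a: list[str], attrs_b: list[str],
                   emb: dict[str, np.ndarray]) -> float:
        """Positive: target leans toward attribute set A; negative: toward set B."""
        sim_a = np.mean([cosine(emb[target], emb[w]) for w in attrs_a])
        sim_b = np.mean([cosine(emb[target], emb[w]) for w in attrs_b])
        return float(sim_a - sim_b)

    # `emb` would map words to vectors (e.g. loaded from fastText); the attribute
    # lists are chosen by the discrimination expert, in their own language.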

Translating Lessons from 100 Years of Safety Engineering to Responsible ML Development
Shalaleh Rismani, McGill University; Roel Dobbe, TU Delft; AJung Moon, McGill University

Identifying, assessing, and mitigating emerging harms from ML systems is challenging. In this tutorial, we reflect on the mature discipline of safety engineering and examine the frameworks, practices, and organizational culture needed to build safe systems. Using examples and case studies, we highlight relevant lessons, tools, and frameworks from system safety for responsible ML development. We conclude the tutorial with an open discussion and an invitation to reflect on the efficacy of system safety approaches for examining ML-based products and services.

How to Anticipate Generative A.I.'s Impacts on Children's Rights
Hye Jung Han, Human Rights Watch

Children are amongst the earliest adopters of generative A.I., though there has not yet been an examination of how the technology may impact them. Threats specific to children have already emerged, raising the stakes for anticipating and mitigating current and likely future harms. This tutorial will offer a crash course on children's rights: how international human rights law entitles all children to specific protections for their safety, privacy, education, and identity, among others, and how these rights interact with the digital world. Participants will be guided through real-world examples to identify and assess how the development and use of generative A.I. may place multiple rights at risk, and how these impacts may be amplified by contextual factors that have historically resulted in groups of children facing discrimination and exclusion. The tutorial will conclude with an open discussion on how these risks might be mitigated.

LLM Agents: Prospects and Impacts
Seth Lazar, Daniel Kilov, Australian National University; Aaron Snoswell, Queensland University of Technology; Dylan Hadfield-Menell, MIT

Large Language Models like OpenAI's GPT-4 and Google's Gemini are likely to have greater social impacts as the executive centre of complex systems that integrate additional tools for both learning about the world and acting on it. All of the leading AI research labs, and many upstarts, are now investing vast resources in making LLM agents work, releasing new models optimised for tool use (e.g. from Adept and Cohere) and software agents like Cognition's 'Devin', designed to go beyond co-piloting to independently undertaking complex coding tasks. This tutorial will offer the FAccT community a technical and philosophical introduction to LLM agents, explaining how they work, their limitations, and their potential societal impact, then exploring that impact through the lens of moral and political philosophy.
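
To make the idea of an "executive centre" concrete, here is a minimal sketch of the control loop such agents typically run; `call_llm`, the tool set, and the JSON reply format are assumptions for illustration, not any lab's actual agent framework.

    # Minimal sketch of the loop behind an LLM agent: the model acts as the
    # executive centre, choosing between calling a tool and giving a final answer.
    import json

    TOOLS = {
        "search": lambda query: f"(search results for {query!r})",
        "calculator": lambda expr: str(eval(expr)),  # demo only; never eval untrusted input
    }

    def run_agent(task: str, call_llm, max_steps: int = 5) -> str:
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            # The model is assumed to reply with JSON: either
            # {"tool": ..., "input": ...} to act, or {"answer": ...} to finish.
            reply = json.loads(call_llm(history))
            if "answer" in reply:
                return reply["answer"]
            observation = TOOLS[reply["tool"]](reply["input"])
            history.append({"role": "tool", "content": observation})
        return "(no answer within the step budget)"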

Navigating Equity and Reflexive Practices in Gigwork Design --- A Journey Mapping Experience
Translation/Dialogue Tutorial
Alicia Boyd, New York University; Danielle Cummings, Department of Defense; Angie Zhang, University of Texas at Austin

How do we create ethical and equitable experiences on global platforms? How might UX designers and developers incorporate reflexive practices --- a continuous self-evaluation of one's assumptions and biases --- to mitigate assumptions that shape workers' experiences? This tutorial will explore ways to build equitable user experiences, using gig work platforms as a target use case. With the rise of gig work platforms, the informal digital economy has altered how algorithmic systems manage occasional workers, and its questionable assumptions have spread worldwide. Concerns over autonomy, gamification, and worker privacy and safety are amplified as these practices expand. We will practice reflexive techniques within this context by implementing an equity-focused journey-mapping experience. Journey mapping allows designers to map out the customer experience and identify potential pain points at each step that could hinder the user experience. Using a ride-sharing scenario, participants will be guided through a custom journey map highlighting equitable considerations that can facilitate responsible user experience innovation. More information can be found at the Navigating Equity and Reflexive Practices in Gigwork Design website.

Risks of General-Purpose LLMs for Settling Newcomers in Canada
Implications Tutorial
Isar Nejadgholi, National Research Council Canada; Maryam Molamohammadi, Mila - Quebec Artificial Intelligence Institute; Samir Bakhtawar, Immigration, Refugees and Citizenship Canada

While AI has frequently been applied in the context of immigration, most of these applications focus on selection and screening processes, which have raised concerns due to their understudied reliability and high impact on people's quality of life. In this tutorial, we focus on Canada's immigration settlement phase, highlighting that this stage of immigration is information-heavy and its service providers are overburdened. With concrete examples, we show how new immigrants and refugees might become overly dependent on and vulnerable to the extensive use of generic chatbots such as ChatGPT, and we raise awareness about the challenges and implications of over-reliance on such technologies. Based on the demonstrated evidence, we suggest that the settlement sector is a prime candidate for the adoption of human-centered AI applications, yet it remains under-explored in AI research. The tutorial provides recommendations and guidelines for further research on the development of AI literacy programs and the participatory design of AI tools for the newcomer community in Canada.

Responsible AI in the Generative Era: Science and Practice
Alicia Sagae, Amazon AWS AI/ML; Nil-Jana Akpinar, Amazon AWS AI/ML; Riccardo Fogliato, Amazon AWS AI/ML; Mia Mayer, Amazon AWS AI/ML; Michael Kearns, University of Pennsylvania & Amazon AWS AI/ML

Generative AI brings additional nuance to the challenges of Responsible AI (RAI). These challenges include some that were common before generative AI, such as bias and explainability, and some that are unique to generative models, including hallucination, toxicity, and intellectual property protection. This tutorial is structured around hands-on exercises that let participants engage with large language models and with each other, to explore specific strategies that they can apply in their own RAI work. We will compare the challenges of traditional versus generative RAI and align those challenges with best practices for (and by) industry practitioners to assess and minimize RAI risk. Together with participants, we will test RAI guardrails in a jailbreaking game and conduct a structured risk assessment for a realistic use case: automatic generation of product descriptions.
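
As a rough illustration of what a guardrail under test might look like (a sketch with assumed deny-list terms, not the AWS tooling used in the tutorial), consider a wrapper that screens both the prompt and the model's output:

    # Illustrative sketch of a minimal guardrail wrapper, the kind of control
    # a jailbreaking exercise tries to get around. The deny-list is a placeholder.
    DENY_TERMS = ("weapon instructions", "credit card numbers", "medical diagnosis")

    def guarded_generate(prompt: str, generate) -> str:
        """`generate` is any prompt -> text callable (a hosted LLM, for example)."""
        if any(term in prompt.lower() for term in DENY_TERMS):
            return "Request declined by the input guardrail."
        output = generate(prompt)
        if any(term in output.lower() for term in DENY_TERMS):
            return "Response withheld by the output guardrail."
        return output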

Not Just Metrics: Qualitative Evaluations for Geo-Cultural Representation in Generative AI Technologies
Rida Qadri, Google Research; Sunipa Dev, Google Research; Remi Denton, Google Research; Mark Diaz, Google Research; Aida Davani, Google Research

The Watchdog and the Government: Exploring Transparency Models for Public Sector Algorithms
Translation Tutorial
Gabriel Geiger (Lighthouse Reports), Justin-Casimir Braun (Lighthouse Reports), Soizic Penicaud (Independent Researcher and former member of Etalab, French Department for Data Policy), Romina Garrido (Deputy Director, GobLab UAI, Chile), Dr. Anne Schuth (AI Validation Team, Dutch Ministry of the Interior and Kingdom Relations)

A growing club of nations is turning to predictive technology to streamline public services. Machine learning algorithms increasingly make life-changing decisions about people in criminal justice, health, and welfare systems. Yet governments often keep these systems under lock and key. Public watchdogs have struggled to keep up with this trend and to obtain access to technical materials such as code, model files, and training data that would allow them to test claims of fairness and hold governments accountable for faulty systems. In the past few years, local and national governments — often in partnership with academia or civil society — have begun putting in place regulation and governance mechanisms to increase the transparency of decision-making algorithms. These changes have the potential to increase public trust, to anticipate and correct the harms caused by these systems, and to reformulate traditionally antagonistic relationships between watchdogs and governments. But that will happen only if the parties manage to overcome tangled and at times contradictory priorities. This tutorial will bring together watchdogs and government agencies to discuss these hurdles and possible solutions. A roster of investigative journalists, government representatives, and civil society representatives from Chile, the Netherlands, and France will present real-world transparency models ranging from algorithm registers to regulatory changes to synthetic data access regimes.

Environmental Justice Beyond Carbon and Towards Consent
Translation/Dialogue Tutorial
Tamara Kneese, Data & Society Research Institute, USA; Lori Regattieri, Pan-Amazonian Technopolitical Coalition, Brazil; Bogdana Rakova, Speculative Friction Initiative, USA; Ray Alves, Amazon Environmental Research Institute, Brazil; Martha Fellows, Amazon Environmental Research Institute, Brazil; Valderli Piontekowski, Amazon Environmental Research Institute, Brazil

The environmental and climate impacts of AI extend beyond decarbonization, necessitating a broader perspective on environmental justice. While companies and countries report GHG emissions and ESG metrics to meet Net Zero and UN Sustainable Development Goals, this tutorial highlights unfair labor practices in mining and manufacturing, pollution affecting land, air, and water, the resource demands of data centers, and downstream health impacts such as cancer and respiratory illnesses. These issues are rooted in long-standing histories of colonialism and extraction. Given FAccT's location in Rio de Janeiro this year, we emphasize the importance of Brazilian perspectives, particularly through Brazilian organizations employing community-led and Indigenous participatory methods. We spotlight the System for Observation and Monitoring of the Indigenous Amazon (SOMAI), an online platform developed by the Amazon Environmental Research Institute (IPAM). SOMAI aims to strengthen the role of Indigenous territories in maintaining climate balance. This tutorial explores data trust for AI environmental modeling through the principles of Free, Prior, and Informed Consent (FPIC). IPAM's initiative serves as a model for integrating advanced data stewardship with FPIC, enhancing Indigenous autonomy over land management in the Brazilian Amazon Basin.
Our discussion will address the broader implications of AI, data stewardship, and consent in creating a collaborative, trustworthy framework for environmental justice beyond carbon footprinting. For more information, visit the SOMAI platform at https://somai.org.br/.

Developing Gen AI in the Global South: debating five practical cases and their challenges
Rachel Adams, African Observatory on Responsible AI; Clemence Kyara, Code for Africa; Fola Adeleke, University of the Witwatersrand; Aarushi Gupta, Digital Futures Lab; Christian Perrone, ITS Rio

The wide expansion and popularity of Generative Artificial Intelligence technologies (GenAIs) has had a broad impact on many different sectors worldwide, from education to health. The promise of these technologies is often overshadowed by significant challenges specific to local realities in the Global South, where limitations in data accessibility, computational infrastructure, and clear governance frameworks frequently curtail the research, development, and implementation of AI systems. This tutorial is based on a wide-ranging project that has supported more than fifty initiatives throughout the Global South aiming to debate and foster responsible development of these technologies. The tutorial will discuss in practice five specific cases, highlighting challenges around developing the technologies, testing different Large Language Models (LLMs), and applying them to particular contexts. We will explore five practical cases of Gen AI applications, selected for their diversity of application areas, geographical representation, and the distinct challenges they illustrate. The case studies will encompass sectors such as healthcare, educational content generation, language processing for underrepresented languages, and ethical AI development frameworks.

Algorithm Auditing and Generative AI
Danaë Metaxa, UPenn; Leon Yin, Bloomberg News; Sarah Cen, MIT; Leonardo Nicoletti, Bloomberg News; Piotr Sapieżyński, Northeastern

With industry's rapid adoption of generative AI tools, journalists and researchers are developing accountability techniques to measure these tools' outputs and understand their potential social impacts. This panel will bring together experts from academia and journalism to discuss two sides of algorithm audits and generative AI: first, audits of generative AI technologies themselves, and second, uses of generative AI to help conduct audits of other technologies.
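
For a sense of how such audits are often operationalized, the sketch below shows one common design, a paired-prompt comparison in which only a single attribute is varied; the template, group labels, placeholder names, and `generate` callable are hypothetical.

    # Hypothetical sketch of one audit design: send the same prompt template to a
    # generative model, varying only a name used as a demographic proxy, then
    # compare a simple property of the outputs (here, word count).
    from statistics import mean

    TEMPLATE = "Write a short performance review for {name}, a software engineer."
    GROUPS = {"group_a": ["Name A1", "Name A2"], "group_b": ["Name B1", "Name B2"]}

    def audit(generate) -> dict[str, float]:
        """`generate` maps a prompt to model text; returns mean output length per group."""
        return {
            group: mean(len(generate(TEMPLATE.format(name=n)).split()) for n in names)
            for group, names in GROUPS.items()
        }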

How to Conduct Human Rights Assessments of AI: Methodology and Comparison to Other Assessment Frameworks
Lindsey Andersen, BSR; Hannah Darnton, BSR; Betsy Popken, UC Berkeley

Human rights assessments (HRAs) are a well-established approach to assessing risks to people and society. They are a core part of companies' responsibilities under the UN Guiding Principles on Business and Human Rights, and companies have therefore been conducting them for years, including for AI products and services. However, no HRAs for AI have been published in full, meaning the knowledge of how to conduct them and what they look like in practice largely lives with the companies and consultants who conduct them. This tutorial will teach participants what HRAs are and how they are useful for responsible AI practitioners. It will include a discussion of the benefits and limitations of HRAs, how they are similar to or different from other types of assessments and audits, and how they can be helpfully integrated into common responsible AI practices. The tutorial will also include a step-by-step walkthrough of HRA methodology, which participants will practice applying to a hypothetical AI product.

What is Sociotechnical AI Safety? A participatory workshop about defining and expanding responses to sociotechnical risk in AI Safety
Dialogue/Implications Tutorial
Andrew Smart, Google Research; Shazeda Ahmed, UCLA; Jake Metcalf, Data & Society; Atoosa Kasirzadeh, CMU, Google Research; Luca Belli, UC Berkeley; Shalaleh Rismani, McGill; Roel Dobbe, TU Delft; Abbie Jacobs, University of Michigan; Joshua A. Kroll, NPS; Donald Martin Jr., Google Research; Renee Shelby, Google Research; Heidy Khlaaf, BSI; Genevieve Smith, UC Berkeley

Our goal is to invite discussion and critique of the currently dominant ideas around AI safety and to shed light on alternative research. The purpose of this tutorial session is to give space to well-established research fields, such as systems safety engineering and sociotechnical work in labor studies, that have received less attention than work on alignment or the control of existential risks. At the same time, the session aims to critique and expand the current understanding of AI safety in order to offer a path forward for research and practice that centers equity, participatory approaches, community inclusion, and an expanded view of which kinds of expertise are relevant. This research program focuses on current, actual societal harms from the development and deployment of AI systems, and it adapts safety and systems science and engineering approaches to the problem of mitigating risk from these systems, relating existing and emerging technical tools to sociotechnical risks in structured and scientific ways. These approaches are in turn informed by critical social science research, so that a synthesis between societal understanding and organizational and technical risk mitigation actually reduces harm to society. Finally, this research program sees the problem of AI safety not as a technical or mathematical problem, but rather as a social, organizational, political, and cultural problem of guiding the development and use of technology. The problem takes on particular urgency as policy responses such as the creation of the U.S. AI Safety Institute and the passage of the EU AI Act demand operationalizing AI safety in ways that capture sociotechnical risks.
