ACM FAccT 2023 Tutorials

The goal of tutorials is to broaden the perspective of our interdisciplinary community, addressing practical, technical, policy, regulatory, ethical, or societal issues related to FAccT.
We solicited three types of tutorials: Translation Tutorials to foster dialogue between disciplines, Implications Tutorials to describe the effects of algorithmic systems in society, and Practice Tutorials focused on specific tools or frameworks.

Translation Tutorials

A Hands-On Introduction to Large Language Models for Fairness, Accountability, and Transparency Researchers
Maria Antoniak* (Allen Institute for AI), Melanie Walsh* (University of Washington), Luca Soldaini* (Allen Institute for AI), David Mimno* (Cornell University), and Matthew Wilkens (Cornell University)

This tutorial will offer a hands-on, technical introduction to large language models (LLMs) for fairness, accountability, and transparency researchers who might have less familiarity with the inner workings of these models but who are interested in exploring, auditing, or anticipating their capabilities. We will focus on building practical knowledge of (a) how these models work and how they are trained and (b) how practitioners can work with these models, via hands-on, accessible coding tutorials. Throughout the tutorial, we will focus on potential use cases that we believe are of particular interest, such as measuring biases (e.g., between vector representations, in generative outputs), analyzing training data and output coverage and attribution, and examining outputs for private information and toxicity. We will also discuss particular ways that FAccT researchers can contribute to improving the design and release of these models. Tutorial materials can be found at www.bertforhumanists.org.

Steering Language Models with Reinforcement Learning from Human Feedback and Constitutional AI
Amanda Askell (Anthropic), Deep Ganguli* (Anthropic), and Nathan Lambert* (Hugging Face)

Reinforcement learning from human feedback (RLHF) is a recent technique that has dramatically improved the real-world performance and user experience of large language models, both increasing their helpfulness and actively reducing their harms. Constitutional AI (CAI) is a technique built on RLHF that reduces the amount and variety of human feedback required. In this tutorial, we will provide a high-level overview of RLHF and CAI. We will describe the technical processes and procedures required to make RLHF and CAI work, and lead a discussion on their advantages as well as current challenges and limitations.

Practices and limitations of participatory methods: views from computer science, political science and design
Emily Black* (Stanford University), Sofia Bosch Gomez (Northeastern University), and Luisa Godinez-Puig (Urban Institute)

In response to harmful algorithmic systems, the machine learning (ML) community has called for greater community engagement in the research and development of public-facing AI systems. The hope is that stakeholder participation will lead to more equitable outcomes, whether in creating less harmful AI systems or more transformative research products. Despite some successes, there remain unknowns regarding the challenges and limitations of using participatory research techniques in the AI lifecycle.
This tutorial aims to inform the FAccT community about how participatory methods can be used in the AI lifecycle, while also highlighting the shortcomings of participatory approaches through case studies. Participatory methodologies do not inherently focus on increasing equity; equitable processes must be centered in project design to avoid unintended harmful practices. The tutorial aims to give participants a better understanding of the main tenets of participatory methodologies and their practical limitations.

Using Technical Skills to Fight Actual Public Benefits Cuts and Austerity Policies, with the Benefits Tech Advocacy Hub
Emma Weil* (Upturn) and Elizabeth Edwards* (National Health Law Program)

The U.S. social safety net is designed to determine whether someone is “truly deserving” of assistance: a distinction rooted in racist, ableist, xenophobic, and sexist scrutiny of the autonomy of marginalized people. These determinations are increasingly made by standardized assessments and operationalized within large software systems, which often fail, affecting people at mass scale. When people cannot get the resources they need, they suffer compounding economic and health consequences, sometimes severely. This tutorial will give technologists an introduction to how they can support on-the-ground challenges to public benefits technology, as well as crucial context about the politics and history of the U.S. social safety net. This “technical assistance” to advocates is a concrete way that members of the FAccT community can use their skills to intervene in government use of technology that denies people access to essential support, while not conceding to austerity logics.

Contextualizing AI with Cross-Cultural Perspectives
Aida Davani* (Google Research) and Sunipa Dev* (Google Research)

Training and evaluation of AI models rely heavily on semi-structured data annotated by humans. Both the data and the human perspectives involved in this process thus play a key role in what models take as ground truth. Historically, this perspective has been Western-oriented, leading to a lack of representation of global contexts and identities in models and evaluation strategies, and to the risk of disregarding the marginalized groups most significantly affected by implicit harms. Accounting for cross-cultural differences in how people interact with technology is an important step toward building and evaluating AI holistically. We walk through different strategies, including participatory approaches and survey experiments, for capturing a more diverse set of perspectives in data curation and benchmarking efforts. We zoom in on which cultural differences can explain the human disagreements about language interpretation that inform model evaluations, and on how socio-culturally aware AI research can fill gaps in fairness evaluations.

A Guiding Framework for Vetting Technology Vendors Operating in the Public Sector
Cynthia Conti-Cook (The Ford Foundation), David Liu* (Northeastern University), Roya Pakzad (Taraaz), and Sarah Ariyan Sakha* (independent)

This tutorial aims to improve the vetting process for technology vendors operating in the public sector, focusing on bridging communication gaps between governments, civil society organizations, philanthropies, and technology vendors.
With the growing presence of technology vendors in the public sector, an effective and transparent vetting process is crucial. The tutorial is based on the framework developed by The Ford Foundation in collaboration with Taraaz. It contains a list of red flags across seven categories: theory of change and value proposition; business model and funding; organizational governance, policies, and practices; product design, development, and maintenance; third-party relationships, infrastructure, and supply chain; government relationships; and community engagement. The tutorial's goal is to better equip both funders and vendors to understand potential technological harms and limitations, while promoting dialogue on human rights, social and economic justice, and democratic values.

Implications Tutorials

Generative AI meets Responsible AI: Practical Challenges and Opportunities
Krishnaram Kenthapadi (Fiddler AI), Hima Lakkaraju* (Harvard University), and Nazneen Rajani* (Hugging Face)

Generative AI models and applications are being rapidly deployed across several industries, but they raise ethical and social considerations that need to be addressed. These include lack of interpretability, bias and discrimination, privacy, lack of model robustness, fake and misleading content, copyright implications, plagiarism, and environmental impact. This tutorial focuses on the need to adopt responsible AI principles when developing and deploying large language models and other generative AI models. It provides a technical overview of text and image generation models and highlights key responsible AI desiderata associated with these models. The tutorial also presents real-world generative AI use cases and practical solution approaches and guidelines for applying responsible AI techniques effectively. It concludes by discussing lessons learned and open research problems. We hope that our tutorial will inform researchers and practitioners, stimulate further research on responsible AI in the context of generative AI, and encourage building more reliable and trustworthy generative AI applications in the future. Tutorial materials can be found at https://sites.google.com/view/responsible-gen-ai-tutorial/.

AI Governance and Policy in the US: Spotlight on the Blueprint for an AI Bill of Rights
Sorelle Friedler* (Haverford College) and Marc Aidinoff* (Institute for Advanced Study)

US and international policy has increasingly focused on concerns around algorithmically driven harms, especially those relating to artificial intelligence (AI) and algorithmic discrimination. Last fall, the White House released the Blueprint for an AI Bill of Rights, which includes principles as well as a technical companion meant to help move policy from principles to practice.
In this tutorial, we will discuss cross-cutting questions that policy around AI must address and the concrete steps the FAccT research community can take to be helpful as policymakers struggle with these complex questions.

Practice Tutorials

Finding and Using Undocumented APIs for Algorithm Audits
Leon Yin* (The Markup), Piotr Sapiezynski* (Northeastern University), and Inioluwa Deborah Raji (Mozilla Foundation; University of California, Berkeley)

Data-driven journalism and external algorithm audits rely on rich, purpose-built datasets. Creating such datasets is far from trivial. Despite the promising trend of online platforms creating academic APIs and ad libraries, many pertinent questions cannot be answered with such curated information. Instead, journalists and auditors rely on bespoke tools to gather public data that has not yet been synthesized. The skills necessary for this kind of work are seldom taught in traditional coursework. This tutorial is a first step toward addressing that gap. We will cover case studies of investigative journalism and algorithm audits that focus on the technical challenges of data collection. We will introduce practical skills that we use every day in our work, while providing participants with a toolkit that will allow them to identify undocumented APIs in the wild and use them to collect the data relevant to their work. Tutorial materials can be found at https://inspectelement.org/apis.html.

Using the NIST AI Risk Management Framework
Elham Tabassi* (NIST), Reva Schwartz* (NIST), Kathy Baxter (Salesforce), Sina Fazelpour* (Northeastern University), Luca Belli* (NIST), and Patrick Hall* (BNH.AI, GWU)

AI system risks and resulting negative impacts can emerge for a variety of reasons and do not just stem from challenges with datasets, models, or algorithms. AI systems are built within organizational environments, based on individual and group decisions across enterprises, reflecting a variety of incentives and purposes. The contextual realities of AI system deployment are another contributing source of risk. The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) provides organizations with a guiding structure to operate within, and outcomes to aspire towards, based on their specific contexts, use cases, and skillsets. The rights-affirming framework operationalizes AI system trustworthiness within a culture of responsible AI practice and use. Tutorial attendees will learn how to use the AI RMF to enhance their organizational AI risk management posture and governance mechanisms, operationalize AI system trustworthiness, and bring contextual awareness into their organizational practices.

Integrating notions of fairness and demographic variance into large personalized advertising systems
Miranda Bogen* (Meta), Sean Gahagan* (Meta), and Aditya Srinivas Timmaraju (Meta)

Fairness in personalized ads has emerged as an area of significant focus for policymakers, regulators, civil rights groups, industry, and other stakeholders. Early efforts to address related concerns focused on preventing potential discrimination by changing how advertisers can use tools to target their ads (especially those offering housing, employment, or credit).
Over time, concerns have shifted to the potential for exclusion in the machine-learning-driven process that platforms often use to decide who within the target audience ultimately sees an ad. This tutorial will begin with a presentation detailing how the evolution of concerns about fairness in personalized advertising has played out in a practical setting, the gaps and trade-offs between theoretical recommendations and implementation constraints, and a novel approach developed to address these concerns. The presentation will be followed by reflections from one or two discussants and structured breakout conversations in which attendees can reflect on open questions raised by the presenters and discussants.

When the Rubber Meets the Road: Experience Implementing AI Governance in a Public Agency with the City of San José
Albert Gehami* (City of San José) and Leila Doty* (City of San José)

AI best practices are still developing as governments grapple with how to actually operationalize trustworthy AI principles. The City of San José is a leader among U.S. cities, having developed and applied an AI governance framework that promotes the procurement and deployment of trustworthy AI systems. In this session, practitioners from the City of San José discuss their new AI Review Framework based on existing guidance, their experience piloting the framework, and the practitioner skills necessary for its implementation. They identify the elements necessary for AI governance to be successful when “the rubber meets the road.”

Theories of Propaganda and New Technology: Applications and Interventions
Megan Hyska* (Northwestern University) and Michael Barnes* (Australian National University)

Innovations in communications technologies have always had consequences for the way that political actors try to influence one another; new technology, in other words, means new forms of propaganda. With the continued expansion of AI/ML technologies into more corners of our lives, it is crucial to understand how these technologies will alter the way that propaganda, and the strategies for mitigating its negative effects, might come to operate. This tutorial will be composed of two substantive mini-lectures followed by a structured discussion. We will begin with an overview of what propaganda is, focusing on theories that emphasize its role in the destruction, formation, and control of group agency. Next, we will demonstrate how this understanding of propaganda illuminates some potentially concerning uses of AI/ML technologies. Finally, our structured discussion will take up questions about how this notion of propaganda suggests different harm-reducing interventions on the part of developers and regulators.

Responsible AI Toolbox
Besmira Nushi* (Microsoft Research), Rahee Ghosh Peshawaria* (Microsoft Research), Mehrnoosh Sameki (Microsoft), Minsoo Thigpen* (Microsoft), and Wenxin Wei* (Microsoft)

The Responsible AI Toolbox is an open-source framework for accelerating and operationalizing Responsible AI via a set of interoperable tools, libraries, and customizable dashboards. The toolbox supports the machine learning lifecycle through the stages of identifying, diagnosing, and mitigating Responsible AI concerns, and then validating and comparing different mitigation actions.
In this tutorial, we will summarize and demonstrate the different tools available to the community today, and illustrate how they can be used together for debugging and improving machine learning models trained on different data types, including structured data, images, and text. Through several case studies and user stories, we will share how the tools are being used in practice and describe the main challenges faced during deployment or adoption. The tutorial will conclude by identifying future opportunities for open collaboration in this space that can enable participatory tool design and implementation. Tutorial materials can be found at https://www.microsoft.com/en-us/research/uploads/prod/2023/06/responsible_ai_toolbox_facct_tutorial_2023.pdf. (A minimal usage sketch of the toolbox appears at the end of this page.)

An asterisk (*) denotes an in-person tutorial presenter.
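For readers unfamiliar with the Responsible AI Toolbox described above, the following is a minimal sketch of the kind of workflow it supports, shown on a toy tabular classifier. It assumes the open-source responsibleai and raiwidgets packages; the package names and call signatures reflect the project as published and should be verified against its current documentation, and this sketch is not part of the presenters' tutorial materials.

```python
# Minimal sketch: collect explanations and error analysis for a toy model,
# then open them together in the Responsible AI dashboard.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from responsibleai import RAIInsights          # analysis container
from raiwidgets import ResponsibleAIDashboard   # interactive dashboard

# Toy structured-data classification task standing in for a real model.
data = load_breast_cancer(as_frame=True)
df = data.frame.rename(columns={"target": "label"})
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(train_df.drop(columns=["label"]), train_df["label"])

# Register the analyses to run (interoperable "tools" in the toolbox).
insights = RAIInsights(model, train_df, test_df,
                       target_column="label", task_type="classification")
insights.explainer.add()        # feature-importance explanations
insights.error_analysis.add()   # error cohorts and decision-tree view
insights.compute()

# Launch the customizable dashboard to inspect and compare the results.
ResponsibleAIDashboard(insights)
```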