<!DOCTYPE html> <html class="no-js"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> <title>ACM FAccT - 2022 Keynote Speakers</title> <meta name="description" content=""> <meta name="viewport" content="width=device-width"> <!--<link rel="stylesheet" href="/static/css/bootstrap.css"> <link rel="stylesheet" href="/static/css/bootstrap-theme.css">--> <link rel="stylesheet" href="/static/css/bootstrap-spacelab.min.css"> <link rel="stylesheet" href="/static/css/main.css"> <script src="/static/js/vendor/modernizr-2.6.2.js"></script> </head> <body> <div class="navbar navbar-default navbar-fixed-top"> <div class="container"> <div class="navbar-header"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse"> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class='navbar-brand' href='/'>ACM FAccT Conference</a> </div> <div class="navbar-collapse collapse"> <ul class="nav navbar-nav"> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">2025 <b class="caret"></b></a> <ul class="dropdown-menu"> <li class=""><a href='/2025/'>Home</a></li> <li class="divider"></li> <li class=""><a href='/2025/cfp'>Call for Papers</a></li> <li class="divider"></li> <li class=""><a href='/2025/aguide'>Author Guide</a></li> <li class=""><a href='/2025/rguide'>Reviewer Guide</a></li> <li class=""><a href='/2025/acguide'>AC Guide</a></li> <li class=""><a href='/2025/rform'>Sample Reviewer Form</a></li> <li class=""><a href='/2025/acform'>Sample AC Form</a></li> </ul> </li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">2024 <b class="caret"></b></a> <ul class="dropdown-menu"> <li class=""><a href='/2024/schedule'>Conference schedule</a></li> <li class=""><a href='/2024/acceptedcraft'>Accepted CRAFTs</a></li> <li class=""><a href='/2024/acceptedpapers'>Accepted Papers</a></li> <li class=""><a href='/2024/acceptedtutorials'>Accepted Tutorials</a></li> <li class=""><a href='/2024/acceptedces'>Accepted Community Empowerment Socials</a></li> <li class="divider"></li> <li class=""><a href='/2024/'>Home</a></li> <li class=""><a href='/2024/cfp'>Call for Papers</a></li> <li class=""><a href='/2024/callfordc'>Doctoral Colloquium Call for Applications</a></li> <li class=""><a href='/2024/cft'>Call for Tutorials</a></li> <li class=""><a href='/2024/cfpcraft'>Call for CRAFT</a></li> <li class=""><a href='/2024/cfces'>Call for Community Empowerment Socials</a></li> <li class=""><a href='/2024/cfv'>Call for Volunteers</a></li> <li class="divider"></li> <li class=""><a href='/2024/dei'>Diversity and Inclusion Programs</a></li> <li class=""><a href='/2024/deischolars'>DEI Scholars</a></li> <li class=""><a href='/2024/deisupport'>Participant Support</a></li> <li class=""><a href='/2024/community'>Community Agreements</a></li> <li class="divider"></li> <li class=""><a href='/2024/scholarships'>Financial Support</a></li> <li class=""><a href='/2024/sponsors'>Sponsors and Supporters</a></li> <li class=""><a href='/2024/committees'>Committees</a></li> <li class="divider"></li> <li class=""><a href='/2024/visainf'>Visa Information</a></li> <li><a href="https://cvent.me/xBrvwq">Conference Registration</a></li> </ul> </li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">2023 <b class="caret"></b></a> <ul class="dropdown-menu">
<li class=""><a href='/2023/'>Home</a></li> <li class=""><a href='/2023/registration-archive'>Registration and Financial Support (closed)</a></li> <li class=""><a href='/2023/harm-policy'>Statement on AI Harms and Policy</a></li> <li class=""><a href="https://form.typeform.com/to/aL59wmNs">Conference Survey</a></li> <li class="divider"></li> <li class=""><a href='/2023/schedule'>Program Schedule</a></li> <li class=""><a href='/2023/keynotes'>Keynote Speakers</a></li> <li class=""><a href='/2023/acceptedpapers'>Accepted Papers</a></li> <li class=""><a href='/2023/acceptedtuts'>Accepted Tutorials</a></li> <li class=""><a href='/2023/acceptedcraft'>Accepted CRAFTs</a></li> <li class=""><a href='/2023/closedcalls'>Closed Calls</a></li> <li class="divider"></li> <li class=""><a href='/2023/del'>Diversity and Inclusion Programs</a></li> <li class=""><a href='/2023/deischolars'>DEI Scholars</a></li> <li class=""><a href='/2023/community'>Community Agreements</a></li> <li class="divider"></li> <li class=""><a href='/2023/sponsorship_policy'>Sponsorship Policy</a></li> <li class=""><a href='/2023/sponsors'>Sponsors and Supporters</a></li> <li class=""><a href='/2023/funding_sources_disclosure'>Disclosure of Funding Sources</a></li> <li class="divider"></li> <li class=""><a href='/2023/committees'>Committees</a></li> </ul> </li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">2022 <b class="caret"></b></a> <ul class="dropdown-menu"> <li class=""><a href='/2022/'>Home</a></li> <li class=""><a href='/2022/guide'>Conference Guide</a></li> <li class=""><a href='/2022/schedule'>Program Schedule</a></li> <li class="active"><a href='/2022/keynotes'>Keynotes</a></li> <li class=""><a href='/2022/acceptedpapers'>Papers</a></li> <li class=""><a href='/2022/prizes'>Paper prizes</a></li> <li class=""><a href='/2022/acceptedcraft'>CRAFT Sessions</a></li> <li class=""><a href='/2022/acceptedtuts'>Tutorials</a></li> <li class=""><a href='/2022/dcpart'>Doctoral Consortium</a></li> <li class=""><a href='/2022/visas'>Venue, Travel, and COVID Information</a></li> <li class=""><a href='/2022/registration'>Registration</a></li> <li class=""><a href='/2022/closedcalls'>Closed Calls</a></li> <li class=""><a href='/2022/scholarships'>Financial Support</a></li> <li class="divider"></li> <li class=""><a href='/2022/deischolars'>DEI Scholars</a></li> <li class=""><a href='/2022/outputs'>CRAFT Outputs</a></li> <li class=""><a href='/2022/authors'>Instructions for Authors</a></li> <li class=""><a href='/2022/merch'>Conference Memorabilia</a></li> <li class=""><a href='/2022/faq'>FAQ</a></li> <li class="divider"></li> <li class=""><a href='/2022/committees'>Committees</a></li> <li class=""><a href='/2022/inclusion'>Diversity and Inclusion</a></li> <li class=""><a href='/2022/community'>Community Agreements</a></li> <li class=""><a href='/2022/sponsorship_policy'>Sponsorship Policy</a></li> <li class=""><a href='/2022/sponsors'>Sponsors and Supporters</a></li> <li class=""><a href='/2022/funding_sources_disclosure'>Disclosure of Funding Sources</a></li> </ul> </li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">2021 <b class="caret"></b></a> <ul class="dropdown-menu"> <li class=""><a href='/2021/'>Home</a></li> <li class="{'% if_current_page 'https://2021.facctconference.org/ 'active' '' %}"><a href="https://2021.facctconference.org/">Online conference hub</a></li> <li class=""><a href='/2021/welcome'>Welcome from the General Chairs</a></li> <li class=""><a 
href='/2021/community'>Community Agreements</a></li> <li class=""><a href='/2021/inclusion'>Diversity and inclusion</a></li> <li class=""><a href='/2021/registration'>Registration</a></li> <li class="divider"></li> <li class=""><a href='/2021/programschedule'>Program Schedule</a></li> <li class=""><a href='/2021/keynotes'>Keynote Speakers</a></li> <li class=""><a href='/2021/acceptedpapers'>Accepted Papers</a></li> <li class=""><a href='/2021/acceptedtuts'>Accepted Tutorials</a></li> <li class=""><a href='/2021/acceptedcraftsessions'>Accepted CRAFT Sessions</a></li> <li class="divider"></li> <li class=""><a href='/2021/cfp'>Call for Papers (Closed)</a></li> <li class=""><a href='/2021/cft'>Call for Tutorials (Closed)</a></li> <li class=""><a href='/2021/cfw'>Call for CRAFT proposals (Closed)</a></li> <li class=""><a href='/2021/callfordc'>Call for Doctoral Consortium (Closed)</a></li> <li class=""><a href='/2021/callforvolunteers'>Call for Volunteers (Closed)</a></li> <li class="divider"></li> <li class=""><a href='/2021/committees'>Committees</a></li> <li class=""><a href='/2021/sponsorship'>Sponsors and Supporters</a></li> <li class=""><a href='/2021/sponsorship_policy'>Sponsorship Policy</a></li> <li class="divider"></li> <li class=""><a href='/2021/press-release'>ACM Press release</a></li> <li class=""><a href='/2021/racialequityjustice'>Commitment to Racial Equity + Justice</a></li> <li class=""><a href='/2021/2021_online'>Covid-19 Update</a></li> </ul> </li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">2018-2020 <b class="caret"></b></a> <ul class="dropdown-menu"> <li class=""><a href='/2020/'>FAT*2020 Barcelona</a></li> <li class=""><a href='/2019/'>FAT*2019 Atlanta</a></li> <li class=""><a href='/2018/'>FAT*2018 New York</a></li> </ul> </li> </ul> <ul class="nav navbar-nav navbar-right"> <!--<li class=""><a href="/index.html">Home</a></li>--> <li><a href="https://facct-blog.github.io/">Blog</a></li> <li class=""><a href="/network/">Network</a></li> <li class=""><a href='/connect'>Connect</a></li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">Organization <b class="caret"></b></a> <ul class="dropdown-menu"> <!-- <li class=""><a href="/nominate.html">Open Call for SC Nominations</a></li> --> <li class="divider"></li> <li class=""><a href='/organization'>People & Committees</a></li> <li class=""><a href='/organization-sc'>Steering Committee</a></li> <li class=""><a href='/documents'>Governing Documents</a></li> <li class=""><a href='/harassment'>Anti-Discrimination & Harassment Policy</a></li> <li class=""><a href='/sponsorship'>Sponsorship Policy</a></li> <li class=""><a href='/faq'>FAQ</a></li> <li class="divider"></li> <li class=""><a href='/2023/humanrights'>EC Statement on Technology and Human Rights</a></li> <li class=""><a href='/warfare'>EC Statement on AI Warfare</a></li> </ul> </li> </ul> </div> <!--/.navbar-collapse --> </div> </div> <div class="container"> <div class="page-header"> <h1>ACM FAccT 2022 Keynote Speakers</h1> </div> </div> <div class="container"> <div class="row"> <h1><strong>Keynote Lectures</strong></h1> </div> </div> <div class="container"> <div class="row"> <h2><strong><u>Day 1</u></strong></h2> </div> </div> <div class="container"> <div class="row"> <h3><strong>The Future of News on Social Platforms</strong></h3> <h3 id="Cha"><a href="https://cs.kaist.ac.kr/people/view?idx=418&kind=faculty&menu=160"><strong>Meeyoung Cha (In-Person)</strong></a></h3> <span class="label label-success"><a
href="https://youtu.be/eVJnNJOhn-k">Video <span class="glyphicon glyphicon-facetime-video"></span></a></span> <a href="https://cs.kaist.ac.kr/people/view?idx=418&kind=faculty&menu=160"><img src="/static/images/Cha_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>The majority of news discovery and reading now takes place on social media platforms. This means that the ranking of news stories is determined by the platform's algorithm, which uses engagement metrics like views, clicks, and shares rather than journalistic conventions. This talk will cover some of the social issues that algorithmic amplification causes, such as news ranking, clickbait, misinformation, and fact-checking bias. I will also share South Korea's experience with confronting the propagation of misinformation during COVID-19 and the Facts Before Rumors campaign that our group launched.</p> <p>About the Facts Before Rumors campaign (<a href="https://ibs.re.kr/fbr/">https://ibs.re.kr/fbr/</a>): During the early days of the pandemic, we launched an online campaign to debunk COVID-19 rumors that disseminated accurate coronavirus-related information to over 50,000 individuals in 151 countries. The campaign aimed to collect fact-checked information from regions that had already suffered from the infodemic and spread it to other regions where the infodemic was in its infancy. Alongside our campaign, we conducted a series of research projects to understand what kind of coronavirus-related information was being shared online. Focusing on misinformation, we quantified the spread of COVID-19 misinformation through survey studies.</p> <p>Meeyoung Cha is an associate professor at the Korea Advanced Institute of Science and Technology (KAIST). Her research is on data science with an emphasis on modeling socially relevant information propagation processes. Her work on misinformation, poverty mapping, fraud detection, and long-tail content has gained more than 17,000 citations. She worked at Facebook's Data Science Team as a Visiting Professor and is a recipient of the Korean Young Information Scientist Award and AAAI ICWSM Test of Time Award. She is currently jointly affiliated as a Chief Investigator at the Institute for Basic Science (IBS) in Korea.</p> </div> </div> <br> <br> <div class="container"> <div class="row"> <h2><strong><u>Day 2</u></strong></h2> </div> </div> <div class="container"> <div class="row"> <h3><strong>Towards Value Based NLP</strong></h3> <h3 id="fung"><a href="https://pascale.home.ece.ust.hk/"><strong>Pascale Fung (Online)</strong></a></h3> <span class="label label-success"><a href="https://youtu.be/86zfYMXgr9w">Video <span class="glyphicon glyphicon-facetime-video"></span></a></span> <a href="https://pascale.home.ece.ust.hk/"><img src="/static/images/Fung_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>The AI “arms race” has reached a point where different organizations in different countries are competing to build ever larger “language” models in text, in speech, in image and so on, trained from ever larger collections of databases. These pre-trained models have so far proven to be extremely powerful in enabling zero-shot and few-shot learning of new tasks. Meanwhile, with great power comes great responsibility. 
Our society in general, and our users in particular, are demanding that AI technology be more responsible – more robust, fairer, more explainable, more trustworthy. Natural language processing technologies built on top of these large pre-trained language models are expected to align with these and other human “values” in deployment because they impact our lives directly.</p> <p>The core challenge of “value-aligned” NLP (or AI in general) is twofold: 1) What are these values, and who defines them? 2) How can NLP algorithms and models be made to align with these values?</p> <p>In fact, different cultures and communities might have different approaches to ethical issues. Even when people from different cultures happen to agree on a set of common principles, they might disagree on the implementation of those principles. It is therefore necessary that we anticipate value definition to be dynamic and multidisciplinary. I propose that we modularize value definitions as external to the development of NLP algorithms and of large pretrained language models, and that we encapsulate the language model to preserve its integrity. I also argue that value definition should not be left in the hands of NLP/AI researchers or engineers. At best, we can be involved at the stage of value definition, but engineers and developers should not be the decision makers on what those values should be. In addition, some values are now enshrined in legal requirements, which argues further that value definition should be disentangled from algorithm and model development.</p> <p>The history of NLP applications has so far focused on clearly defined task-completion objectives – answering factoid questions with unambiguous answers; summarizing articles with salient information; translating faithfully to the source; classifying texts into distinct categories; fulfilling user queries and commands in task-oriented dialog systems; and so on. I propose that it behooves us to develop NLP systems with the ability to explicitly align with human values. In this talk, I will present initial experiments on value-based NLP, in which the input to an NLP system can include human-defined values or ethical principles that yield different output results. I propose that many NLP tasks, from classification to generation, should output results according to human-defined principles for better performance and explainability. I will introduce (1) AiSocrates, a new task of answering ethical quandary questions according to different moral philosophy principles; (2) initial experiments on sexism classification with different definitions of sexism; and (3) NeuS, a multi-document news summarization system that filters out framing bias in the source news stories in order to provide the reader with an additional, neutral perspective on events.</p> <p>Pascale Fung is a Professor in the Department of Electronic & Computer Engineering and the Department of Computer Science & Engineering at The Hong Kong University of Science & Technology (HKUST), and a visiting professor at the Central Academy of Fine Arts in Beijing.
She is an elected Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) for her "significant contributions to the field of conversational AI and to the development of ethical AI principles and algorithms", and an elected Fellow of the Association for Computational Linguistics (ACL) for her “significant contributions towards statistical NLP, comparable corpora, and building intelligent systems that can understand and empathize with humans”. She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for her “contributions to human-machine interactions” and an elected Fellow of the International Speech Communication Association for “fundamental contributions to the interdisciplinary area of spoken language human-machine interactions”. She is the Director of the HKUST Centre for AI Research (CAiRE), an interdisciplinary research centre spanning all four schools at HKUST, and co-founded the Human Language Technology Center (HLTC). She is an affiliated faculty member of the Robotics Institute and the Big Data Institute at HKUST, and the founding chair of the Women Faculty Association at HKUST. She is an expert on the Global Future Council, a think tank for the World Economic Forum, and represents HKUST on the Partnership on AI to Benefit People and Society. She is on the Board of Governors of the IEEE Signal Processing Society and a member of the IEEE working group developing the Recommended Practice for Organizational Governance of Artificial Intelligence standard. Her research team has won several best and outstanding paper awards at ACL and NeurIPS workshops.</p> </div> </div> <br> <br> <div class="container"> <div class="row"> <h2><strong><u>Day 3</u></strong></h2> </div> </div> <div class="container"> <div class="row"> <h3><strong>The Rise of Hypersocial Artificial Intelligence</strong></h3> <h3 id="Cuellar"><a href="https://carnegieendowment.org/experts/2135"><strong>Mariano-Florentino (Tino) Cuéllar (In-Person)</strong></a></h3> <span class="label label-success"><a href="https://youtu.be/accp-2zAi_w">Video <span class="glyphicon glyphicon-facetime-video"></span></a></span> <a href="https://carnegieendowment.org/experts/2135"><img src="/static/images/Cuellar_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>This talk will explore some of the key possibilities, risks, and questions associated with what could be called “hypersocial” artificial intelligence: increasingly sophisticated AI systems capable of shaping and carrying on complex social relationships between and with humans, across a variety of social contexts. From its earliest days, AI had the potential to help us understand and affect a major facet of the human experience facilitated by natural intelligence: our linguistic and social interactions at work, in our personal and economic lives, and in our civic engagement. Today, our technology infrastructure is increasingly shaping social life given the convergence of machine learning techniques, internet platforms, growing computational power, norms supporting our routine use of computers in daily life, and vast troves of data. Given the progress achieved even in the last few years with foundation models, the world will likely experience further advances in the capabilities of AI systems and greater deployment of these systems in social, economic, and political spheres.
We may eventually interact with and depend on AI technology that becomes increasingly fluent in those social domains –– of conversation and camaraderie, parenting, collective deliberation, citizenship, diplomacy, and critical decision-making about value-laden issues –– that (for many people) help define what it means to be human. Hypersocial AI can spur improvements in the human condition but also cultural and political conflict, and ultimately, choices among goals that will sometimes be difficult or impossible to reconcile.</p> <p>Beyond the possible impacts of hypersocial AI on our daily lives, what trade-offs and governing dilemmas should we bear in mind as we consider how much to incorporate hypersocial AI into the web of human relationships? How might hypersocial AI shape geopolitical dynamics? Most fundamentally, hypersocial AI may force us to ponder the implications of differing visions for the future of AI that draw from the work of scholars who have prioritized AI as a means for optimizing decision-making and that of thinkers who have focused more on AI research as an intellectual domain for understanding the complexities of human discourse, collaboration, and value-choices.</p> <p>Mariano-Florentino (Tino) Cuéllar is the president of the Carnegie Endowment for International Peace –– the oldest think tank in America and the only one dedicated to pursuing global security and peace through its operations in the United States and India, China, Belgium, Lebanon, and Russia. Cuéllar previously served for nearly seven years as a justice on the Supreme Court of California, the highest court of America’s largest judiciary, and led the courts’ efforts to better meet the needs of millions of limited-English speakers. Before that, he was the Stanley Morrison Professor at Stanford Law School and director of Stanford University’s Freeman Spogli Institute for International Studies. He served two U.S. presidents in a variety of roles in the federal government, including as special assistant to the president for justice and regulatory policy at the White House in the Obama administration. He chairs the board of the William and Flora Hewlett Foundation. Born in Matamoros, Mexico, he grew up primarily in communities along the U.S.-Mexico border. He graduated from Harvard and Yale Law School, and obtained a Ph.D. in political science from Stanford.</p> </div> </div> <br> <br> <div class="container"> <div class="row"> <h2><strong><u>Day 4</u></strong></h2> </div> </div> <div class="container"> <div class="row"> <h3><strong>It's Not the Data: weak tie algorithmic sociality and digital culture</strong></h3> <h3 id="brock"><a href="https://www.lmc.gatech.edu/people/person/andre-brock"><strong>Andre Brock (Online)</strong></a></h3> <span class="label label-success"><a href="https://youtu.be/Nx3N1961t08">Video <span class="glyphicon glyphicon-facetime-video"></span></a></span> <a href="https://www.lmc.gatech.edu/people/person/andre-brock"><img src="/static/images/Brock_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Conversations around the spread of mis/disinformation often revolve around the role of algorithms in disseminating and radicalizing racialized conspiracies to “naive” internet users. This is most immediately obvious in reporting and research on the viral spread of QAnon conspiracy theories on Instagram, YouTube, and other social media services. 
I offer instead that the concept of weak tie racism can help us to understand how racism becomes encoded into algorithmic and other computational modes (e.g., neural nets and Twitter’s recent image crop “problem”), in the process stimulating libidinal economies of white supremacist ideology latent in Western online populations.</p> <p>Weak tie racism, briefly stated, is a concept I introduced in Distributed Blackness (2020). It is based on Granovetter's (1973) hugely influential argument that weak ties are enormously productive in spreading information. My contribution to this theory, drawing from social informatics, is that instead of conceptualizing the ties between individuals as based on comity, we should consider racism as the glue between enculturated social nodes. Moreover, the role of computation in sharing weak-tie information (e.g., how we have come to rely on search engines for authoritative information because of the algorithm) leads to my argument that the computer itself is an agent and practitioner of racist activity and belief as it uncritically shapes the ways discourses around race and racism are presented, shared, and discussed.</p> <p>André Brock is an associate professor of media studies at Georgia Tech. He writes on Western technoculture and Black cybercultures; his scholarship examines race in social media, video games, blogs, and other digital media. His book, <em>Distributed Blackness: African American Cybercultures</em> (NYU Press, 2020), the 2021 winner of the Harry Shaw and Katrina Hazzard-Donald Award for Outstanding Work in African-American Popular Culture Studies and the 2021 Nancy Baym Book Award, theorizes Black everyday lives mediated by networked technologies.</p> </div> </div> <br> <br> <div class="container"> <div class="row"> <h1><strong>Keynote Panels</strong></h1> </div> </div> <div class="container"> <div class="row"> <h2><strong><u>Day 1</u></strong></h2> </div> </div> <div class="container"> <div class="row"> <h3 id="implement"><strong>Keynote Panel: Implementing Intersectionality in Algorithmic Fairness</strong></h3> <span class="label label-success"><a href="https://youtu.be/Ob0uRVLvZhY">Video <span class="glyphicon glyphicon-facetime-video"></span></a></span> <p>Intersectionality has become an important area of research in the detection and mitigation of algorithmic bias. Often the ways in which discrimination manifests in sociotechnical systems can be hidden when evaluating fairness on distinct demographic features, without considering how identities and experiences might intersect in unique ways. For example, in her seminal work on intersectionality, Kimberlé Crenshaw highlighted the ways in which U.S. antidiscrimination law failed to protect Black women from discrimination in contexts where their intersectional experience was not just the sum of racism and sexism. Reflecting intersectionality in practice, however, can be difficult given the many possible identities of interest and the nuanced ways in which they can interact. This panel will explore different possible approaches for implementing intersectionality in algorithmic fairness.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="foulds"><a href="http://jfoulds.informationsystems.umbc.edu/"><strong>James Foulds (In-person)</strong></a></h3> <a href="http://jfoulds.informationsystems.umbc.edu/"><img src="/static/images/foulds_headshot.jpeg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Dr.
James Foulds is an Assistant Professor in the Department of Information Systems at the University of Maryland, Baltimore County (UMBC). His research aims to improve the role of artificial intelligence in society by addressing issues of fairness, bias, and privacy, and by promoting the practice of computational social science. His master's and bachelor's degrees were earned with first-class honours at the University of Waikato, New Zealand, where he also contributed to the Weka data mining system. He earned his Ph.D. in computer science at the University of California, Irvine, and was a postdoctoral scholar at the University of California, Santa Cruz, followed by the University of California, San Diego. His research in socially conscious artificial intelligence and machine learning has been supported by the NSF CAREER award, the NSF CISE Research Initiation Initiative (CRII) Award, and several other grants from NSF and NIST. He has served in organizing roles for the AISTATS conference, the ITA workshop, and the Ethics in Data Science Pedagogy workshop (EDSP).</p> </div> </div> <div class="container"> <div class="row"> <h3 id="youjin"><a href="https://www.youjinkong.com/"><strong>Youjin Kong (In-person)</strong></a></h3> <a href="https://www.youjinkong.com/"><img src="/static/images/youjin_headshot.jpeg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Youjin Kong will be an Assistant Professor in Philosophy at the University of Georgia, starting in August 2023. She is currently a Visiting Assistant Professor in Philosophy at Oregon State University. Located at the nexus of the ethics of AI, social-political philosophy, and feminist philosophy, her research critically analyzes how AI reproduces gender and racial injustice, and develops philosophical frameworks for promoting fairness in AI. She is also committed to advancing the field of Asian American feminist philosophy, which remains underrepresented in the philosophy literature. She teaches courses on ethical issues arising from emerging technologies, as well as courses in social and moral philosophy.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="rankin"><a href="https://yolandarankin.com/"><strong>Yolanda Rankin (Online)</strong></a></h3> <a href="https://yolandarankin.com/"><img src="/static/images/rankin_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Dr. Yolanda A. Rankin is an Assistant Professor in the <a href="https://ischool.cci.fsu.edu/">School of Information</a> at Florida State University. As the Director of the <strong>DE</strong>signing <strong>T</strong>echn<strong>O</strong>logies for the <strong>U</strong>nde<strong>R</strong>served (DETOUR) Research Lab, she merges Black feminist epistemologies with participatory design practices to understand the diverse perspectives and information needs of minoritized populations who are often denied access to technology or given only limited access as consumers. Leveraging Black feminist thought and intersectionality as critical frameworks, her research reveals (1) how intersecting identities and systems of power impact Black women’s ability to persist in the field of computing and (2) how centering the lived experiences of Black people and other historically excluded populations contributes to more equitable design practices. A McKnight Fellow (2020-2021) and a Woodrow Wilson Fellow (2016), Dr.
Rankin has published more than 40 peer-reviewed publications, including journal articles, conference papers, and books. Prior to entering academia, she accumulated more than twelve years of industry experience while employed at IBM Research – Almaden in San Jose, CA, and Lucent Technologies Bell Labs in Naperville, IL. Yolanda completed her Ph.D. in Computer Science at Northwestern University, her M.A. in Computer Science at Kent State University, and her B.S. in Mathematics at Tougaloo College, a historically Black college in Jackson, Mississippi.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="olga"><a href="https://www.cs.princeton.edu/~olgarus/"><strong>Olga Russakovsky (Online)</strong></a></h3> <a href="https://www.cs.princeton.edu/~olgarus/"><img src="/static/images/olga_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Dr. Olga Russakovsky is an Assistant Professor in the Computer Science Department at Princeton University. Her research is in computer vision, closely integrated with the fields of machine learning, human-computer interaction and fairness, accountability and transparency. She has been awarded AnitaB.org's Emerging Leader Abie Award in honor of Denice Denton in 2020, the CRA-WP Anita Borg Early Career Award in 2020, the MIT Technology Review's 35-under-35 Innovator award in 2017, the PAMI Everingham Prize in 2016, and Foreign Policy Magazine's 100 Leading Global Thinkers award in 2015. In addition to her research, she co-founded and continues to serve on the Board of Directors of AI4ALL, a nonprofit dedicated to increasing diversity and inclusion in Artificial Intelligence (AI). She completed her PhD at Stanford University in 2015 and her postdoctoral fellowship at Carnegie Mellon University in 2017.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="dwork"><a href="https://dwork.seas.harvard.edu/"><strong>Cynthia Dwork (Online)</strong></a></h3> <a href="https://dwork.seas.harvard.edu/"><img src="/static/images/dwork_headshot.jpeg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Cynthia Dwork, Gordon McKay Professor of Computer Science at Harvard, Affiliated Faculty at Harvard Law School and the Department of Statistics, and Distinguished Scientist at Microsoft, is renowned for placing privacy-preserving data analysis on a mathematically rigorous foundation. She has also made seminal contributions in algorithmic fairness, cryptography, and distributed computing. Her current focus is on the theory of algorithmic fairness. Dwork is the recipient of numerous awards including the IEEE Hamming Medal, the Gödel Prize, and the ACM Paris Kanellakis Theory and Practice Award.
Dwork is a member of the US National Academy of Sciences and the US National Academy of Engineering, and is a Fellow of the American Academy of Arts and Sciences and the American Philosophical Society.</p> </div> </div> <br> <br> <div class="container"> <div class="row"> <h2><strong><u>Day 2</u></strong></h2> </div> </div> <div class="container"> <div class="row"> <h3 id="boss"><strong>Keynote Panel: Bossware and Algorithmic Management</strong></h3> <span class="label label-success"><a href="https://youtu.be/aeUkUzCFsz4">Video <span class="glyphicon glyphicon-facetime-video"></span></a></span> <p>In recent years, accelerated by the global pandemic, employers in all economic sectors, from low-wage to high-tech, have introduced computational tools that reduce workers' digital and physical autonomy, while transferring value from employees to employers, and risk in the other direction. This keynote panel, featuring scholars from across STS, law, computer science and philosophy, will explore the rise of bossware and algorithmic management, highlighting not only the key empirical trends, but also their underlying causes, and how workers, researchers, and regulators can fight back.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="Borgesius"><a href="https://www.ru.nl/personen/zuiderveen-borgesius-f/"><strong>Frederik Zuiderveen Borgesius (In-Person)</strong></a></h3> <a href="https://www.ru.nl/personen/zuiderveen-borgesius-f/"><img src="/static/images/borgesius_headshot.jpeg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Prof. Frederik Zuiderveen Borgesius is Professor of ICT and Law at Radboud University Nijmegen, where he is affiliated with Radboud's interdisciplinary research hub on digitalization and society: the iHub. His research mostly concerns human rights, such as the rights to privacy, to the protection of personal data, and to non-discrimination, in the context of new technologies.</p> <p>In 2019, he wrote <a href="https://www.coe.int/en/web/european-commission-against-racism-and-intolerance/-/-discrimination-artificial-intelligence-and-algorithmic-decision-making-">a report on discrimination, artificial intelligence and algorithmic decision-making</a> for the Council of Europe.</p> <p>More info at:</p> <ul> <li><a href="https://www.ru.nl/personen/zuiderveen-borgesius-f/">https://www.ru.nl/personen/zuiderveen-borgesius-f/</a></li> <li><a href="https://twitter.com/FBorgesius">https://twitter.com/FBorgesius</a></li> <li><a href="https://works.bepress.com/frederik-zuiderveenborgesius/">https://works.bepress.com/frederik-zuiderveenborgesius/</a></li> </ul> </div> </div> <div class="container"> <div class="row"> <h3 id="lee"><a href="http://minlee.net/"><strong>Min Kyung Lee (In-Person)</strong></a></h3> <a href="http://minlee.net/"><img src="/static/images/Lee_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Min Kyung Lee is an assistant professor in human-computer interaction in the School of Information at the University of Texas at Austin. Dr. Lee has conducted some of the first studies that empirically examine the social implications of algorithms’ emerging roles in management and governance in society. She has extensive expertise in developing theories, methods and tools for human-centered AI and deploying them in practice through collaboration with real-world stakeholders and organizations.
She developed a participatory framework that empowers community members to design matching algorithms that govern their own communities. Dr. Lee is a Siebel Scholar and has received the Allen Newell Award for Research Excellence, research grants from NSF and Uptake, six best paper awards and honorable mentions, and two demo/video awards at venues such as CHI, CSCW, DIS, and HRI. She is an Associate Editor of Human-Computer Interaction and a Senior Associate Editor of ACM Transactions on Human-Robot Interaction.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="wilneida"><a href="https://wilneida.com/"><strong>Wilneida Negron (In-Person)</strong></a></h3> <a href="https://wilneida.com/"><img src="/static/images/Wilneida_headshot.jpeg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Dr. Wilneida Negron is the Director of Research and Policy at Coworker.org. She most recently worked at the Ford Foundation, where she led cross-thematic area strategy development between the Gender, Race, Ethnic Justice, Technology and Society, Mission Investing, Future of Work(ers), and Civic Engagement thematic areas, with a focus on helping labor movements deepen and leverage economic partnerships and movement-based partnerships. She is a lifelong fellow at the Data & Society Research Institute and the Atlantic Institute for Racial Equity.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="qadri"><a href="https://ridaqadri.net/"><strong>Rida Qadri (In-Person)</strong></a></h3> <a href="https://ridaqadri.net/"><img src="/static/images/qadri_headshot.jpeg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Rida is a Research Scientist with Google's Ethical AI team. She received her PhD in Computational Urban Science from the Massachusetts Institute of Technology. Her research examines how communities respond to, repair, and resist algorithmic systems in non-Western urban spaces. She is particularly interested in making visible the algorithmic failures and frictions caused by culturally inappropriate technological design.</p> </div> </div> <br> <br> <div class="container"> <div class="row"> <h2><strong><u>Day 3</u></strong></h2> </div> </div> <div class="container"> <div class="row"> <h3 id="community"><strong>Community Keynote: How to Bargain with a Black Box: Auditing an Algorithmic Pay Change With a Community-Led Audit</strong></h3> <span class="label label-success"><a href="https://www.youtube.com/watch?v=TgxrcohfsGg">Video <span class="glyphicon glyphicon-facetime-video"></span></a></span> <p>This keynote video and panel discussion demonstrates how academics and practitioners can partner with community groups to audit real-world algorithmic systems. In 2020, just as the pandemic began, app-based workers for Shipt, Target’s delivery company, began reporting falling earnings. They challenged Shipt’s claims that a new, opaque black-box pay algorithm fairly rewarded workers for “effort”. The video featured in this keynote details how workers partnered with an academic researcher to independently evaluate the new algorithm’s impact through community research and design. They co-designed a tool—an SMS chat bot—that collected and analyzed over 200 workers' pay histories to perform a “real world” audit of Shipt’s new algorithm. This audit showed that the new algorithm effectively cut pay for over 40% of workers.
The resulting findings fueled an organizing campaign that made national headlines. This keynote argues that to have real-world impact, researchers and practitioners of algorithmic fairness should partner with—and be guided by—workers and others who are most affected by automated systems.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="ambrogi"><a href="https://www.linkedin.com/in/drewambrogi/"><strong>Drew Ambrogi (Online)</strong></a></h3> <a href="https://www.linkedin.com/in/drewambrogi/"><img src="/static/images/ambrogi_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Drew Ambrogi serves as Coworker.org’s Digital Director, where he leverages digital tools to support workers’ efforts to build power. Drew also leads Coworker’s projects with gig workers, including worker-led research, campaign strategy and support, and policy work. Before joining Coworker.org in 2019, Drew worked as a digital strategist at a national racial justice organization, where he served as President of its staff union, and on the side as a communications consultant for grassroots community organizations. Drew currently lives and works in Washington, DC.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="Bain"><a href="http://www.gigworkerscollective.org"><strong>Vanessa Bain (In-Person)</strong></a></h3> <a href="http://www.gigworkerscollective.org"><img src="/static/images/Bain_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Vanessa Bain is a former educator from Silicon Valley who began working full-time as an Instacart Shopper in 2016. She has been grassroots organizing Instacart Shoppers for over five years and has organized several national walkouts, boycotts, and direct actions over labor grievances, worker classification, and workplace safety in the grocery gig economy. In January 2020, Bain cofounded Gig Workers’ Collective, a collective that fosters worker-led organizing and advocacy in the gig economy. Bain has presented at UC Berkeley and Cornell, and was a featured presenter at last year's American Institute for Public Health's Annual Meeting on the topic of Organizing the Gig Economy for Social Justice. Bain is passionate about building worker power, policy, intersectional organizing, and social and economic justice. Bain has been profiled by the Washington Post.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="calacci"><a href="https://www.media.mit.edu/people/dcalacci/overview/"><strong>Dan Calacci (In-person)</strong></a></h3> <a href="https://www.media.mit.edu/people/dcalacci/overview/"><img src="/static/images/calacci_headshot.jpeg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Dan is a doctoral candidate at the MIT Media Lab studying how data stewardship and analysis can impact community governance. His current work investigates how data rights and labor rights intersect and explores how to build real-world tools that help workers build power by leveraging the data they generate at work. Their research on community surveillance and tools for gig workers has been discussed in Gizmodo, Wired, Reuters, the New York Times, and other major publications.
They have exhibited digital art that addresses themes such as urban inequality and digital surveillance in galleries around the globe. They also have experience as a startup co-founder, a machine learning researcher, and a data science bootcamp teacher. They received their B.S. from Northeastern University in 2015 and their M.Sc. from MIT in 2019.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="Solis"><a href="https://www.shiptlist.org/"><strong>Willy Solis (In-Person)</strong></a></h3> <a href="https://www.shiptlist.org/"><img src="/static/images/Solis_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Willy Solis is a Shipt Shopper from the Dallas, Texas Metroplex. Willy has a background in construction and has run his own business since 2008. In 2019, Willy began working as a Shipt Shopper, and in January 2020, when Shipt implemented a devastating pay cut, he began grassroots organizing his fellow Shipt Shoppers. In February 2020 he formalized a relationship with Gig Workers' Collective, where he has functioned as the lead organizer for Shipt Shoppers nationally. Organizing successes include securing PPE for hundreds of thousands of Shipt Shoppers and winning the repayment of hundreds of thousands of dollars in misappropriated tips. Willy has partnered with organizations such as Human Rights Watch, MIT, and Coworker on generative worker-centric research and data about the gig economy. Solis is passionate about building worker power, policy, and social and economic justice. Over the past year, Solis has frequently been a featured panelist and presenter on gig economy issues; most recently he was a featured speaker at SXSW. Solis has been profiled by NPR and The Hill.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="spitzberg"><a href="https://twitter.com/daspitzberg"><strong>Danny Spitzberg (Online)</strong></a></h3> <a href="https://twitter.com/daspitzberg"><img src="/static/images/spitzberg_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Danny is a UX researcher for a cooperative economy. He is currently Lead Researcher with Turning Basin Labs, a staffing and training co-op based in California, facilitating worker- and user-led studies. Previously, Danny worked with a variety of community-owned digital tools, including Up & Go, a platform for booking home cleanings in NYC. He also created the Ownership Model Canvas with the co-op accelerator Start.coop, and organized with the Exit to Community Collective, creating resources for startups to build community leadership and ownership. Danny believes everyone can do influential, standardized, and politically imaginative research.</p> </div> </div> <br> <br> <div class="container"> <div class="row"> <h2><strong><u>Day 4</u></strong></h2> </div> </div> <div class="container"> <div class="row"> <h3 id="sphere"><strong>Keynote Panel: Algorithmic Governance of the Public Sphere</strong></h3> <span class="label label-success"><a href="https://youtu.be/GgT5zVsYdDs">Video <span class="glyphicon glyphicon-facetime-video"></span></a></span> <p>Media companies have always curated the public sphere of the political community where they operate. They shape the information environment in which the community deliberates about collective action—deciding what is included and excluded, what is amplified and reduced.
But in the age of mass participation in social media, curation is not just about selecting content; it is about governing the social relations from which that content emerges—shaping not only what we may learn or see, but how we relate to one another as members of a political community. Increasingly, this governance is algorithmic: recommender systems determine what speech to amplify, what to reduce, and (often with some human oversight) what to remove. The social impacts of algorithmic governance of the public sphere are highly contested; the paramount importance of using these tools more effectively to realise our social and political ideals is surely less so. This keynote panel brings together scholars from communication studies, philosophy, law and computer science to better understand the nature of algorithmic governance of online speech, and to propose regulatory and technological paths forward.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="clark"><a href="https://www.meredithdclark.com/"><strong>Meredith Clark (Online)</strong></a></h3> <a href="https://www.meredithdclark.com/"><img src="/static/images/clark_headshot.jpeg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Meredith D. Clark (@MeredithDClark; she/her/hers) is an associate professor in the School of Journalism and the Department of Communication Studies at Northeastern University. Her research focuses on the intersections of race, media, and power. Her first book, “We Tried to Tell Y’all: Black Twitter and Digital Counternarratives,” is forthcoming from Oxford University Press. Her research has also been published in Communication & the Public; Communication, Culture & Critique; Electronic News; Journalism & Mass Communication Educator; Journal of Social Media in Society; New Media & Society; and Social Movement Studies. She’s been quoted in The New York Times, The Washington Post, and The Associated Press, and has been a guest on “Full Frontal with Samantha Bee,” as well as NPR’s “All Things Considered” and “Code Switch,” among other media appearances. She was a 2020-2021 faculty fellow with the Data & Society Research Institute.</p> <p>Clark is currently serving a four-year leadership term with the Association for Education in Journalism & Mass Communication’s Council of Divisions, where she was formerly chair of the Commission on the Status of Women. A longtime member of the National Association of Black Journalists, she was the faculty advisor for the UVA chapter of the NABJ from 2017 to 2021. Clark currently serves as a faculty affiliate at the Center on Digital Culture & Society at the University of Pennsylvania, and academic lead for “Documenting the Now,” a community-based digital archives project supported by the Andrew W. Mellon Foundation. Clark also sits on the advisory boards for Project Information Literacy (Harvard University); the Center for Critical Race and Digital Studies (New York University); and the news nonprofit Report for America.
She oversaw the annual Newsroom Diversity Survey for the News Leaders Association (formerly the American Society of News Editors) from 2018 to 2021.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="gillespie"><a href="https://www.microsoft.com/en-us/research/people/tarleton/"><strong>Tarleton Gillespie (Online)</strong></a></h3> <a href="https://www.microsoft.com/en-us/research/people/tarleton/"><img src="/static/images/Gillespie_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Tarleton Gillespie is a senior principal researcher at Microsoft Research, and an affiliated associate professor in the Department of Communication and Department of Information Science at Cornell University. He is the author of Wired Shut: Copyright and the Shape of Digital Culture (MIT, 2007), co-editor of Media Technologies: Essays on Communication, Materiality, and Society (MIT, 2014), and author of Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (Yale, 2018).</p> </div> </div> <div class="container"> <div class="row"> <h3 id="keller"><a href="https://law.stanford.edu/directory/daphne-keller/"><strong>Daphne Keller (Online)</strong></a></h3> <a href="https://law.stanford.edu/directory/daphne-keller/"><img src="/static/images/Keller_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <ul> <li>Director of the Program on Platform Regulation, Cyber Policy Center</li> <li>Lecturer, Stanford Law School</li> </ul> <p>Daphne Keller's work focuses on platform regulation and Internet users' rights. She has testified before legislatures, courts, and regulatory bodies around the world, and published both <a href="https://btlj.org/data/articles2018/vol33/33_1/Keller_Web.pdf">academically</a> and in the <a href="https://www.nytimes.com/2017/06/12/opinion/making-google-the-censor.html">popular press</a> on topics including platform content moderation practices, constitutional and human rights law, copyright, data protection, and national courts' global takedown orders. Her recent <a href="https://www.hoover.org/research/who-do-you-sue">work</a> focuses on legal protections for users’ free expression rights when state and private power intersect, particularly through platforms’ enforcement of Terms of Service or use of algorithmic ranking and recommendations. Until 2020, Daphne was the Director of Intermediary Liability at Stanford's Center for Internet and Society. She also served until 2015 as Associate General Counsel for Google, where she had primary responsibility for the company’s search products. Daphne has taught Internet law at Stanford, Berkeley, and Duke law schools. She is a graduate of Yale Law School, Brown University, and Head Start.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="kleinberg"><a href="https://www.cs.cornell.edu/home/kleinber/"><strong>Jon Kleinberg (Online)</strong></a></h3> <a href="https://www.cs.cornell.edu/home/kleinber/"><img src="/static/images/Kleinberg_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University.
His research focuses on the interaction of algorithms and networks, the roles they play in large-scale social and information systems, and their broader societal implications. He is a member of the National Academy of Sciences and the National Academy of Engineering, and serves on the US National AI Advisory Committee. He has received MacArthur, Packard, Simons, Sloan, and Vannevar Bush research fellowships, as well as awards including the Harvey Prize, the Nevanlinna Prize, the Newell Award, and the ACM Prize in Computing.</p> </div> </div> <br> <br> <div class="container"> <div class="row"> <h3 id="conver"><strong>Keynote Interview: Karen Hao in Conversation with William Isaac</strong></h3> <span class="label label-success"><a href="https://youtu.be/9u-62Ijtb1I">Video <span class="glyphicon glyphicon-facetime-video"></span></a></span> <span class="label label-success"><a href="https://youtu.be/BL24yOwXahw">Video <span class="glyphicon glyphicon-facetime-video"></span></a></span> <h3 id="hao"><a href="https://www.wsj.com/news/author/karen-hao"><strong>Karen Hao (Online)</strong></a></h3> <a href="https://www.wsj.com/news/author/karen-hao"><img src="/static/images/Hao_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>Karen Hao is a Hong Kong-based reporter at the Wall Street Journal, covering tech & society in China. She was previously a senior editor at MIT Technology Review, covering cutting-edge AI research and its impacts on society. Her work is regularly taught in universities, including Harvard, Stanford, and Yale, and cited in government reports and by Congress. She has won numerous awards, including an ASME Next Award, the highest honor for magazine journalists under 30. In a past life, she was an application engineer at the first startup to spin out of Alphabet's X. She received her B.S. in mechanical engineering with a minor in energy studies from MIT.</p> </div> </div> <div class="container"> <div class="row"> <h3 id="isaac"><a href="https://wsisaac.com/"><strong>William Isaac (Online)</strong></a></h3> <a href="https://wsisaac.com/"><img src="/static/images/isaac_headshot.jpg" hspace="6" align="right" style="border: solid 1px #ddd; width: 30%; max-width: 240px; min-width: 80px !important;"/></a> <p>William Isaac is a Staff Research Scientist at DeepMind, an Advisory Board Member of the Human Rights Data Analysis Group, and a Research Affiliate at the Oxford University Centre for the Governance of AI. His research focuses on the societal impact and governance of emerging technologies. Prior to DeepMind, William served as an Open Society Foundations Fellow. His research has been featured in publications such as Science, the New York Times, and the MIT Technology Review.</p> </div> </div> <footer class="footer navbar-inverted"> <div class="container"> <div class="row"> <div class="col-lg-6"> Sponsored by: <img src="/static/images/acm_logo_tablet.svg" alt="Association for Computing Machinery"> </div> <div class="col-lg-6 text-right"> <p>CC-BY 2024 ACM FAccT Conference.</p> <p>Updated Fri, 15 Nov 2024 09:28:07 +0000.</p> </div> </div> </div> </footer> <script src="/static/js/vendor/jquery-1.10.1.js"></script> <script src="/static/js/vendor/bootstrap.min.js"></script> <script src="/static/js/main.js"></script> </body> </html>