<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Lisa Anne Hendricks</title>
  <style>
    /* Style the body */
    body {
      font-family: Perpetua;
      margin: 0;
    }

    /* Header/Logo Title */
    .header {
      padding: 5px;
      height: 100px;
      text-align: center;
      background: #1abc9c;
      color: white;
      font-size: 20px;
    }

    /* Page Content */
    .container {
      width: 1024px;
      min-height: 200px;
      margin: 0 auto; /* top and bottom, right and left */
      border: 1px hidden #000; /* border: none; */
      text-align: center;
      padding: 1em 1em 1em 1em; /* top, right, bottom, left */
      color: #111e42;
      background: #f5f5f5;
    }
  </style>
</head>
<body style="background-color:#111e42;">
  <br>
  <br>
  <br>
  <div class="container">
    <h1>Lisa Anne Hendricks</h1>
    <table width="90%" align="center" border="0" cellpadding="10">
      <tr>
        <td width="50%" valign="top">
          <figure>
            <img src="img1.jpg" alt="Celebrating after my thesis talk" style="width:80%">
            <figcaption>Celebrating after my thesis talk!</figcaption>
          </figure>
        </td>
        <td width="50%">
          <p>I am a research scientist on the Language Team at DeepMind. I received my PhD from Berkeley in May 2019 and a BSEE (Bachelor of Science in Electrical Engineering) from Rice University in 2013. My research focuses on the intersection of language and vision. I am particularly interested in analyzing why models work (see <a href="https://arxiv.org/pdf/2102.00529.pdf">here</a>), explainability (see <a href="https://arxiv.org/pdf/1807.09685.pdf">here</a> and <a href="https://arxiv.org/pdf/1803.09797.pdf">here</a>), and measuring and mitigating bias in AI models (see <a href="https://arxiv.org/pdf/1803.09797.pdf">here</a>). I recently led the fairness analysis on Gopher, DeepMind's large language model (see <a href="https://arxiv.org/pdf/2112.11446.pdf">here</a>). I have also given several talks on measuring and mitigating bias in language models; see <a href="UCL-ReadingGroup.pdf">here</a> for a version I gave at UCL.
          I have a very <a href="max.jpg">handsome cat</a> named Softmax Power (Max for short).</p>
          <p><a href="hendricks_cv.pdf">CV</a> / <a href="https://github.com/LisaAnne">GitHub</a> / <a href="https://scholar.google.com/citations?user=pvyI8GkAAAAJ&amp;hl=en&amp;oi=ao">Google Scholar</a></p>
          <p>You can reach me by emailing: lisa dot a dot hendricks at gmail dot com.</p>
        </td>
      </tr>
    </table>
    <!--
    <div align="left">
      <h1 align="left">Selected Publications</h1>
      <ul>
        <li align="left">"Probing Multimodal Transformers for Verb Understanding." In <em>Findings of ACL 2021</em> with Aida Nematzadeh.</li>
        <li align="left">"Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers." In <em>TACL 2021</em> with John Mellor, Rosalia Schneider, Jean-Baptiste Alayrac, and Aida Nematzadeh.</li>
        <li align="left">Gender bias</li>
        <li align="left">explanation</li>
        <li align="left">"Localizing Moments in Video with Natural Language." In <em>ICCV 2017</em> with Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Work done during my 2016 Adobe internship, and followed by "Localizing Moments in Video with Natural Language." at EMNLP 2018.</li>
        <li align="left">novel object captioning</li>
      </ul>
    </div>
    -->
  </div>
  <br>
</body>
</html>