<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8"/>
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta name="description" content="AZ Festschrift Resources Page">
  <!-- Theme CSS -->
  <link rel="stylesheet" href="assets/css/style.css">
  <link rel="stylesheet" href="assets/css/responsive.css">
  <title>AZ Festschrift Resources Page</title>
</head>
<body>
<header>
  <br><br>
  <h1>AZ Festschrift Resources Page</h1>
  <hr>
</header>
<br>
<h2>Slides and Recordings</h2>
<div id="resource"></div> <!-- empty placeholder container (appears unused; no script on this page fills it) -->
<br>
<style type="text/css">
  /* Styling for the slides/recordings table below */
  .tg {border-collapse:collapse;border-spacing:0;}
  .tg td{border-color:black;border-style:solid;border-width:1px;font-size:18px;overflow:hidden;padding:10px 5px;word-break:normal;}
  .tg th{border-color:black;border-style:solid;border-width:1px;font-size:18px;font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
  .tg .tg-fymr{border-color:inherit;font-weight:bold;text-align:left;vertical-align:top}
  .tg .tg-0pky{border-color:inherit;text-align:left;vertical-align:top}
</style>
<table class="tg">
<thead>
  <tr>
    <th class="tg-fymr"><b>Thursday Sep 1st, 2022</b></th>
    <th class="tg-fymr"><b>Speaker</b></th>
    <th class="tg-fymr"><b>Title</b></th>
    <th class="tg-fymr"><b>Resources</b></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td class="tg-0pky" rowspan="3">1. Geometry/shape<br>(chair: Andrew Blake)</td>
    <td class="tg-0pky">Joe Mundy</td>
    <td class="tg-0pky">AZ in the age of geometry</td>
    <td class="tg-0pky">--</td>
  </tr>
  <tr>
    <td class="tg-0pky">Lourdes Agapito</td>
    <td class="tg-0pky">Learning 3D Shape Representations from Video</td>
    <td class="tg-0pky">--</td>
  </tr>
  <tr>
    <td class="tg-0pky">Richard Hartley</td>
    <td class="tg-0pky">Shape and luminance modelling using flow fields</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Hartley_Zisserman-festschrift-3.pdf'>[PDF(1MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Richard Hartley.mp4'>[MP4(3.9GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky" rowspan="3">2. Video analysis<br>(chair: Josephine Sullivan)</td>
    <td class="tg-0pky">Dima Damen</td>
    <td class="tg-0pky">Opportunities in Egocentric Video understanding</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Damen_AZ-Sep2022.pdf'>[PDF(7MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Dima Damen.mp4'>[MP4(3.7GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Josef Sivic</td>
    <td class="tg-0pky">Objects, video, from weak to no supervision</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Sivic_AZ_Oxford_Sep2022.pdf'>[PDF(54MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Josef Sivic.mp4'>[MP4(3.5GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Michael Black</td>
    <td class="tg-0pky">Buffy, Lola, and Phil: Looking backwards and forwards at people in video</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Black_AZFestschrift_final_mjb.pdf'>[PDF(109MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Michel Black.mp4'>[MP4(3.9GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky" rowspan="3">3.
Image analysis<br>(chair: Luc Van Gool)</td>
    <td class="tg-0pky">Jean Ponce</td>
    <td class="tg-0pky">Physical models and machine learning for photography and astronomy</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Ponce_az22.pdf'>[PDF(21MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Jean Ponce.mp4'>[MP4(3.5GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">David Forsyth</td>
    <td class="tg-0pky">Intrinsic Images and Equivariance, or Schenectady Redivivus</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Forsyth_slidesforjenny-small.pdf'>[PDF(3MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/David Forsyth.mp4'>[MP4(3.8GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Michael Brady</td>
    <td class="tg-0pky">Quantitative MR image analysis &amp; the metabolic syndrome</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Brady_AZ Talk Fina.pdf'>[PDF(15MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Micheal Brady.mp4'>[MP4(4.9GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky" rowspan="3">4. Visual action<br>(chair: Antonio Criminisi)</td>
    <td class="tg-0pky">Ivan Laptev</td>
    <td class="tg-0pky">Recognizing human actions: the past and the future</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Laptev_Oxford22sept.pdf'>[PDF(6MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Ivan Laptev.mp4'>[MP4(3.5GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Jitendra Malik</td>
    <td class="tg-0pky">Perception and Action</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Malik_RMA-AZ-festschrift.pdf'>[PDF(35MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Jitendra Malik.mp4'>[MP4(4.0GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Pietro Perona</td>
    <td class="tg-0pky">Manipulation teaches perception about abstraction</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Perona2022-NumberSense.pdf'>[PDF(52MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Pietro Perona.mp4'>[MP4(4.2GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky" colspan="4"><b>Friday Sep 2nd, 2022</b></td>
  </tr>
  <tr>
    <td class="tg-0pky" rowspan="3">5. New approaches to learning<br>(chair: Andrew Fitzgibbon)</td>
    <td class="tg-0pky">Hervé Jégou</td>
    <td class="tg-0pky">Learning image representations with coarse and instance-level supervision</td>
    <td class="tg-0pky">--</td>
  </tr>
  <tr>
    <td class="tg-0pky">Tinne Tuytelaars</td>
    <td class="tg-0pky">Unsupervised vision and language grammar induction</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Tuytelaars_AZ Festschrift Aug2022.pdf'>[PDF(3MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Tinne Tuytelaars.mp4'>[MP4(3.2GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Rob Fergus</td>
    <td class="tg-0pky">Data augmentation for image-based reinforcement learning</td>
    <td class="tg-0pky">--</td>
  </tr>
  <tr>
    <td class="tg-0pky" rowspan="6">6.
Student/postdoc talks<br>(chair: Andrew Zisserman)</td>
    <td class="tg-0pky">Tengda Han</td>
    <td class="tg-0pky">Temporal Alignment Networks for Long-term Video</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/TengdaHan-AZ-Fest.pdf'>[PDF(6MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Tengda Han.mp4'>[MP4(2.1GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Liliane Momeni</td>
    <td class="tg-0pky">What can we learn from watching Sign Language interpreted TV?</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Momeni_010922AZ talk.pdf'>[PDF(40MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Liliane Momeni.mp4'>[MP4(2.6GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Chuhan Zhang</td>
    <td class="tg-0pky">Is an object-centric video representation beneficial for transfer?</td>
    <td class="tg-0pky"><a href="https://thor.robots.ox.ac.uk/azfest/slides/public/Zhang object-centric presentation @ AZ's festschrift.pdf">[PDF(3MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Chuhan Zhang.mp4'>[MP4(2.0GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Rhydian Windsor</td>
    <td class="tg-0pky">Multiple Modalities in Spinal Medical Imaging</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Winsor-azfestschrifttalk.pdf'>[PDF(15MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Rhydian Windsor.mp4'>[MP4(1.9GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Shangzhe Wu</td>
    <td class="tg-0pky">Learning 3D Objects in the Wild</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Wu_3d_objects.pdf'>[PDF(7MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Shengzhe Wu.mp4'>[MP4(1.8GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Tomas Jakab</td>
    <td class="tg-0pky">Discovering the structure of objects with autoencoders</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Jakab_object_structure.pdf'>[PDF(8MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Tomas Jakab.mp4'>[MP4(2.2GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky" rowspan="3">7. 3D scene perception<br>(chair: Roberto Cipolla)</td>
    <td class="tg-0pky">Andrea Vedaldi</td>
    <td class="tg-0pky">Unsupervised 3D perception</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/vedaldi22az.key_eXpress.pdf'>[PDF(6MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Andrea Vedaldi.mp4'>[MP4(3.8GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Patrick Perez</td>
    <td class="tg-0pky">Gaining confidence in an uncertain (visual) world</td>
    <td class="tg-0pky">--</td>
  </tr>
  <tr>
    <td class="tg-0pky">Alyosha Efros</td>
    <td class="tg-0pky">Reconciling CV with PR: the quest for qualitative scene understanding</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Efros_AZ Fest 2022.pdf'>[PDF(15MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Alexei Efros.mp4'>[MP4(3.7GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky" rowspan="3">8.
Victory lap<br>(chair: Philip Torr)</td>
    <td class="tg-0pky">Michal Irani</td>
    <td class="tg-0pky">Reconstructing Training Data from Trained Neural Networks</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Irani_slides.pdf'>[PDF(6MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Michal Irani.mp4'>[MP4(3.7GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Bill Freeman</td>
    <td class="tg-0pky">Finding pareidolic Andrew</td>
    <td class="tg-0pky"><a href='https://thor.robots.ox.ac.uk/azfest/slides/public/Freeman_AZpareidolia.pdf'>[PDF(254MB)]</a> <a href='https://thor.robots.ox.ac.uk/azfest/recordings/public/Bill Freeman.mp4'>[MP4(3.4GB)]</a></td>
  </tr>
  <tr>
    <td class="tg-0pky">Andrew Zisserman</td>
    <td class="tg-0pky">Wrap up</td>
    <td class="tg-0pky">--</td>
  </tr>
</tbody>
</table>
<br>
<h2>Photo Gallery</h2>
<a href="photos/group/IMG_2941.JPG">[Group photo]</a>
<a href="gallery_SEP1_AM.html">[Sep-01-AM]</a>
<a href="gallery_SEP1_PM.html">[Sep-01-PM]</a>
<a href="gallery_SEP2_AM.html">[Sep-02-AM]</a>
<a href="gallery_SEP2_PM.html">[Sep-02-PM]</a>
<a href="name_badge.html">[Name badge]</a>
<br> <br> <br>
<h2>Acknowledgement</h2>
<p>
Thanks to the organizers, student volunteers, presenters, session chairs, and all the attendees for making this event happen.
</p>
<p>Credits:</p>
<ul>
  <li>Key organizers: Andrew Blake, Jenny Hu, Andrew Fitzgibbon, Phil Torr</li>
  <li>Name badge and website design, head of volunteers: Tengda Han</li>
  <li>Photos: Abhishek Dutta, Junyu Xie, Robert McCraith</li>
  <li>Recording: Jaesung Huh, Robert McCraith</li>
  <li>Video editing: Robert McCraith</li>
  <li>Reception and other support: Charig Yang, Chuhan Zhang, David Pinto, Guanqi Zhan, Marian Longa, Prannay Kaul, Shangzhe Wu, Tim Franzmeyer, Tomas Jakab</li>
</ul>
<br> <br> <br> <br> <br>
<footer>
  <hr>
  <p>
  © Visual Geometry Group: <a href="https://www.robots.ox.ac.uk/~vgg/">https://www.robots.ox.ac.uk/~vgg/</a>. All rights reserved.
  </p>
  <p>Contact: az@robots.ox.ac.uk</p>
  <br><br><br><br>
</footer>
<!-- Page CSS (kept after the .tg block above so these rules take precedence) -->
<style>
  /* Global layout and typography */
  body { font-family: 'Muli', sans-serif; max-width: 1200px; margin: auto; zoom: 80%; font-size: 18px; }
  .dark { background-color: #222; color: #e6e6e6; }
  pre.terminal { color: #222; background-color: #fafafa; }
  .h1, .h2, .h3, .h4, .h5, .h6, h1, h2, h3, h4, h5, h6 { color: #043bb9; }
  p { font-size: 18px; }
  a { text-decoration: none; }
  /* Table defaults; this 16px overrides the 18px set in the .tg block above */
  table { border-collapse: collapse; width: 100%; }
  .tg td { font-size: 16px; }
  th, td { text-align: left; padding: 8px; }
  /* tr:nth-child(even) { background-color: #f5f5f5; } */
  pre { font-size: large; background-color: #f5f5f5; }
  footer { text-align: right; }
</style>
</body>
</html>