<!DOCTYPE html><!-- Last Published: Sun Dec 15 2024 00:58:13 GMT+0000 (Coordinated Universal Time) --><html data-wf-domain="www.matthewtancik.com" data-wf-page="5a1cad67b5d43a0001428a10" data-wf-site="51e0d73d83d06baa7a00000f"><head><meta charset="utf-8"/><title>Matthew Tancik</title><meta content="Personal website of Matthew Tancik featuring projects and artwork." name="description"/><meta content="width=device-width, initial-scale=1" name="viewport"/><link href="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/css/matthewtancik.webflow.68c2f4e85.min.css" rel="stylesheet" type="text/css"/><link href="https://fonts.googleapis.com" rel="preconnect"/><link href="https://fonts.gstatic.com" rel="preconnect" crossorigin="anonymous"/><script src="https://ajax.googleapis.com/ajax/libs/webfont/1.6.26/webfont.js" type="text/javascript"></script><script type="text/javascript">WebFont.load({ google: { families: ["Lato:100,100italic,300,300italic,400,400italic,700,700italic,900,900italic","Montserrat:100,100italic,200,200italic,300,300italic,400,400italic,500,500italic,600,600italic,700,700italic,800,800italic,900,900italic","Ubuntu:300,300italic,400,400italic,500,500italic,700,700italic","Open Sans:300,300italic,400,400italic,600,600italic,700,700italic,800,800italic","Changa One:400,400italic","Varela Round:400","Bungee Shade:regular","Roboto:300,regular,500","Bungee Outline:regular"] }});</script><script type="text/javascript">!function(o,c){var n=c.documentElement,t=" w-mod-";n.className+=t+"js",("ontouchstart"in o||o.DocumentTouch&&c instanceof DocumentTouch)&&(n.className+=t+"touch")}(window,document);</script><link href="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5aaddbdee8d43ceeae2f2e4c_favicon-32x32.png" rel="shortcut icon" type="image/x-icon"/><link href="https://y7v4p6k4.ssl.hwcdn.net/51db7fcf29a6f36b2a000001/51e06d302f5394c87600002a_webclip-comet.png" rel="apple-touch-icon"/><script async="" src="https://www.googletagmanager.com/gtag/js?id=G-51T1ZNPMZT"></script><script type="text/javascript">window.dataLayer = window.dataLayer || [];function gtag(){dataLayer.push(arguments);}gtag('js', new Date());gtag('config', 'G-51T1ZNPMZT', {'anonymize_ip': false});</script><style> .wf-loading * { opacity: 0; } </style><META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW"></head><body><div class="header-row w-row"><div class="column-4 w-col w-col-5"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/601386af5416bcd8b2a567cb_logo_1.svg" loading="lazy" style="-webkit-transform:translate3d(24px, 0, 0) scale3d(1, 1, 1) rotateX(0) rotateY(0) rotateZ(0) skew(0, 0);-moz-transform:translate3d(24px, 0, 0) scale3d(1, 1, 1) rotateX(0) rotateY(0) rotateZ(0) skew(0, 0);-ms-transform:translate3d(24px, 0, 0) scale3d(1, 1, 1) rotateX(0) rotateY(0) rotateZ(0) skew(0, 0);transform:translate3d(24px, 0, 0) scale3d(1, 1, 1) rotateX(0) rotateY(0) rotateZ(0) skew(0, 0);opacity:0" alt="" class="logo_svg"/><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/601386dde702f1b6584fa36d_logo_2.svg" loading="lazy" style="-webkit-transform:translate3d(-24px, 0, 0) scale3d(1, 1, 1) rotateX(0) rotateY(0) rotateZ(0) skew(0, 0);-moz-transform:translate3d(-24px, 0, 0) scale3d(1, 1, 1) rotateX(0) rotateY(0) rotateZ(0) skew(0, 0);-ms-transform:translate3d(-24px, 0, 0) scale3d(1, 1, 1) rotateX(0) rotateY(0) rotateZ(0) skew(0, 0);transform:translate3d(-24px, 0, 0) scale3d(1, 1, 1) rotateX(0) rotateY(0) rotateZ(0) skew(0, 0);opacity:0" alt="" class="logo_svg logo_over"/></div><div 
class="empty-column w-col w-col-1"></div><div class="nav-bar w-col w-col-6"><div class="nav-header w-container"><div class="w-row"><div class="w-col w-col-4"><a href="/about-me" class="nav-link">About me</a></div><div class="w-col w-col-4"><a href="/art" class="nav-link">Art/Projects</a></div><div class="w-col w-col-4"><a href="/" aria-current="page" class="nav-link last highlight w--current">Publications</a></div></div></div></div></div><div class="container w-container"><h2 class="pub_header">Publications</h2><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6641080896ea68a4a4fe2c5d_Screen%20Shot%202024-05-12%20at%2011.17.31%20AM.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6641080896ea68a4a4fe2c5d_Screen%20Shot%202024-05-12%20at%2011.17.31%20AM-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6641080896ea68a4a4fe2c5d_Screen%20Shot%202024-05-12%20at%2011.17.31%20AM.png 523w" alt="" class="pub_image"/><div class="number pub_title">GARField: Group Anything with Radiance Fields<br/></div><a href="https://scholar.google.com/citations?user=ODr5lMgAAAAJ&hl=en" target="_blank" class="coming-soon names">Chung Min Kim*,</a><a href="https://github.com/WMXwmxwalter" target="_blank" class="coming-soon names">Mingxuan Wu*,</a><a href="https://kerrj.github.io/" target="_blank" class="coming-soon names">Justin Kerr*,</a><a href="https://goldberg.berkeley.edu/" target="_blank" class="coming-soon names">Ken Goldberg,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://people.eecs.berkeley.edu/~kanazawa/" target="_blank" class="coming-soon names">Angjoo Kanazawa</a><p class="paragraph">CVPR (2024)</p><p class="paragraph newline"></p><a href="https://arxiv.org/abs/2401.09419" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://www.garfield.studio/" target="_blank" class="coming-soon lenselinks">Project Website</a><p class="paragraph-2 pub_para">Hierarchical grouping in 3D by training a scale-conditioned affinity field from multi-level masks</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/63e94c5f3f712739574ebd52_nerfstudio_ex.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/63e94c5f3f712739574ebd52_nerfstudio_ex-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/63e94c5f3f712739574ebd52_nerfstudio_ex-p-800.png 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/63e94c5f3f712739574ebd52_nerfstudio_ex.png 1046w" alt="" class="pub_image"/><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/63e94b983f71274f494eb6db_logo_full_logo_light.png" loading="lazy" width="190" sizes="(max-width: 479px) 76vw, 190px" alt="" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/63e94b983f71274f494eb6db_logo_full_logo_light-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/63e94b983f71274f494eb6db_logo_full_logo_light-p-800.png 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/63e94b983f71274f494eb6db_logo_full_logo_light-p-1080.png 1080w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/63e94b983f71274f494eb6db_logo_full_logo_light.png 2017w" 
class="nerfstudio_logo"/><a href="#" class="coming-soon names my_name">Matthew Tancik*,</a><a href="https://ethanweber.me/" target="_blank" class="coming-soon names">Ethan Weber*,</a><a href="http://people.eecs.berkeley.edu/~evonne_ng/" target="_blank" class="coming-soon names">Evonne Ng*,</a><a href="https://www.liruilong.cn/" target="_blank" class="coming-soon names">Ruilong Li,</a><a href="https://github.com/brentyi" target="_blank" class="coming-soon names">Brent Yi,</a><a href="https://kerrj.github.io/" target="_blank" class="coming-soon names">Justin Kerr,</a><a href="https://www.linkedin.com/in/terrance-wang/" target="_blank" class="coming-soon names">Terrance Wang,</a><a href="https://akristoffersen.com/" target="_blank" class="coming-soon names">Alexander Kristoffersen,</a><a href="https://www.linkedin.com/in/jake-austin-371770199/" target="_blank" class="coming-soon names">Jake Austin,</a><a href="https://kamyar.io/" target="_blank" class="coming-soon names">Kamyar Salahi,</a><a href="https://abhikahuja.com/" target="_blank" class="coming-soon names">Abhik Ahuja,</a><a href="https://mcallisterdavid.com/" target="_blank" class="coming-soon names">David McAllister,</a><a href="https://people.eecs.berkeley.edu/~kanazawa/" target="_blank" class="coming-soon names">Angjoo Kanazawa</a><p class="paragraph">SIGGRAPH (2023)</p><p class="paragraph newline"></p><a href="https://docs.nerf.studio/" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://github.com/nerfstudio-project/nerfstudio" target="_blank" class="coming-soon lenselinks">Github</a><a href="https://arxiv.org/abs/2302.04264" target="_blank" class="coming-soon lenselinks">arXiv</a><p class="paragraph-2 pub_para">A Modular Framework for Neural Radiance Field Development.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/64168138d18f1b41a6765536_lerf_meta_img.jpeg" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/64168138d18f1b41a6765536_lerf_meta_img-p-500.jpeg 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/64168138d18f1b41a6765536_lerf_meta_img-p-800.jpeg 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/64168138d18f1b41a6765536_lerf_meta_img-p-1080.jpeg 1080w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/64168138d18f1b41a6765536_lerf_meta_img-p-1600.jpeg 1600w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/64168138d18f1b41a6765536_lerf_meta_img-p-2000.jpeg 2000w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/64168138d18f1b41a6765536_lerf_meta_img-p-2600.jpeg 2600w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/64168138d18f1b41a6765536_lerf_meta_img-p-3200.jpeg 3200w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/64168138d18f1b41a6765536_lerf_meta_img.jpeg 3456w" alt="" class="pub_image"/><div class="number pub_title">LERF:聽Language Embedded Radiance Fields<br/></div><a href="https://kerrj.github.io/" target="_blank" class="coming-soon names">Justin Kerr*,</a><a href="https://scholar.google.com/citations?user=ODr5lMgAAAAJ&hl=en" target="_blank" class="coming-soon names">Chung Min Kim*,</a><a href="https://goldberg.berkeley.edu/" target="_blank" class="coming-soon names">Ken Goldberg,</a><a href="https://people.eecs.berkeley.edu/~kanazawa/" target="_blank" class="coming-soon names">Angjoo Kanazawa,</a><a href="#" 
class="coming-soon names my_name">Matthew Tancik</a><p class="paragraph">ICCV (2023) <span class="extra_color">Oral</span></p><p class="paragraph newline"></p><a href="https://arxiv.org/abs/2303.09553" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://www.lerf.io/" target="_blank" class="coming-soon lenselinks">Project Website</a><p class="paragraph-2 pub_para">Grounding CLIP vectors volumetrically inside a NeRF allows flexible natural language queries in 3D.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6428b896a3e86ecd0c2451a0_albert_einstein.jpg" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6428b896a3e86ecd0c2451a0_albert_einstein-p-500.jpg 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6428b896a3e86ecd0c2451a0_albert_einstein-p-800.jpg 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6428b896a3e86ecd0c2451a0_albert_einstein-p-1080.jpg 1080w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6428b896a3e86ecd0c2451a0_albert_einstein.jpg 1596w" alt="" class="pub_image"/><div class="number pub_title">Instruct-NeRF2NeRF:聽Editing 3D Scenes with Instructions<br/></div><a href="https://www.ayaanzhaque.me/" target="_blank" class="coming-soon names">Ayaan Haque,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://people.eecs.berkeley.edu/~efros/" target="_blank" class="coming-soon names">Alexei Efros,</a><a href="https://holynski.org/" target="_blank" class="coming-soon names">Aleksander Ho艂y艅ski,</a><a href="https://people.eecs.berkeley.edu/~kanazawa/" target="_blank" class="coming-soon names">Angjoo Kanazawa,</a><p class="paragraph">ICCV (2023) <span class="extra_color">Oral</span></p><p class="paragraph newline"></p><a href="https://arxiv.org/abs/2303.12789" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://instruct-nerf2nerf.github.io/" target="_blank" class="coming-soon lenselinks">Project Website</a><p class="paragraph-2 pub_para">Instruct-NeRF2NeRF enables instruction-based editing of NeRFs via a 2D diffusion model.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/647cd670d3cfe3869a455406_nerfacc.png" alt="" class="pub_image"/><div class="number pub_title">NerfAcc: Efficient Sampling Accelerates NeRFs<br/></div><a href="https://www.liruilong.cn/" target="_blank" class="coming-soon names">Ruilong Li,</a><a href="https://hangg7.com/" target="_blank" class="coming-soon names">Hang Gao,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://people.eecs.berkeley.edu/~kanazawa/" target="_blank" class="coming-soon names">Angjoo Kanazawa</a><p class="paragraph">ICCV (2023)</p><p class="paragraph newline"></p><a href="https://arxiv.org/abs/2210.04847" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://www.nerfacc.com/en/stable/" target="_blank" class="coming-soon lenselinks">Project Website</a><p class="paragraph-2 pub_para">NerfAcc integrates advanced efficient sampling techniques that lead to significant speedups in training various recent NeRF papers with minimal modifications to existing codebases.</p></div><div class="row_pub w-clearfix"><img 
src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/647cd76241650b4e16ee77b5_Screen%20Shot%202023-06-04%20at%2011.26.15%20AM.jpg" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/647cd76241650b4e16ee77b5_Screen%20Shot%202023-06-04%20at%2011.26.15%20AM-p-500.jpg 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/647cd76241650b4e16ee77b5_Screen%20Shot%202023-06-04%20at%2011.26.15%20AM-p-800.jpg 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/647cd76241650b4e16ee77b5_Screen%20Shot%202023-06-04%20at%2011.26.15%20AM.jpg 880w" alt="" class="pub_image"/><div class="number pub_title">Nerfbusters:聽Removing Ghostly Artifacts from Casually Captured NeRFs<br/></div><a href="https://frederikwarburg.github.io/" class="coming-soon names">Frederik Warburg,</a><a href="https://ethanweber.me/" target="_blank" class="coming-soon names">Ethan Weber*,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://holynski.org/" target="_blank" class="coming-soon names">Aleksander Ho艂y艅ski,</a><a href="https://people.eecs.berkeley.edu/~kanazawa/" target="_blank" class="coming-soon names">Angjoo Kanazawa</a><p class="paragraph">ICCV (2023)</p><p class="paragraph newline"></p><a href="https://arxiv.org/abs/2304.10532" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://ethanweber.me/nerfbusters/" target="_blank" class="coming-soon lenselinks">Project Website</a><p class="paragraph-2 pub_para">Nerfbusters proposes an evaluation procedure for in-the-wild NeRFs, and presents a method that uses a 3D diffusion prior to clean NeRFs.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6416869d826a428280ecb797_evonerf.jpg" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6416869d826a428280ecb797_evonerf-p-500.jpg 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6416869d826a428280ecb797_evonerf-p-800.jpg 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6416869d826a428280ecb797_evonerf.jpg 942w" alt="" class="pub_image"/><div class="number pub_title">Evo-NeRF: Evolving NeRF for Sequential Robot Grasping<br/></div><a href="https://kerrj.github.io/" target="_blank" class="coming-soon names">Justin Kerr,</a><a href="https://max-fu.github.io/" target="_blank" class="coming-soon names">Letian Fu,</a><a href="https://sites.google.com/site/huanghuang9729/home" target="_blank" class="coming-soon names">Huang Huang,</a><a href="https://yahavigal.github.io/" target="_blank" class="coming-soon names">Yahav Avigal,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://ichnow.ski/" target="_blank" class="coming-soon names">Jeffrey Ichnowski,</a><a href="https://people.eecs.berkeley.edu/~kanazawa/" target="_blank" class="coming-soon names">Angjoo Kanazawa,</a><a href="https://goldberg.berkeley.edu/" target="_blank" class="coming-soon names">Ken Goldberg</a><p class="paragraph newline"></p><p class="paragraph">CoRL (2022) <span class="extra_color">Oral</span></p><a href="https://openreview.net/forum?id=Bxr45keYrf" target="_blank" class="coming-soon lenselinks">OpenReview</a><a href="https://sites.google.com/view/evo-nerf" target="_blank" class="coming-soon lenselinks">Project 
Website</a><p class="paragraph-2 pub_para">We show that by training NeRFs incrementally over a stream of images, they can be used for robotic grasping tasks. They are particularly useful in tasks involving transparent objects, for which geometry is traditionally hard to compute.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/62e357d32fb0a05bc553e621_sitcoms2.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/62e357d32fb0a05bc553e621_sitcoms2-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/62e357d32fb0a05bc553e621_sitcoms2.png 600w" alt="" class="pub_image"/><div class="number pub_title">The One Where They Reconstructed<br/>3D Humans and Environments in TV Shows<br/></div><a href="https://geopavlakos.github.io/" target="_blank" class="coming-soon names">Georgios Pavlakos*,</a><a href="https://ethanweber.me/" target="_blank" class="coming-soon names">Ethan Weber*,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://people.eecs.berkeley.edu/~kanazawa/" target="_blank" class="coming-soon names">Angjoo Kanazawa</a><p class="paragraph newline"></p><p class="paragraph">ECCV (2022)</p><a href="https://arxiv.org/abs/2207.14279" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="http://ethanweber.me/sitcoms3D/" target="_blank" class="coming-soon lenselinks">Project Website</a><p class="paragraph-2 pub_para">We show that it is possible to reconstruct TV shows in 3D. Further, reasoning about humans and their environment in 3D enables a broad range of downstream applications: re-identification, gaze estimation, cinematography, and image editing.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/62047a1f1339679bac264ae1_grace_large_dark_blue.jpg" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/62047a1f1339679bac264ae1_grace_large_dark_blue-p-1080.jpeg 1080w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/62047a1f1339679bac264ae1_grace_large_dark_blue.jpg 1200w" alt="" class="pub_image"/><div class="number pub_title">Block-NeRF: Scalable Large Scene Neural View Synthesis<br/></div><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="http://casser.io/" target="_blank" class="coming-soon names">Vincent Casser,</a><a href="https://sites.google.com/site/skywalkeryxc/" target="_blank" class="coming-soon names">Xinchen Yan,</a><a href="https://scholar.google.com/citations?user=5mJUkI4AAAAJ&hl=en" target="_blank" class="coming-soon names">Sabeek Pradhan,</a><a href="https://bmild.github.io/" target="_blank" class="coming-soon names">Ben Mildenhall,</a><a href="https://pratulsrinivasan.github.io/" target="_blank" class="coming-soon names">Pratul P. Srinivasan,</a><a href="https://jonbarron.info/" target="_blank" class="coming-soon names">Jonathan T. 
Barron,</a><a href="https://www.henrikkretzschmar.com/" target="_blank" class="coming-soon names">Henrik Kretzschmar</a><p class="paragraph newline"></p><p class="paragraph">CVPR (2022) <span class="extra_color">Oral</span></p><a href="https://arxiv.org/abs/2202.05263" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://waymo.com/research/block-nerf/" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://www.youtube.com/watch?v=6lGMCAzBzOQ" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">We present a variant of Neural Radiance Fields that can represent large-scale environments. We build a grid of Block-NeRFs from 2.8 million images to create the largest neural scene representation to date, capable of rendering an entire neighborhood of San Francisco.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/61b2ea116f602466bd9f244b_real_lego.jpg" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/61b2ea116f602466bd9f244b_real_lego-p-500.jpeg 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/61b2ea116f602466bd9f244b_real_lego-p-1080.jpeg 1080w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/61b2ea116f602466bd9f244b_real_lego.jpg 1474w" alt="" class="pub_image"/><div class="number pub_title">Plenoxels: Radiance Fields without Neural Networks<br/></div><a href="https://alexyu.net/" target="_blank" class="coming-soon names">Alex Yu*,</a><a href="https://people.eecs.berkeley.edu/~sfk/" target="_blank" class="coming-soon names">Sara Fridovich-Keil*,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://www.linkedin.com/in/qinhong-chen/" target="_blank" class="coming-soon names">Qinhong Chen,</a><a href="http://people.eecs.berkeley.edu/~brecht/" target="_blank" class="coming-soon names">Benjamin Recht,</a><a href="https://people.eecs.berkeley.edu/~kanazawa/" target="_blank" class="coming-soon names">Angjoo Kanazawa</a><p class="paragraph newline"></p><p class="paragraph">CVPR (2022) <span class="extra_color">Oral</span></p><a href="https://arxiv.org/abs/2112.05131" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://alexyu.net/plenoxels" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://youtu.be/ElnuwpQEqTA" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">We propose a view-dependent sparse voxel model, Plenoxel (plenoptic volume element), that can optimize to the same fidelity as Neural Radiance Fields (NeRFs) without any neural networks. 
Our typical optimization time is 11 minutes on a single GPU, a speedup of two orders of magnitude compared to NeRF.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/60620cc364bf2b1b75b34855_plenoctrees_demo.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/60620cc364bf2b1b75b34855_plenoctrees_demo-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/60620cc364bf2b1b75b34855_plenoctrees_demo.png 749w" alt="" class="pub_image"/><div class="number pub_title">PlenOctrees for Real-time Rendering of Neural Radiance Fields<br/></div><a href="https://alexyu.net/" target="_blank" class="coming-soon names">Alex Yu,</a><a href="http://www.liruilong.cn/" target="_blank" class="coming-soon names">Ruilong Li,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="http://www.hao-li.com/" target="_blank" class="coming-soon names">Hao Li,</a><a href="https://scholar.google.com/citations?user=6H0mhLUAAAAJ&hl=en" target="_blank" class="coming-soon names">Ren Ng,</a><a href="https://people.eecs.berkeley.edu/~kanazawa/" target="_blank" class="coming-soon names">Angjoo Kanazawa</a><p class="paragraph newline"></p><p class="paragraph">ICCV (2021) <span class="extra_color">Oral</span></p><a href="https://arxiv.org/abs/2103.14024" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://alexyu.net/plenoctrees" target="_blank" class="coming-soon lenselinks">Demo / Project Website</a><a href="https://www.youtube.com/watch?v=obrmH1T5mfI" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">We introduce a method to render Neural Radiance Fields (NeRFs) in real time without sacrificing quality. Our method preserves the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/60620a9d3d0e0cd615defe0c_mipnerf_ipe.png" alt="" class="pub_image"/><div class="number pub_title">Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields<br/></div><a href="https://jonbarron.info/" target="_blank" class="coming-soon names">Jonathan T. Barron,</a><a href="https://bmild.github.io/" target="_blank" class="coming-soon names">Ben Mildenhall,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://phogzone.com/" target="_blank" class="coming-soon names">Peter Hedman,</a><a href="http://www.ricardomartinbrualla.com/" target="_blank" class="coming-soon names">Ricardo Martin-Brualla,</a><a href="https://pratulsrinivasan.github.io/" target="_blank" class="coming-soon names">Pratul P. 
Srinivasan</a><p class="paragraph newline"></p><p class="paragraph">ICCV (2021) <span class="extra_color">Oral - Best Paper Honorable Mention</span></p><a href="https://arxiv.org/abs/2103.13415" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="http://jonbarron.info/mipnerf" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://youtu.be/EpH175PY1A0" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. We prefilter the positional encoding function and train NeRF to generate anti-aliased renderings.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6067ec19b9a185c7eddcbdc8_dietnerf.jpg" width="200" height="200" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, 200px" alt="" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6067ec19b9a185c7eddcbdc8_dietnerf-p-500.jpeg 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/6067ec19b9a185c7eddcbdc8_dietnerf.jpg 600w" class="pub_image"/><div class="number pub_title">Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis<br/></div><a href="https://www.ajayj.com/" target="_blank" class="coming-soon names">Ajay Jain,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://people.eecs.berkeley.edu/~pabbeel/" target="_blank" class="coming-soon names">Pieter Abbeel</a><p class="paragraph newline"></p><p class="paragraph">ICCV (2021)</p><a href="https://arxiv.org/abs/2104.00677" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://www.ajayj.com/dietnerf" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://youtu.be/RF_3hsNizqw" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">We introduce an auxiliary semantic consistency loss that encourages realistic renderings at novel poses. Our semantic loss allows us to supervise DietNeRF from arbitrary poses. We extract these semantics using a pre-trained visual encoder such as CLIP.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5fc91f737da008318e405b20_trevi_web.jpeg" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5fc91f737da008318e405b20_trevi_web-p-500.jpeg 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5fc91f737da008318e405b20_trevi_web.jpeg 836w" alt="" class="pub_image"/><div class="number pub_title">Learned Initializations for Optimizing Coordinate-Based Neural Representations<br/></div><a href="#" class="coming-soon names my_name">Matthew Tancik*,</a><a href="https://bmild.github.io/" target="_blank" class="coming-soon names">Ben Mildenhall*,</a><a href="#" class="coming-soon names">Terrance Wang,</a><a href="#" class="coming-soon names">Divi Schmidt,</a><a href="https://pratulsrinivasan.github.io/" target="_blank" class="coming-soon names">Pratul P. Srinivasan,</a><a href="https://jonbarron.info/" target="_blank" class="coming-soon names">Jonathan T. 
Barron,</a><a href="https://scholar.google.com/citations?user=6H0mhLUAAAAJ&hl=en" target="_blank" class="coming-soon names">Ren Ng</a><p class="paragraph newline"></p><p class="paragraph">CVPR (2021) <span class="extra_color">Oral</span></p><a href="https://arxiv.org/abs/2012.02189" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="/learnit" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://github.com/tancik/learnit" target="_blank" class="coming-soon lenselinks">Code</a><a href="https://youtu.be/A-r9itCzcyo" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">We find that standard meta-learning algorithms for weight initialization can enable faster convergence during optimization and can serve as a strong prior over the signal class being modeled, resulting in better generalization when only partial observations of a given signal are available.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5fc93a334a602031427d0591_000034.jpg" alt="" class="pub_image"/><div class="number pub_title">pixelNeRF: Neural Radiance Fields from One or Few Images<br/></div><a href="https://alexyu.net/" target="_blank" class="coming-soon names">Alex Yu,</a><a href="https://people.eecs.berkeley.edu/~vye/" target="_blank" class="coming-soon names">Vickie Ye,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://people.eecs.berkeley.edu/~kanazawa/" target="_blank" class="coming-soon names">Angjoo Kanazawa</a><p class="paragraph newline"></p><p class="paragraph">CVPR (2021)<span class="extra_color"></span></p><a href="http://arxiv.org/abs/2012.02190" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://alexyu.net/pixelnerf/" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://github.com/sxyu/pixel-nerf" target="_blank" class="coming-soon lenselinks">Code</a><a href="https://youtu.be/voebZx7f32g" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">We propose a learning framework that predicts a continuous neural scene representation from one or few input images by conditioning on image features encoded by a convolutional neural network.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5fceacd455c9361965425994_hotdog.jpg" alt="" class="pub_image"/><div class="number pub_title">NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis<br/></div><a href="https://pratulsrinivasan.github.io/" target="_blank" class="coming-soon names">Pratul P. Srinivasan,</a><a href="https://boyangdeng.com/" target="_blank" class="coming-soon names">Boyang Deng,</a><a href="http://people.csail.mit.edu/xiuming/" target="_blank" class="coming-soon names">Xiuming Zhang,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://bmild.github.io/" target="_blank" class="coming-soon names">Ben Mildenhall,</a><a href="https://jonbarron.info/" target="_blank" class="coming-soon names">Jonathan T. 
Barron,</a><p class="paragraph newline"></p><p class="paragraph">CVPR (2021)<span class="extra_color"></span></p><a href="https://arxiv.org/abs/2012.03927" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://people.eecs.berkeley.edu/~pratul/nerv/" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://www.youtube.com/watch?v=4XyDdvhhjVo" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">We recover relightable NeRF-like models using neural approximations of expensive visibility integrals, so we can simulate complex volumetric light transport during training.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5f396b79a5fdcb7ba61734bd_fourfeat.jpeg" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5f396b79a5fdcb7ba61734bd_fourfeat-p-500.jpeg 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5f396b79a5fdcb7ba61734bd_fourfeat-p-800.jpeg 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5f396b79a5fdcb7ba61734bd_fourfeat.jpeg 1024w" alt="" class="pub_image"/><div class="number pub_title">Fourier Features Let Networks Learn<br/>High Frequency Functions in Low Dimensional Domains<br/></div><a href="#" class="coming-soon names my_name">Matthew Tancik*,</a><a href="https://pratulsrinivasan.github.io/" target="_blank" class="coming-soon names">Pratul P. Srinivasan*,</a><a href="https://bmild.github.io/" target="_blank" class="coming-soon names">Ben Mildenhall*,</a><a href="https://people.eecs.berkeley.edu/~sfk/" target="_blank" class="coming-soon names">Sara Fridovich-Keil,</a><a href="https://www.csua.berkeley.edu/~rnithin/" target="_blank" class="coming-soon names">Nithin Raghavan,</a><a href="https://scholar.google.com/citations?user=lvA86MYAAAAJ&hl=en" target="_blank" class="coming-soon names">Utkarsh Singhal,</a><a href="http://cseweb.ucsd.edu/~ravir/" target="_blank" class="coming-soon names">Ravi Ramamoorthi,</a><a href="https://jonbarron.info/" target="_blank" class="coming-soon names">Jonathan T. Barron,</a><a href="https://scholar.google.com/citations?user=6H0mhLUAAAAJ&hl=en" target="_blank" class="coming-soon names">Ren Ng</a><p class="paragraph newline"></p><p class="paragraph">NeurIPS (2020) <span class="extra_color">Spotlight</span></p><a href="https://arxiv.org/abs/2006.10739" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://bmild.github.io/fourfeat/" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://github.com/tancik/fourier-feature-networks" target="_blank" class="coming-soon lenselinks">Code</a><a href="https://youtu.be/nVA6K6Sn2S4" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains. These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art results by using MLPs to represent complex 3D objects and scenes. 
</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5e73b3b80a912b9d1033f5a7_hotdog_cropped.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5e73b3b80a912b9d1033f5a7_hotdog_cropped-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5e73b3b80a912b9d1033f5a7_hotdog_cropped.png 714w" alt="" class="pub_image"/><div class="number pub_title">NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis<br/></div><a href="https://bmild.github.io/" target="_blank" class="coming-soon names">Ben Mildenhall*,</a><a href="https://pratulsrinivasan.github.io/" target="_blank" class="coming-soon names">Pratul P. Srinivasan*,</a><a href="#" class="coming-soon names my_name">Matthew Tancik*,</a><a href="https://jonbarron.info/" target="_blank" class="coming-soon names">Jonathan T. Barron,</a><a href="http://cseweb.ucsd.edu/~ravir/" target="_blank" class="coming-soon names">Ravi Ramamoorthi,</a><a href="https://scholar.google.com/citations?user=6H0mhLUAAAAJ&hl=en" target="_blank" class="coming-soon names">Ren Ng</a><p class="paragraph newline"></p><p class="paragraph">ECCV (2020) <span class="extra_color">Oral - Best Paper Honorable Mention</span></p><a href="https://arxiv.org/abs/2003.08934" class="coming-soon lenselinks">arXiv</a><a href="/nerf" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://github.com/bmild/nerf" class="coming-soon lenselinks">Code</a><a href="https://youtu.be/JuH79E8rdKc" target="_blank" class="coming-soon lenselinks">Video</a><a href="https://github.com/yenchenlin/awesome-NeRF" target="_blank" class="coming-soon lenselinks">Follow-ups</a><p class="paragraph-2 pub_para">We propose an algorithm that represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (胃, 蠁)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. 
With this representation we achieve state-of-the-art results for synthesizing novel views of scenes from a sparse set of input views.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5ca3ab6e03f6ab43b9da9c82_overview-01.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5ca3ab6e03f6ab43b9da9c82_overview-01-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5ca3ab6e03f6ab43b9da9c82_overview-01-p-800.png 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5ca3ab6e03f6ab43b9da9c82_overview-01.png 867w" alt="" class="pub_image"/><div class="number pub_title">StegaStamp: Invisible Hyperlinks in Physical Photographs</div><a href="#" class="coming-soon names my_name">Matthew Tancik*,</a><a href="https://bmild.github.io/" target="_blank" class="coming-soon names">Ben Mildenhall*,</a><a href="https://scholar.google.com/citations?user=6H0mhLUAAAAJ&hl=en" target="_blank" class="coming-soon names">Ren Ng</a><p class="paragraph">CVPR (2020)</p><a href="http://arxiv.org/abs/1904.05343" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="/stegastamp" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://github.com/tancik/StegaStamp" target="_blank" class="coming-soon lenselinks">Code</a><a href="https://youtube.com/watch?v=E8OqgNDBGO0" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">We present a deep learning method to hide imperceptible data in printed images that can be recovered after photographing the print. The method is robust to corruptions like shadows, occlusions, noise, and shifts in color.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5e72ead59c973a8172d7813e_lighthouse_teaser.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5e72ead59c973a8172d7813e_lighthouse_teaser-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5e72ead59c973a8172d7813e_lighthouse_teaser.png 800w" alt="" class="pub_image"/><div class="number pub_title">Lighthouse: Predicting Lighting Volumes<br/>for Spatially-Coherent Illumination</div><a href="https://pratulsrinivasan.github.io/" target="_blank" class="coming-soon names">Pratul P. Srinivasan*,</a><a href="https://bmild.github.io/" target="_blank" class="coming-soon names">Ben Mildenhall*,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://jonbarron.info/" class="coming-soon names">Jonathan T. 
Barron,</a><a href="https://research.google/people/RichardTucker/" class="coming-soon names">Richard Tucker,</a><a href="https://www.cs.cornell.edu/~snavely/" target="_blank" class="coming-soon names">Noah Snavely</a><p class="paragraph">CVPR (2020)</p><a href="https://arxiv.org/abs/2003.08367" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://people.eecs.berkeley.edu/~pratul/lighthouse/" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://youtube.com/watch?v=KsiZpUFPqIU" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair. We propose a model that estimates a 3D volumetric RGBA model of a scene, including content outside the observed field of view, and then uses standard volume rendering to estimate the incident illumination at any 3D location within that volume. </p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/60679fb925efbe6832bfd479_turkeyes.jpg" sizes="(max-width: 479px) 25vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/60679fb925efbe6832bfd479_turkeyes-p-500.jpeg 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/60679fb925efbe6832bfd479_turkeyes.jpg 800w" alt="" class="pub_image turkeyes"/><div class="number pub_title"><strong>TurkEyes: A Web-Based Toolbox for Crowdsourcing Attention Data</strong></div><a href="https://anelisenewman.com/#/" target="_blank" class="coming-soon names">Anelise Newman,</a><a href="https://www.barryam3.com/" target="_blank" class="coming-soon names">Barry McNamara,</a><a href="#" class="coming-soon names">Camilo Fosco,</a><a href="#" class="coming-soon names">Yun Bin Zhang,</a><a href="#" class="coming-soon names">Pat Sukham,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="https://www.namwkim.org/" target="_blank" class="coming-soon names">Nam Wook Kim,</a><a href="https://web.mit.edu/zoya/www/" target="_blank" class="coming-soon names">Zoya Bylinskii</a><p class="paragraph">CHI (2020)</p><a href="https://arxiv.org/abs/2001.04461" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="http://turkeyes.mit.edu/" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://github.com/turkeyes" target="_blank" class="coming-soon lenselinks">Code</a><p class="paragraph-2 pub_para">Eye movements provide insight into what parts of an image a viewer finds most salient, interesting, or relevant to the task at hand. Unfortunately, eye tracking data, a commonly-used proxy for attention, is cumbersome to collect. 
Here we explore an alternative: a comprehensive web-based toolbox for crowdsourcing visual attention.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a37dd53d8f1660001be3f94_fog_car.jpg" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a37dd53d8f1660001be3f94_fog_car-p-500.jpeg 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a37dd53d8f1660001be3f94_fog_car-p-800.jpeg 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a37dd53d8f1660001be3f94_fog_car.jpg 995w" alt="" class="pub_image"/><div class="number pub_title">Towards Photography Through Realistic Fog</div><a href="http://web.media.mit.edu/~guysatat/index.html#home" target="_blank" class="coming-soon names">Guy Satat,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="http://web.media.mit.edu/~raskar/" target="_blank" class="coming-soon names">Ramesh Raskar</a><p class="paragraph">ICCP (2018)</p><a href="http://web.media.mit.edu/~guysatat/fog/" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="http://web.media.mit.edu/~guysatat/fog/materials/TowardsPhotographyThroughRealisticFog.pdf" target="_blank" class="coming-soon lenselinks">Local Copy</a><a href="https://www.youtube.com/watch?time_continue=168&v=CkR1UowJF0w" target="_blank" class="coming-soon lenselinks">Video</a><a href="http://news.mit.edu/2018/depth-sensing-imaging-system-can-peer-through-fog-0321" target="_blank" class="coming-soon">MIT News</a><p class="paragraph-2 pub_para">We demonstrate a technique that recovers reflectance and depth of a scene obstructed by dense, dynamic, and heterogeneous fog. We use a single photon avalanche diode (SPAD) camera to filter out the light that scatters off of the fog in the scene.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a1cc3a207d9180001e34d5a_Screen%20Shot%202017-11-27%20at%209.01.42%20PM.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a1cc3a207d9180001e34d5a_Screen%20Shot%202017-11-27%20at%209.01.42%20PM-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a1cc3a207d9180001e34d5a_Screen%20Shot%202017-11-27%20at%209.01.42%20PM-p-800.png 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a1cc3a207d9180001e34d5a_Screen%20Shot%202017-11-27%20at%209.01.42%20PM.png 1000w" alt="" class="pub_image"/><div class="number pub_title">Flash Photography for Data-Driven Hidden Scene Recovery</div><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="http://web.media.mit.edu/~guysatat/index.html#home" target="_blank" class="coming-soon names">Guy Satat,</a><a href="http://web.media.mit.edu/~raskar/" target="_blank" class="coming-soon names">Ramesh Raskar</a><p class="paragraph newline"></p><a href="https://arxiv.org/pdf/1810.11710.pdf" target="_blank" class="coming-soon lenselinks">arXiv</a><a href="https://youtu.be/_FnPRE0FwHA" target="_blank" class="coming-soon lenselinks">Video</a><p class="paragraph-2 pub_para">We introduce a method that couples traditional geometric understanding and data-driven techniques to image around corners with consumer cameras. 
We show that we can recover information in real scenes despite only training our models on synthetically generated data.<br/></p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a1cc444b5d43a000142a3d8_Screen%20Shot%202017-11-27%20at%209.04.38%20PM.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a1cc444b5d43a000142a3d8_Screen%20Shot%202017-11-27%20at%209.04.38%20PM-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a1cc444b5d43a000142a3d8_Screen%20Shot%202017-11-27%20at%209.04.38%20PM-p-800.png 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a1cc444b5d43a000142a3d8_Screen%20Shot%202017-11-27%20at%209.04.38%20PM.png 890w" alt="" class="pub_image"/><div class="number pub_title">Photography optics at relativistic speeds</div><a href="http://web.media.mit.edu/~barmak/" target="_blank" class="coming-soon names">Barmak Heshmat,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="http://web.media.mit.edu/~guysatat/index.html#home" target="_blank" class="coming-soon names">Guy Satat,</a><a href="http://web.media.mit.edu/~raskar/" target="_blank" class="coming-soon names">Ramesh Raskar</a><p class="paragraph">Nature Photonics (2018)</p><a href="http://web.media.mit.edu/~barmak/Time-folded.html" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="https://www.nature.com/articles/s41566-018-0234-0" target="_blank" class="coming-soon lenselinks">Nature Article</a><a href="https://www.youtube.com/watch?v=XWxYiMQ5tdI" target="_blank" class="coming-soon lenselinks">Video</a><a href="http://news.mit.edu/2018/novel-optics-ultrafast-cameras-create-new-possibilities-imaging-0813" target="_blank" class="coming-soon">MIT News</a><p class="paragraph-2 pub_para">We demonstrate that by folding the optical path in time, one can collapse the conventional photography optics into a compact volume or multiplex various functionalities into a single imaging optics piece without losing spatial or temporal resolution. By using time-folding at different regions of the optical path, we achieve an order of magnitude lens tube compression, ultrafast multi-zoom imaging, and ultrafast multi-spectral imaging. 
</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a1cc220b5d43a000142a051_Screen%20Shot%202017-11-27%20at%208.55.15%20PM.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a1cc220b5d43a000142a051_Screen%20Shot%202017-11-27%20at%208.55.15%20PM-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/5a1cc220b5d43a000142a051_Screen%20Shot%202017-11-27%20at%208.55.15%20PM.png 614w" alt="" class="pub_image"/><div class="number pub_title">Synthetically Trained Icon Proposals for Parsing and Summarizing Infographics<br/></div><a href="http://people.csail.mit.edu/smadan/web/" target="_blank" class="coming-soon names">Spandan Madan*,</a><a href="http://web.mit.edu/zoya/www/" target="_blank" class="coming-soon names">Zoya Bylinskii*,</a><a href="#" class="coming-soon names my_name">Matthew Tancik*,</a><a href="http://people.csail.mit.edu/recasens/" target="_blank" class="coming-soon names">Adria Recasens,</a><a href="http://kimberli.me/" target="_blank" class="coming-soon names">Kim Zhong,</a><a href="https://www.linkedin.com/in/samialsheikh" target="_blank" class="coming-soon names">Sami Alsheikh,</a><a href="https://www.seas.harvard.edu/directory/pfister" target="_blank" class="coming-soon names">Hanspeter Pfister,</a><a href="http://cvcl.mit.edu/" target="_blank" class="coming-soon names">Aude Olivia,</a><a href="http://people.csail.mit.edu/fredo/" target="_blank" class="coming-soon names">Fredo Durand</a><p class="paragraph newline"></p><a href="https://arxiv.org/abs/1807.10441" class="coming-soon lenselinks">arXiv</a><a href="http://visdata.mit.edu/" target="_blank" class="coming-soon lenselinks">Visually29K</a><p class="paragraph-2 pub_para">Combining icon classification and text extraction, we present a multi-modal summarization application. 
Our application takes an infographic as input and automatically produces text tags and visual hashtags that are textually and visually representative of the infographic’s topics, respectively.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/580b858ad2d6f3fd57668019_setup_view-02.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/580b858ad2d6f3fd57668019_setup_view-02-p-500x231.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/580b858ad2d6f3fd57668019_setup_view-02-p-800x370.png 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/580b858ad2d6f3fd57668019_setup_view-02-p-1080x499.png 1080w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/580b858ad2d6f3fd57668019_setup_view-02-p-1600x739.png 1600w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/580b858ad2d6f3fd57668019_setup_view-02-p-2000x924.png 2000w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/580b858ad2d6f3fd57668019_setup_view-02.png 2435w" alt="" class="pub_image"/><div class="number pub_title">Lensless Imaging with Compressive Ultrafast Sensing</div><a href="http://web.media.mit.edu/~guysatat/index.html#home" target="_blank" class="coming-soon names">Guy Satat,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="http://web.media.mit.edu/~raskar/" target="_blank" class="coming-soon names">Ramesh Raskar</a><p class="paragraph">IEEE Transactions on Computational Imaging (2017)</p><a href="http://web.media.mit.edu/~guysatat/singlepixel/" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="http://web.media.mit.edu/~guysatat/singlepixel/LenslessImagingwithCompressiveUltrafastSensing.pdf" target="_blank" class="coming-soon lenselinks">Local Copy</a><a href="http://ieeexplore.ieee.org/document/7882664/" target="_blank" class="coming-soon lenselinks">IEEE</a><a href="http://news.mit.edu/2017/faster-single-pixel-camera-lensless-imaging-0330" target="_blank" class="coming-soon">MIT News</a><p class="paragraph-2 pub_para">We demonstrate a new imaging method that is lensless and requires only a single pixel. 
Compared to previous single-pixel cameras, our system allows significantly faster and more efficient acquisition by using ultrafast time-resolved measurement with compressive sensing.</p></div><div class="row_pub w-clearfix"><img src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/59654c9d64b8261a1179fa18_fig1-01.png" sizes="(max-width: 479px) 22vw, (max-width: 767px) 23vw, (max-width: 991px) 214.1953125px, 277.796875px" srcset="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/59654c9d64b8261a1179fa18_fig1-01-p-500.png 500w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/59654c9d64b8261a1179fa18_fig1-01-p-800.png 800w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/59654c9d64b8261a1179fa18_fig1-01-p-1080.png 1080w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/59654c9d64b8261a1179fa18_fig1-01-p-1600.png 1600w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/59654c9d64b8261a1179fa18_fig1-01-p-2000.png 2000w, https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/59654c9d64b8261a1179fa18_fig1-01.png 2373w" alt="" class="pub_image"/><div class="number pub_title">Object Classification through Scattering Media with Deep Learning on Time Resolved Measurement</div><a href="http://web.media.mit.edu/~guysatat/index.html#home" target="_blank" class="coming-soon names">Guy Satat,</a><a href="#" class="coming-soon names my_name">Matthew Tancik,</a><a href="http://otkrist.github.io/" target="_blank" class="coming-soon names">Otkrist Gupta,</a><a href="http://web.media.mit.edu/~barmak/" target="_blank" class="coming-soon names">Barmak Heshmat,</a><a href="http://web.media.mit.edu/~raskar/" target="_blank" class="coming-soon names">Ramesh Raskar</a><p class="paragraph">Optics Express (2017)</p><a href="http://web.media.mit.edu/~guysatat/calib_inv/" target="_blank" class="coming-soon lenselinks">Project Website</a><a href="http://web.media.mit.edu/~guysatat/calib_inv/ObjectClassificationthroughScatteringMediawithDeepLearningonTimeResolvedMeasurement_local.pdf" target="_blank" class="coming-soon lenselinks">Local Copy</a><a href="https://www.osapublishing.org/oe/abstract.cfm?uri=oe-25-15-17466" target="_blank" class="coming-soon lenselinks">OSA</a><p class="paragraph-2 pub_para">A deep learning method for object classification through scattering media. Our method trains on synthetic data with variations in calibration parameters, which allows the network to learn a calibration-invariant model.</p></div></div><script src="https://d3e54v103j8qbb.cloudfront.net/js/jquery-3.5.1.min.dc5e7f18c8.js?site=51e0d73d83d06baa7a00000f" type="text/javascript" integrity="sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=" crossorigin="anonymous"></script><script src="https://cdn.prod.website-files.com/51e0d73d83d06baa7a00000f/js/webflow.aa44b82f23e50f148f5564e13f242260.js" type="text/javascript"></script><script> function setColor() { var hue_1 = Math.floor(Math.random()*360); var hue_2 = Math.floor(Math.random()*360); document.querySelector('.logo_svg').style.filter = "hue-rotate("+hue_1+"deg) saturate(65%)"; document.querySelector('.logo_over').style.filter = "hue-rotate("+hue_2+"deg) saturate(65%)"; } window.addEventListener("load", setColor); </script></body></html>