<!DOCTYPE html>
<html lang="en">
<head>
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-FHZW2W76WM"></script>
<script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-FHZW2W76WM'); </script>
<title>Panos Achlioptas</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="author" content="owwwlab.com">
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
<meta name="description" content="Personal webpage" />
<meta name="keywords" content="panos, achlioptas, αχλιοπτας, αχλιοπτας, παναγιωτης, πανος, stanford, machine learning, deep learning, graphics, art, language, emotions" />
<link rel="shortcut icon" href="img/favicons/evo_team.png">
<!--CSS styles-->
<link rel="stylesheet" href="css/bootstrap.css">
<link rel="stylesheet" href="css/font-awesome.min.css">
<link rel="stylesheet" href="css/perfect-scrollbar-0.4.5.min.css">
<link rel="stylesheet" href="css/magnific-popup.css">
<link rel="stylesheet" href="css/style.css">
<link id="theme-style" rel="stylesheet" href="css/styles/default.css">
<!--/CSS styles-->
<!--Javascript files-->
<script type="text/javascript" src="js/jquery-1.10.2.js"></script>
<script type="text/javascript" src="js/TweenMax.min.js"></script>
<script type="text/javascript" src="js/jquery.touchSwipe.min.js"></script>
<script type="text/javascript" src="js/jquery.carouFredSel-6.2.1-packed.js"></script>
<script type="text/javascript" src="js/modernizr.custom.63321.js"></script>
<script type="text/javascript" src="js/jquery.dropdownit.js"></script>
<script type="text/javascript" src="js/jquery.stellar.min.js"></script>
<script type="text/javascript" src="js/ScrollToPlugin.min.js"></script>
<script type="text/javascript" src="js/bootstrap.min.js"></script>
<script type="text/javascript" src="js/jquery.mixitup.min.js"></script>
<script type="text/javascript" src="js/masonry.min.js"></script>
<script type="text/javascript" src="js/perfect-scrollbar-0.4.5.with-mousewheel.min.js"></script>
<script type="text/javascript" src="js/magnific-popup.js"></script>
<script type="text/javascript" src="js/custom.js"></script>
<!--/Javascript files-->
</head>
<body>
<div id="wrapper">
<a href="#sidebar" class="mobilemenu"><i class="icon-reorder"></i></a>
<div id="sidebar">
<div id="main-nav">
<div id="nav-container">
<div id="profile" class="clearfix">
<div class="portrate hidden-xs"></div>
<div class="title">
<h2>Panos Achlioptas</h2>
<h3> <!-- <b><a href="https://www.steelperlot.com/">Steel Perlot</a></b> --> <b>Stealth Mode</b> </h3>
<h3>Ex:<br>
<a href="https://cs.stanford.edu/">Stanford University</a> <br>
<a href="https://is.mpg.de/">Max Planck Inst.</a> <br>
<a href="https://stemcellgenomics.ucsc.edu/">UCSC</a>, <a href="https://www.csd.uoc.gr/CSD/index.jsp?lang=en">UoC.</a> <br>
<a href="https://research.snap.com/team/category/all.html">Snap Inc.</a> <br>
<a href="https://www.autodeskresearch.com/about">AutoDesk</a>, <a href="https://ai.facebook.com/">Meta</a> <br>
</h3>
</div>
</div>
<ul id="navigation">
<li class="currentmenu"> <a href="index.html"> <div class="icon icon-user"></div> <div class="text">About</div> </a> </li>
<li> <a href="publication.html"> <div class="icon icon-edit"></div> <div class="text">Publications</div> </a> </li>
<!-- <li> <a href="projects.html"> <div class="icon icon-book"></div> <div class="text">Projects</div> </a> </li> -->
<li> <a href="academic_activity.html">
<div class="icon icon-time"></div> <div class="text">Academic Activities</div> </a> </li> <!-- <li> <a href="gallery.html"> <div class="icon icon-picture"></div> <div class="text">Gallery</div> </a> </li> --> <li> <a href="contact.html"> <div class="icon icon-calendar"></div> <div class="text">Contact Me</div> </a> </li> <!-- <li> <a href="cv.pdf"> <div class="icon icon-download-alt"></div> <div class="text">Download CV</div> </a> </li> --> </ul> </div> </div> <div class="social-icons"> <ul> <li> <a href="https://www.linkedin.com/in/panos-achlioptas-809b67249"><i class="icon-linkedin"></i></a> </li> <li> <a href = "mailto: pachlioptas@gmail.com" target="_blank"><i class="icon-envelope"></i></a> </li> <li> <a href="https://www.github.com/optas"> <i class="icon-github"> </i> </a> </li> </ul> </div> </div> <div id="main"> <div id="biography" class="page home" data-pos="home"> <div class="pageheader"> <div class="headercontent"> <div class="section-container"> <div class="row"> <div class="col-sm-2 visible-sm"></div> <div class="col-sm-8 col-md-4"> <div class="biothumb"> <img alt="image" src="img/personal/personal-big.png" class="img-responsive"> <div class="overlay"> <h2 class="">Panos Achlioptas</h2> <ul class="list-unstyled"> <!-- <li>Creative Vision</li> --> <!-- <li>Stealth Mode</li> --> <li><b>pachlioptas@gmail.com</b></li> </ul> </div> </div> </div> <div class="clearfix visible-sm visible-xs"></div> <div class="col-sm-12 col-md-7"> <h3 class="title">Short Bio</h3> <p> <i>Currently, I work in stealth mode.</i> <!-- <i>Currently, I am building a team to explore and build new products for one of <a href="https://www.steelperlot.com/"> Steel Perlot</a> Technology's key --> <!-- target areas, working under the direct guidance of <a href="https://www.steelperlot.com/blog1/"> Michelle Ritter</a> and <a href="https://en.wikipedia.org/wiki/Eric_Schmidt"> Eric Schmidt</a>, and, I am --> <!-- hiring! --> <!-- </i> --> </p> <p> My passion lies in exploring the boundaries of AI models with human-like visual & linguistic (and/or emotional) capabilities.</p> <p> My <a href="publication.html"> area of focus</a> is on designing deep learning models for multi-modal data. Specifically, my typical methods use natural language to emphasize semantic differences or the emotional characteristics of depicted objects in visual data. </p> <p>I recently spent a great year being a Research Scientist working for the <a href="https://research.snap.com/team/category/creative-vision/">Creative Vision</a> team of <a href=https://research.snap.com/>Snap Research</a>. </p> <p>I received my Ph.D. degree from the Department of <a href="http://www-cs.stanford.edu/">Computer Science Department</a> of <a href="http://www.stanford.edu/">Stanford University</a> for my work done with the <a href="http://geometry.stanford.edu/">Geometric Computing Lab</a> under the supervision of <a href="https://geometry.stanford.edu/member/guibas/">Leo Guibas</a>. </p> <p>In the past, I interned at the <a href="https://research.fb.com/category/facebook-ai-research/"> Facebook AI Research Lab </a> in Menlo Park and, before that, at <a href="https://www.autodeskresearch.com/about"> Autodesk Research </a> in San Francisco. 
A few years back, I was a research assistant in the <a href="https://hausslergenomics.ucsc.edu">Haussler Lab</a> at <a href="https://www.ucsc.edu">UCSC</a> and an Erasmus scholar at the <a href="https://is.tuebingen.mpg.de">Max Planck Institute for Intelligent Systems</a> in Tübingen, Germany.</p>
<!-- <p> Previously, I did my Master's Thesis at <a href="http://www.stanford.edu/">Stanford</a> and my Diploma Thesis with the <a href="http://spl.edu.gr">Signal Processing Laboratory</a> affiliated with the <a href="http://www.csd.uoc.gr">Computer Science Department</a> of the <a href="http://www.en.uoc.gr">University of Crete</a>.</p> -->
<!-- <p style='text-align: right; font-size: 13pt; padding-top: 10px; font-family:Lucida Sans Typewriter'> <a href='https://en.wikipedia.org/wiki/Philotimo'>Φιλότιμο</a>, <a href='https://greekerthanthegreeks.com/2015/03/lost-in-translation-word-of-day-meraki.html'>Μεράκι</a>, <a href='https://en.wikipedia.org/wiki/Color_wheel_theory_of_love#Agape'>Αγάπη.</a> </p> -->
</div>
</div>
</div>
</div>
</div>
<div class="container" id="Outreach">
<div class="row">
<div class="col-lg-12 col-md-12 col-sm-12 col-xs-12">
<h2 class="section-heading text-center">Outreach</h2>
<hr class="primary" /> &nbsp;
<p style="margin-bottom:5px">On weekends, depending on availability, I voluntarily host <strong>office hours for students</strong> (especially students from underrepresented groups and junior students) who want to get into, or dive deeper into, the fields of Machine Learning, Computer Vision, or NLP. Each slot is 20 minutes long. If you want to schedule a meeting, please fill out this <a href="https://forms.gle/BVUSYn7bXf8HnkuJ6">questionnaire</a>.</p>
</div>
</div>
</div>
<div class="container" id="news">
<div class="row">
<div class="col-lg-12 col-md-12 col-sm-12 col-xs-12">
<h2 class="section-heading text-center">Latest News</h2>
<hr class="primary" /> &nbsp;
<p style="font-size:11pt;margin-bottom:5px"><b>June 2023:</b> I have two first-author papers appearing at <a href="https://cvpr2023.thecvf.com/">CVPR-2023</a> in Vancouver: <a href="https://affective-explanations.org/">Affection</a> and <a href="https://changeit3d.github.io/">ShapeTalk</a>. A big shout-out to all my co-authors who made these works a reality at the top tier of our research community. </p>
<p style="font-size:11pt;margin-bottom:5px"><b>April 2023:</b> Our third workshop at the intersection of <i>3D scenes and natural language</i> for common objects (<a href="https://languagefor3dscenes.github.io/ICCV2023/">L3DS</a>) will be part of the <a href="https://iccv2023.thecvf.com/">ICCV-2023</a> workshop series. </p>
<p style="font-size:11pt;margin-bottom:5px"><b>December 2022:</b> My first last-author paper is now a reality (<a href="https://scanents3d.github.io/">ScanEnts3D</a>). In this work, we exploit dense correspondences between 3D objects in scenes and referential language to improve the SoTA in two essential multi-modal tasks in 3D scenes. A big shout-out to my amazing intern <a href="https://cemse.kaust.edu.sa/people/person/ahmed-s-abdelreheem">Ahmed Abdelreheem</a> for all his hard work. </p>
<p style="font-size:11pt;margin-bottom:5px"><b>November 2022:</b> <a href="https://arxiv.org/abs/2212.05011">LADIS</a>, a rigorous framework for improving shape-editing models of 3D objects via language, will appear in the Findings of <a href="https://2022.emnlp.org/">EMNLP-2022</a>.
</p> <p style="font-size:11pt;margin-bottom:5px"><b>October 2022:</b> A big milestone in my "quest" of developing <b>more emotionally aware, and human-centric AI</b> is achieved. <a href="https://affective-explanations.org/">Affection</a> is now on <a href="https://arxiv.org/abs/2210.01946"> arXiv</a>. </p> <p style="font-size:11pt;margin-bottom:5px"><b>April 2022:</b> <a href="https://arxiv.org/abs/2204.00604">Dance2Music-GAN</a>, a practical adversarial multi-modal framework that generates complex musical samples conditioned on dance videos, will appear at <a href="https://eccv2022.ecva.net/"> ECCV-2022</a>. </p> <p style="font-size:11pt;margin-bottom:5px"><b>March 2022:</b> Our second workshop at the intersection of <strong>3D scenes and natural language</strong> for common objects (<a href="https://languagefor3dscenes.github.io/ECCV2022/">L3DS</a>), will be part of the <a href="https://eccv2022.ecva.net/program/"> ECCV-2022</a> workshop series. We are looking forward to a happy reunion and passionate, productive discussions. </p> <p style="font-size:11pt;margin-bottom:5px"><b>March 2022:</b> <a href="https://formyfamily.github.io/NeROIC/">NeROIC</a>, a novel <strong>object acquisition framework</strong> which exploits and extends radiance fields to capture high-quality 3D objects from online <strong>image</strong> collections, will appear at <a href="https://s2022.siggraph.org/"> SIGGRAPH-2022</a>. </p> <p style="font-size:11pt;margin-bottom:5px"><b>March 2022:</b> <a href="https://arxiv.org/abs/2112.06390">PartGlot</a>, which opens the door for the automatic recognition and localization of shape-parts via referential language <strong>alone</strong>, will appear with an oral presentation in <a href="https://cvpr2022.thecvf.com/">CVPR-2022</a>. </p> <p style="font-size:11pt;margin-bottom:5px"><b>November 2021:</b> Gave a talk describing recent trends on Affective Deep Learning at <b>Stanford's STATS 281</b> <i>Statistical Analysis of Fine Art</i>. </p> <p style="font-size:11pt;margin-bottom:5px"><b>October 2021:</b> I am excited to start my new role as a Research Scientist for the <a href="https://research.snap.com/team/category/creative-vision/"> Creative Vision</a> of SNAP Research. </p> <p style="font-size:11pt;margin-bottom:5px"><b>May 2021:</b> The content of our CVPR-21 workshop <a href="https://language3dscenes.github.io/">L3DS</a> concerning language and 3D scenes has been finalized! Among others, we will host a benchmark challenge for <b>ReferIt3D</b>: (<a href="https://referit3d.github.io/benchmarks.html">here</a>). </p> <p style="font-size:11pt;margin-bottom:5px"><b>March 2021:</b> <a href="https://github.com/optas/artemis">ArtEmis</a> keeps growing. Now it is featured in <a href="https://www.forbes.com/sites/evaamsen/2021/03/30/artificial-intelligence-is-learning-to-categorize-and-talk-about-art/?sh=21d5d4fc37ac&utm_source=TWITTER&utm_medium=social&utm_content=4681414173&utm_campaign=sprinklrForbesScience&fbclid=IwAR1Q42fcjUIQK_qe9IgECgE7mXR_VTfzH77vnMWwp3Dwy8meXN5zy4Wt9T8">Forbes Science</a>. </p> <p style="font-size:11pt;margin-bottom:5px"><b>March 2021:</b> I successfully defended my <b>Ph.D. Thesis</b> titled <i>"Learning to Generate and Differentiate 3D Objects Using Geometry & Language"</i>. 
</p> <p style="font-size:11pt;margin-bottom:5px"><b>March 2021:</b> I will give a lightning talk on <i>"Art and AI"</i> during HAI鈥檚 <a href="https://hai.stanford.edu/2021-spring-conference-agenda">Intelligence Augmentation: AI Empowering People to Solve Global Challenges.</a> </p> <p style="font-size:11pt;margin-bottom:5px"><b>March 2021: </b>Our work <a href="http://www.artemisdataset.org">ArtEmis</a>: <i> Affective Language for Visual Art</i> is provisionally accepted as an <i>Oral</i> presentation in <a href="http://cvpr2021.thecvf.com">CVPR-2021</a>.</p> <p style="font-size:11pt;margin-bottom:5px"><b>February 2021:</b> Our recent arXiv report (<a href="https://arxiv.org/abs/2101.07396">ArtEmis</a>) attracted some media attention: <a href="https://www.newscientist.com/article/2266240-ai-art-critic-can-predict-which-emotions-a-painting-will-evoke/">New Scientist</a>, <a href="https://hai.stanford.edu/news/artists-intent-ai-recognizes-emotions-visual-art"> HAI</a>, <a href="https://www.marktechpost.com/2021/01/30/stanford-researchers-introduces-artemis-a-dataset-containing-439k-emotion-attributions/">MarkTechPost</a>, <a href=https://www.radio.com/kcbsradio> KCBS-Radio</a> (want to hear me talk about it? check the short interview below): <div style="text-align: center"><audio controls> <source src="data/interviews/artemis/kcbs/SAT-AI-ART_2_2-6-21(disco_mix).mp3" type="audio/mpeg"></audio> </div> </p> <p style="font-size:11pt;margin-bottom:5px"><b>February 2021: </b> I will co-organize the 1st Workshop on <i>Language for 3D Scenes</i> in CVPR 2021. We hope to spark new interest in this emerging area!</p> <!-- (<a href="https://language3dscenes.github.io/">page-coming-soon</a>) --> <p style="font-size:11pt;margin-bottom:5px"><b>February 2021:</b> I am initiating this "News" section. My intention is to give the <b>gist</b> of my (primarily) professional updates to visitors.</p> <!-- <p id="ShowLink_news"><a title="Show more News" href="#" onclick="var elements = document.getElementsByClassName('hidden_news');for(var i=0; i<elements.length; i++) {elements[i].style.display='block';}document.getElementById('ShowLink_news').style.display='none';document.getElementById('HideLink_news').style.display='block';return false;">Show more</a></p> <p id="HideLink_news" style="display:none"><a title="Show less News" href="#" onclick="var elements = document.getElementsByClassName('hidden_news');for(var i=0; i<elements.length; i++) {elements[i].style.display='none';}document.getElementById('HideLink_news').style.display='none';document.getElementById('ShowLink_news').style.display='block';return false;">Show less</a></p> --> </div> </div> </div> </div> </div> </body> </html>
