<!DOCTYPE html> <html lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <title> CoMMA Lab @ Purdue </title> <meta name="author" content=" "> <meta name="description" content="The Computational Robot Motion and Autonomy Lab at Purdue University "> <meta name="keywords" content="jekyll, jekyll-theme, academic-website, portfolio-website"> <link rel="stylesheet" href="/assets/css/bootstrap.min.css?a4b3f509e79c54a512b890d73235ef04"> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/mdbootstrap@4.20.0/css/mdb.min.css" integrity="sha256-jpjYvU3G3N6nrrBwXJoVEYI/0zw8htfFnhT9ljN3JJw=" crossorigin="anonymous"> <link defer rel="stylesheet" href="/assets/css/academicons.min.css?f0b7046b84e425c55f3463ac249818f5"> <link defer rel="stylesheet" type="text/css" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700|Roboto+Slab:100,300,400,500,700|Material+Icons&amp;display=swap"> <link defer rel="stylesheet" href="/assets/css/jekyll-pygments-themes-github.css?591dab5a4e56573bf4ef7fd332894c99" media="" id="highlight_theme_light"> <link rel="shortcut icon" href="data:image/svg+xml,&lt;svg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20100%20100%22&gt;&lt;text%20y=%22.9em%22%20font-size=%2290%22&gt;%F0%9F%A6%BE&lt;/text&gt;&lt;/svg&gt;"> <link rel="stylesheet" href="/assets/css/main.css?d41d8cd98f00b204e9800998ecf8427e"> <link rel="canonical" href="https://commalab.github.io/"> <script src="/assets/js/theme.js?ca131c86afeddc68f0e9d3278afbc9b8"></script> <link defer rel="stylesheet" href="/assets/css/jekyll-pygments-themes-native.css?5847e5ed4a4568527aa6cfab446049ca" media="none" id="highlight_theme_dark"> <script>initTheme();</script> </head> <body class="fixed-top-nav sticky-bottom-footer"> <header> <nav id="navbar" class="navbar navbar-light 
navbar-expand-sm fixed-top" role="navigation"> <div class="container"> <button class="navbar-toggler collapsed ml-auto" type="button" data-toggle="collapse" data-target="#navbarNav" aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar top-bar"></span> <span class="icon-bar middle-bar"></span> <span class="icon-bar bottom-bar"></span> </button> <div class="collapse navbar-collapse text-right" id="navbarNav"> <ul class="navbar-nav ml-auto flex-nowrap"> <li class="nav-item active"> <a class="nav-link" href="/">About <span class="sr-only">(current)</span> </a> </li> <li class="nav-item "> <a class="nav-link" href="/members">Team </a> </li> <li class="nav-item "> <a class="nav-link" href="/publications/">Publications </a> </li> <li class="nav-item "> <a class="nav-link" href="/projects/">Research </a> </li> <li class="nav-item "> <a class="nav-link" href="/repositories/">Code </a> </li> <li class="nav-item "> <a class="nav-link" href="/classes/">Teaching </a> </li> <li class="nav-item "> <a class="nav-link" href="/join">Join </a> </li> <li class="toggle-container"> <button id="light-toggle" title="Change theme"> <i class="ti ti-moon-filled" id="light-toggle-dark"></i> <i class="ti ti-sun-filled" id="light-toggle-light"></i> </button> </li> </ul> </div> </div> </nav> <progress id="progress" value="0"> <div class="progress-container"> <span class="progress-bar"></span> </div> </progress> </header> <div class="container mt-5" role="main"> <div class="post"> <header class="post-header"> <h1 class="post-title"> CoMMA Lab @ Purdue </h1> <p class="desc"></p> </header> <article> <div class="clearfix"> <p> </p> <h5> Welcome to the <b class="highlight">Co</b>mputational <b class="highlight">M</b>otion, <b class="highlight">M</b>anipulation, and <b class="highlight">A</b>utonomy (CoMMA) lab at <a href="https://www.purdue.edu/" rel="external nofollow noopener" target="_blank">Purdue</a>! 
</h5> <p><br></p> <h2 id="about-us">About Us</h2> <p>Our research encompasses algorithms, methods, and software that enable complex robots and autonomous systems to achieve complicated tasks in the real world. We focus on how robots decide which actions to take, in what order to take them, and how to move through the world to accomplish them. We are interested in techniques that generalize to any robotic system, constraint, or environment and that are fast, efficient, and easy to use within a broader system; we want our approaches to apply to robots that work in factories, homes, hospitals, and even space. We are also interested in the intersection of the theory and practice of robotics algorithms, finding where software engineering, hardware acceleration, and intelligent algorithm design can synergize to create a whole greater than the sum of its parts.</p> <hr> <h2>Research Areas</h2> <p> </p> <div class="row"> <div class="col-sm-4 col-md-4 tight-col"> <div class="card hoverable"> <div class="card-body"> <a href="/projects/constraints/"> <h5 class="card-title">Planning with Constraints</h5> <figure> <video src="/assets/video/constraints.webm" class="img-fluid z-depth-1" width="100%" height="auto" autoplay="" loop="" muted=""></video> </figure> </a> </div> </div> </div> <div class="col-sm-4 col-md-4 tight-col"> <div class="card hoverable"> <div class="card-body"> <a href="/projects/long_horizon/"> <h5 class="card-title">Long-Horizon Planning</h5> <figure> <video src="/assets/video/tamp.webm" class="img-fluid z-depth-1" width="100%" height="auto" autoplay="" loop="" muted=""></video> </figure> </a> </div> </div> </div> <div class="col-sm-4 col-md-4 tight-col"> <div class="card hoverable"> <div class="card-body"> <a href="/projects/realtime_performance/"> <h5 class="card-title">Real-time Performance</h5> <figure> <video src="/assets/video/fast_planning.webm" class="img-fluid z-depth-1" width="100%" height="auto" autoplay="" 
loop="" muted=""></video> </figure> </a> </div> </div> </div> <div class="col-sm-4 col-md-4 tight-col"> <div class="card hoverable"> <div class="card-body"> <a href="/projects/human_robot/"> <h5 class="card-title">Human-Robot Collaboration</h5> </a> </div> </div> </div> <div class="col-sm-4 col-md-4 tight-col"> <div class="card hoverable"> <div class="card-body"> <a href="/projects/implicit/"> <h5 class="card-title">Implicit and Learned Models</h5> </a> </div> </div> </div> <div class="col-sm-4 col-md-4 tight-col"> <div class="card hoverable"> <div class="card-body"> <a href="/projects/software/"> <h5 class="card-title">Robotics Software</h5> </a> </div> </div> </div> </div> <p><br></p> </div> <hr> <h2> <a href="/news/" style="color: inherit">News</a> </h2> <div class="news"> <div class="table-responsive" style="max-height: 60vw"> <table class="table table-sm table-borderless"> <tr> <th scope="row" style="width: 20%">Mar 13, 2025</th> <td> <a href="https://www.linkedin.com/posts/brianplancher_robotics-computerarchitecture-computersystems-activity-7305940261052796928-zhnf?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAEc5N6ABK7LXY73p6E57GdRk_3yLdOp75kk" class="card-link" target="_blank" rel="external nofollow noopener"><i class="fab fa-linkedin"></i></a>&nbsp; The <a href="https://sites.google.com/view/roboarch-icra25" rel="external nofollow noopener" target="_blank">RoboARCH</a> workshop at <a href="https://2025.ieee-icra.org/" rel="external nofollow noopener" target="_blank">ICRA 2025</a> welcomes <a href="https://easychair.org/conferences/?conf=roboarchicra25" rel="external nofollow noopener" target="_blank">2-page abstract submissions</a> on accelerating robotics applications with advanced hardware or software engineering! 
</td> </tr> <tr> <th scope="row" style="width: 20%">Feb 07, 2025</th> <td> <a href="https://www.linkedin.com/posts/zachary-kingston-79421b294_physics-grounded-differentiable-simulation-activity-7293037235493380097-eoCq?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAEc5N6ABK7LXY73p6E57GdRk_3yLdOp75kk" class="card-link" target="_blank" rel="external nofollow noopener"><i class="fab fa-linkedin"></i></a>&nbsp; In collaboration with the <a href="https://purdueraadlab.wixsite.com/website-1" rel="external nofollow noopener" target="_blank">RAAD Lab at Purdue</a>, <a href="/members/lucas">Lucas Chen</a> and <a href="/members/yitian">Yitian Gao</a> will present their paper on <a href="publications#chengao2025diffsim">a differentiable simulator for vine robots</a> at <a href="https://robosoft2025.org/" rel="external nofollow noopener" target="_blank">RoboSoft 2025</a>! </td> </tr> <tr> <th scope="row" style="width: 20%">Jan 28, 2025</th> <td> <a href="https://www.linkedin.com/posts/zachary-kingston-79421b294_happy-to-announce-three-papers-have-been-activity-7289764086815375360-jbOW?utm_source=share&amp;utm_medium=member_desktop" class="card-link" target="_blank" rel="external nofollow noopener"><i class="fab fa-linkedin"></i></a>&nbsp; <a class="news-title" href="/news/250128_icra/">The CoMMA Lab will be presenting three papers at ICRA 2025!</a> </td> </tr> <tr> <th scope="row" style="width: 20%">Dec 08, 2024</th> <td> In collaboration with <a href="https://rdl.cecs.anu.edu.au/" rel="external nofollow noopener" target="_blank">Yuanchu Liang, Edward Kim, and Hanna Kurniawati from ANU</a>, a new work on <a href="publications#liang2024ropras">POMDP solving accelerated with VAMP</a> will be presented at <a href="https://isrr2024.su.domains/" rel="external nofollow noopener" target="_blank">ISRR 2024</a>. 
</td> </tr> <tr> <th scope="row" style="width: 20%">Sep 23, 2024</th> <td> <a href="https://www.linkedin.com/in/qingxi-meng-0b733a125/" rel="external nofollow noopener" target="_blank">Qingxi Meng</a> will present an abstract on <a href="publications#meng2024icra40">perception-aware planning for robotics</a> at <a href="https://icra40.ieee.org/" rel="external nofollow noopener" target="_blank">ICRA@40</a>! </td> </tr> </table> </div> </div> <hr> <h2> <a href="/publications/" style="color: inherit">Selected Publications</a> </h2> <div class="publications"> <ol class="bibliography"> <li> <div class="row"> <a name="chengao2025diffsim" class="anchor"></a> <div class="col col-sm-2 abbr"> <abbr class="badge rounded w-100">RoboSoft</abbr> <figure> <picture> <source class="responsive-img-srcset" srcset="/assets/img/publication_preview/robosoft25-480.webp 480w,/assets/img/publication_preview/robosoft25-800.webp 800w,/assets/img/publication_preview/robosoft25-1400.webp 1400w," sizes="200px" type="image/webp"> <img src="/assets/img/publication_preview/robosoft25.jpg" class="preview z-depth-1 rounded" width="100%" height="auto" alt="robosoft25.jpg" data-zoomable loading="eager" onerror="this.onerror=null; $('.responsive-img-srcset').remove();"> </source></picture> </figure> </div> <div class="col-sm-10"> <div class="title">Physics-Grounded Differentiable Simulation for Soft Growing Robots</div> <div class="author"> <a href="/members/lucas">Lucas&nbsp;Chen<sup>*</sup></a>, <a href="/members/yitian">Yitian&nbsp;Gao<sup>*</sup></a>, Sicheng&nbsp;Wang, Francesco&nbsp;Fuentes, <a href="https://engineering.purdue.edu/ME/People/ptProfile?resource_id=241064" class="nonmember" rel="external nofollow noopener" target="_blank">Laura H.&nbsp;Blumenschein</a>, and&nbsp;<a href="/members/kingston">Zachary&nbsp;Kingston</a> </div> <div class="periodical"> <em>In IEEE-RAS International Conference on Soft Robotics</em> </div> <div class="periodical"> To Appear </div> <div class="links"> <a class="abstract btn btn-sm z-depth-0" 
role="button">Abs</a> <a class="bibtex btn btn-sm z-depth-0" role="button">Bib</a> <a href="https://arxiv.org/abs/2501.17963" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">PDF</a> <a href="https://github.com/CoMMALab/DiffVineSimPy" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Code</a> </div> <div class="abstract hidden"> <p> Soft-growing robots (i.e., vine robots) are a promising class of soft robots that allow for navigation and growth in tightly confined environments. However, these robots remain challenging to model and control due to the complex interplay of the inflated structure and inextensible materials, which leads to obstacles for autonomous operation and design optimization. Although there exist simulators for these systems that have achieved qualitative and quantitative success in matching high-level behavior, they still often fail to capture realistic vine robot shapes using simplified parameter models and have difficulties in high-throughput simulation necessary for planning and parameter optimization. We propose a differentiable simulator for these systems, enabling the use of the simulator "in-the-loop" of gradient-based optimization approaches to address the issues listed above. With the more complex parameter fitting made possible by this approach, we experimentally validate and integrate a closed-form nonlinear stiffness model for thin-walled inflated tubes based on a first-principles approach to local material wrinkling. Our simulator also takes advantage of data-parallel operations by leveraging existing differentiable computation frameworks, allowing multiple simultaneous rollouts. We demonstrate the feasibility of using a physics-grounded nonlinear stiffness model within our simulator, and how it can be an effective tool in sim-to-real transfer. 
We provide our implementation open source.</p> </div> <div class="bibtex hidden"> <figure class="highlight"><pre><code class="language-bibtex" data-lang="bibtex"><span class="nc">@inproceedings</span><span class="p">{</span><span class="nl">chengao2025diffsim</span><span class="p">,</span> <span class="na">title</span> <span class="p">=</span> <span class="s">{Physics-Grounded Differentiable Simulation for Soft Growing Robots}</span><span class="p">,</span> <span class="na">author</span> <span class="p">=</span> <span class="s">{Chen, Lucas and Gao, Yitian and Wang, Sicheng and Fuentes, Francesco and Blumenschein, Laura H. and Kingston, Zachary}</span><span class="p">,</span> <span class="na">year</span> <span class="p">=</span> <span class="s">{2025}</span><span class="p">,</span> <span class="na">booktitle</span> <span class="p">=</span> <span class="s">{IEEE-RAS International Conference on Soft Robotics}</span><span class="p">,</span> <span class="na">eprint</span> <span class="p">=</span> <span class="s">{2501.17963}</span><span class="p">,</span> <span class="na">archiveprefix</span> <span class="p">=</span> <span class="s">{arXiv}</span><span class="p">,</span> <span class="na">primaryclass</span> <span class="p">=</span> <span class="s">{cs.RO}</span><span class="p">,</span> <span class="na">note</span> <span class="p">=</span> <span class="s">{To Appear}</span><span class="p">,</span> <span class="p">}</span></code></pre></figure> </div> </div> </div> </li> <li> <div class="row"> <a name="ramsey2024" class="anchor"></a> <div class="col col-sm-2 abbr"> <abbr class="badge rounded w-100" style="background-color:#DA7900"> <a href="https://roboticsconference.org/" rel="external nofollow noopener" target="_blank"> RSS </a> </abbr> </div> <div class="col-sm-10"> <div class="title">Collision-Affording Point Trees: SIMD-Amenable Nearest Neighbors for Fast Collision Checking</div> <div class="author"> <a href="https://claytonwramsey.com" class="nonmember" 
rel="external nofollow noopener" target="_blank">Clayton W.&nbsp;Ramsey</a>, <a href="/members/kingston">Zachary&nbsp;Kingston<sup>*</sup></a>, <a href="https://wbthomason.com/" class="nonmember" rel="external nofollow noopener" target="_blank">Wil&nbsp;Thomason<sup>*</sup></a>, and&nbsp;<a href="https://profiles.rice.edu/faculty/lydia-e-kavraki" class="nonmember" rel="external nofollow noopener" target="_blank">Lydia E.&nbsp;Kavraki</a> </div> <div class="periodical"> <em>In Robotics: Science and Systems</em> </div> <div class="periodical"> </div> <div class="links"> <a class="abstract btn btn-sm z-depth-0" role="button">Abs</a> <a class="bibtex btn btn-sm z-depth-0" role="button">Bib</a> <a href="https://arxiv.org/abs/2406.02807" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">PDF</a> <a href="https://www.youtube.com/watch?v=BzDKdrU1VpM" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Video</a> <a href="https://claytonwramsey.com/blog/captree" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Blog</a> <a href="https://github.com/kavrakilab/vamp" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Code</a> <a href="https://doi.org/10.15607/RSS.2024.XX.038" rel="external nofollow noopener" target="_blank"> <i class="ai ai-doi ml-1"> </i> </a> </div> <div class="abstract hidden"> <p>Motion planning against sensor data is often a critical bottleneck in real-time robot control. For sampling-based motion planners, which are effective for high-dimensional systems such as manipulators, the most time-intensive component is collision checking. 
We present a novel spatial data structure, the collision-affording point tree (CAPT): an exact representation of point clouds that accelerates collision-checking queries between robots and point clouds by an order of magnitude, with an average query time of less than 10 nanoseconds on 3D scenes comprising thousands of points. With the CAPT, sampling-based planners can generate valid, high-quality paths in under a millisecond, with total end-to-end computation time faster than 60 FPS, on a single thread of a consumer-grade CPU. We also present a point cloud filtering algorithm, based on space-filling curves, which reduces the number of points in a point cloud while preserving structure. Our approach enables robots to plan at real-time speeds in sensed environments, opening up potential uses of planning for high-dimensional systems in dynamic, changing, and unmodeled environments.</p> </div> <div class="bibtex hidden"> <figure class="highlight"><pre><code class="language-bibtex" data-lang="bibtex"><span class="nc">@inproceedings</span><span class="p">{</span><span class="nl">ramsey2024</span><span class="p">,</span> <span class="na">author</span> <span class="p">=</span> <span class="s">{Ramsey, Clayton W. 
and Kingston, Zachary and Thomason, Wil and Kavraki, Lydia E.}</span><span class="p">,</span> <span class="na">title</span> <span class="p">=</span> <span class="s">{Collision-Affording Point Trees: SIMD-Amenable Nearest Neighbors for Fast Collision Checking}</span><span class="p">,</span> <span class="na">booktitle</span> <span class="p">=</span> <span class="s">{Robotics: Science and Systems}</span><span class="p">,</span> <span class="na">year</span> <span class="p">=</span> <span class="s">{2024}</span><span class="p">,</span> <span class="na">doi</span> <span class="p">=</span> <span class="s">{10.15607/RSS.2024.XX.038}</span><span class="p">,</span> <span class="p">}</span></code></pre></figure> </div> </div> </div> </li> <li> <div class="row"> <a name="thomason2024vamp" class="anchor"></a> <div class="col col-sm-2 abbr"> <abbr class="badge rounded w-100" style="background-color:#daaa00"> <a href="https://ieee-icra.org" rel="external nofollow noopener" target="_blank"> ICRA </a> </abbr> </div> <div class="col-sm-10"> <div class="title">Motions in Microseconds via Vectorized Sampling-Based Planning</div> <div class="author"> <a href="https://wbthomason.com/" class="nonmember" rel="external nofollow noopener" target="_blank">Wil&nbsp;Thomason<sup>*</sup></a>, <a href="/members/kingston">Zachary&nbsp;Kingston<sup>*</sup></a>, and&nbsp;<a href="https://profiles.rice.edu/faculty/lydia-e-kavraki" class="nonmember" rel="external nofollow noopener" target="_blank">Lydia E.&nbsp;Kavraki</a> </div> <div class="periodical"> <em>In IEEE International Conference on Robotics and Automation</em> </div> <div class="periodical"> </div> <div class="links"> <a class="abstract btn btn-sm z-depth-0" role="button">Abs</a> <a class="bibtex btn btn-sm z-depth-0" role="button">Bib</a> <a href="https://arxiv.org/abs/2309.14545" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">PDF</a> <a href="https://github.com/kavrakilab/vamp" class="btn btn-sm z-depth-0" 
role="button" rel="external nofollow noopener" target="_blank">Code</a> <a href="https://doi.org/10.1109/ICRA57147.2024.10611190" rel="external nofollow noopener" target="_blank"> <i class="ai ai-doi ml-1"> </i> </a> </div> <div class="abstract hidden"> <p>Modern sampling-based motion planning algorithms typically take from hundreds of milliseconds to dozens of seconds to find collision-free motions for high degree-of-freedom problems. This paper presents performance improvements of more than 500x over the state-of-the-art, bringing planning times into the range of microseconds and solution rates into the range of kilohertz, without specialized hardware. Our key insight is how to exploit fine-grained parallelism within sampling-based planners, providing generality-preserving algorithmic improvements to any such planner and significantly accelerating critical subroutines, such as forward kinematics and collision checking. We demonstrate our approach over a diverse set of challenging, realistic problems for complex robots ranging from 7 to 14 degrees-of-freedom. Moreover, we show that our approach does not require high-power hardware by also evaluating on a low-power single-board computer. 
The planning speeds demonstrated are fast enough to reside in the range of control frequencies and open up new avenues of motion planning research.</p> </div> <div class="bibtex hidden"> <figure class="highlight"><pre><code class="language-bibtex" data-lang="bibtex"><span class="nc">@inproceedings</span><span class="p">{</span><span class="nl">thomason2024vamp</span><span class="p">,</span> <span class="na">author</span> <span class="p">=</span> <span class="s">{Thomason, Wil and Kingston, Zachary and Kavraki, Lydia E.}</span><span class="p">,</span> <span class="na">title</span> <span class="p">=</span> <span class="s">{Motions in Microseconds via Vectorized Sampling-Based Planning}</span><span class="p">,</span> <span class="na">year</span> <span class="p">=</span> <span class="s">{2024}</span><span class="p">,</span> <span class="na">booktitle</span> <span class="p">=</span> <span class="s">{IEEE International Conference on Robotics and Automation}</span><span class="p">,</span> <span class="na">pages</span> <span class="p">=</span> <span class="s">{8749--8756}</span><span class="p">,</span> <span class="na">doi</span> <span class="p">=</span> <span class="s">{10.1109/ICRA57147.2024.10611190}</span><span class="p">,</span> <span class="p">}</span></code></pre></figure> </div> </div> </div> </li> <li> <div class="row"> <a name="kingston2023tro" class="anchor"></a> <div class="col col-sm-2 abbr"> <abbr class="badge rounded w-100" style="background-color:#9C793E"> <a href="https://www.ieee-ras.org/publications/t-ro" rel="external nofollow noopener" target="_blank"> T-RO </a> </abbr> <figure> <picture> <source class="responsive-img-srcset" srcset="/assets/img/publication_preview/r2_walking-480.webp 480w,/assets/img/publication_preview/r2_walking-800.webp 800w,/assets/img/publication_preview/r2_walking-1400.webp 1400w," sizes="200px" type="image/webp"> <img src="/assets/img/publication_preview/r2_walking.jpg" class="preview z-depth-1 rounded" width="100%" 
height="auto" alt="r2_walking.jpg" data-zoomable loading="eager" onerror="this.onerror=null; $('.responsive-img-srcset').remove();"> </source></picture> </figure> </div> <div class="col-sm-10"> <div class="title">Scaling Multimodal Planning: Using Experience and Informing Discrete Search</div> <div class="author"> <a href="/members/kingston">Zachary&nbsp;Kingston</a>&nbsp;and&nbsp;<a href="https://profiles.rice.edu/faculty/lydia-e-kavraki" class="nonmember" rel="external nofollow noopener" target="_blank">Lydia E.&nbsp;Kavraki</a> </div> <div class="periodical"> <em>IEEE Transactions on Robotics</em> </div> <div class="periodical"> </div> <div class="links"> <a class="abstract btn btn-sm z-depth-0" role="button">Abs</a> <a class="bibtex btn btn-sm z-depth-0" role="button">Bib</a> <a href="http://kavrakilab.org/publications/kingston2022-scaling-mmp.pdf" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">PDF</a> <a href="https://player.vimeo.com/video/743110686?loop=1&amp;color=ffffff&amp;byline=0&amp;portrait=0" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Video</a> <a href="https://doi.org/10.1109/TRO.2022.3197080" rel="external nofollow noopener" target="_blank"> <i class="ai ai-doi ml-1"> </i> </a> </div> <div class="abstract hidden"> <p>Robotic manipulation is inherently continuous, but typically has an underlying discrete structure, such as if an object is grasped. Many problems like these are multi-modal, such as pick-and-place tasks where every object grasp and placement is a mode. Multi-modal problems require finding a sequence of transitions between modes - for example, a particular sequence of object picks and placements. However, many multi-modal planners fail to scale when motion planning is difficult (e.g., in clutter) or the task has a long horizon (e.g., rearrangement). This work presents solutions for multi-modal scalability in both these areas. 
For motion planning, we present an experience-based planning framework ALEF which reuses experience from similar modes both online and from training data. For task satisfaction, we present a layered planning approach that uses a discrete lead to bias search towards useful mode transitions, informed by weights over mode transitions. Together, these contributions enable multi-modal planners to tackle complex manipulation tasks that were previously infeasible or inefficient, and provide significant improvements in scenes with high-dimensional robots.</p> </div> <div class="bibtex hidden"> <figure class="highlight"><pre><code class="language-bibtex" data-lang="bibtex"><span class="nc">@article</span><span class="p">{</span><span class="nl">kingston2023tro</span><span class="p">,</span> <span class="na">author</span> <span class="p">=</span> <span class="s">{Kingston, Zachary and Kavraki, Lydia E.}</span><span class="p">,</span> <span class="na">journal</span> <span class="p">=</span> <span class="s">{IEEE Transactions on Robotics}</span><span class="p">,</span> <span class="na">title</span> <span class="p">=</span> <span class="s">{Scaling Multimodal Planning: Using Experience and Informing Discrete Search}</span><span class="p">,</span> <span class="na">year</span> <span class="p">=</span> <span class="s">{2023}</span><span class="p">,</span> <span class="na">volume</span> <span class="p">=</span> <span class="s">{39}</span><span class="p">,</span> <span class="na">number</span> <span class="p">=</span> <span class="s">{1}</span><span class="p">,</span> <span class="na">pages</span> <span class="p">=</span> <span class="s">{128--146}</span><span class="p">,</span> <span class="na">doi</span> <span class="p">=</span> <span class="s">{10.1109/TRO.2022.3197080}</span><span class="p">,</span> <span class="p">}</span></code></pre></figure> </div> </div> </div> </li> <li> <div class="row"> <a name="kingston2019imacs" class="anchor"></a> <div class="col col-sm-2 abbr"> <abbr 
class="badge rounded w-100" style="background-color:#7D615D"> <a href="https://journals.sagepub.com/home/ijr" rel="external nofollow noopener" target="_blank"> IJRR </a> </abbr> <figure> <picture> <source class="responsive-img-srcset" srcset="/assets/img/publication_preview/parallel-480.webp 480w,/assets/img/publication_preview/parallel-800.webp 800w,/assets/img/publication_preview/parallel-1400.webp 1400w," sizes="200px" type="image/webp"> <img src="/assets/img/publication_preview/parallel.jpg" class="preview z-depth-1 rounded" width="100%" height="auto" alt="parallel.jpg" data-zoomable loading="eager" onerror="this.onerror=null; $('.responsive-img-srcset').remove();"> </source></picture> </figure> </div> <div class="col-sm-10"> <div class="title">Exploring Implicit Spaces for Constrained Sampling-Based Planning</div> <div class="author"> <a href="/members/kingston">Zachary&nbsp;Kingston</a>, <a href="https://moll.ai/" class="nonmember" rel="external nofollow noopener" target="_blank">Mark&nbsp;Moll</a>, and&nbsp;<a href="https://profiles.rice.edu/faculty/lydia-e-kavraki" class="nonmember" rel="external nofollow noopener" target="_blank">Lydia E.&nbsp;Kavraki</a> </div> <div class="periodical"> <em>The International Journal of Robotics Research</em> </div> <div class="periodical"> </div> <div class="links"> <a class="abstract btn btn-sm z-depth-0" role="button">Abs</a> <a class="bibtex btn btn-sm z-depth-0" role="button">Bib</a> <a href="http://kavrakilab.org/publications/kingston2019exploring-implicit-spaces-for-constrained.pdf" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">PDF</a> <a href="https://ompl.kavrakilab.org/constrainedPlanning.html" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Code</a> <a href="https://doi.org/10.1177/0278364919868530" rel="external nofollow noopener" target="_blank"> <i class="ai ai-doi ml-1"> </i> </a> </div> <div class="abstract hidden"> <p>We present a review 
and reformulation of manifold constrained sampling-based motion planning within a unifying framework, IMACS (implicit manifold configuration space). IMACS enables a broad class of motion planners to plan in the presence of manifold constraints, decoupling the choice of motion planning algorithm and method for constraint adherence into orthogonal choices. We show that implicit configuration spaces defined by constraints can be presented to sampling-based planners by addressing two key fundamental primitives, sampling and local planning, and that IMACS preserves theoretical properties of probabilistic completeness and asymptotic optimality through these primitives. Within IMACS, we implement projection- and continuation-based methods for constraint adherence, and demonstrate the framework on a range of planners with both methods in simulated and realistic scenarios. Our results show that the choice of method for constraint adherence depends on many factors and that novel combinations of planners and methods of constraint adherence can be more effective than previous approaches. 
Our implementation of IMACS is open source within the Open Motion Planning Library and is easily extended for novel planners and constraint spaces.</p> </div> <div class="bibtex hidden"> <figure class="highlight"><pre><code class="language-bibtex" data-lang="bibtex"><span class="nc">@article</span><span class="p">{</span><span class="nl">kingston2019imacs</span><span class="p">,</span> <span class="na">author</span> <span class="p">=</span> <span class="s">{Kingston, Zachary and Moll, Mark and Kavraki, Lydia E.}</span><span class="p">,</span> <span class="na">title</span> <span class="p">=</span> <span class="s">{Exploring Implicit Spaces for Constrained Sampling-Based Planning}</span><span class="p">,</span> <span class="na">journal</span> <span class="p">=</span> <span class="s">{The International Journal of Robotics Research}</span><span class="p">,</span> <span class="na">year</span> <span class="p">=</span> <span class="s">{2019}</span><span class="p">,</span> <span class="na">volume</span> <span class="p">=</span> <span class="s">{38}</span><span class="p">,</span> <span class="na">number</span> <span class="p">=</span> <span class="s">{10--11}</span><span class="p">,</span> <span class="na">pages</span> <span class="p">=</span> <span class="s">{1151--1178}</span><span class="p">,</span> <span class="na">month</span> <span class="p">=</span> <span class="nv">sep</span><span class="p">,</span> <span class="na">doi</span> <span class="p">=</span> <span class="s">{10.1177/0278364919868530}</span><span class="p">,</span> <span class="p">}</span></code></pre></figure> </div> </div> </div> </li> </ol> </div> </article> </div> </div> <footer class="sticky-bottom mt-5" role="contentinfo"> <div class="container"> CoMMA Lab @ <a href="https://www.purdue.edu/" rel="external nofollow noopener" target="_blank">Purdue University</a>, <a href="https://www.cs.purdue.edu/" rel="external nofollow noopener" target="_blank">Department of Computer Science</a>. &copy; Copyright 2025. 
Last updated: March 20, 2025. </div> </footer> <script src="https://cdn.jsdelivr.net/npm/jquery@3.6.0/dist/jquery.min.js" integrity="sha256-/xUj+3OJU5yExlq6GSYGSHk7tPXikynS7ogEvDej/m4=" crossorigin="anonymous"></script> <script src="/assets/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.jsdelivr.net/npm/mdbootstrap@4.20.0/js/mdb.min.js" integrity="sha256-NdbiivsvWt7VYCt6hYNT3h/th9vSTL4EDWeGs5SN3DA=" crossorigin="anonymous"></script> <script defer src="https://cdn.jsdelivr.net/npm/masonry-layout@4.2.2/dist/masonry.pkgd.min.js" integrity="sha256-Nn1q/fx0H7SNLZMQ5Hw5JLaTRZp0yILA/FRexe19VdI=" crossorigin="anonymous"></script> <script defer src="https://cdn.jsdelivr.net/npm/imagesloaded@5.0.0/imagesloaded.pkgd.min.js" integrity="sha256-htrLFfZJ6v5udOG+3kNLINIKh2gvoKqwEhHYfTTMICc=" crossorigin="anonymous"></script> <script defer src="/assets/js/masonry.js" type="text/javascript"></script> <script defer src="https://cdn.jsdelivr.net/npm/medium-zoom@1.1.0/dist/medium-zoom.min.js" integrity="sha256-ZgMyDAIYDYGxbcpJcfUnYwNevG/xi9OHKaR/8GK+jWc=" crossorigin="anonymous"></script> <script defer src="/assets/js/zoom.js?85ddb88934d28b74e78031fd54cf8308"></script> <script src="/assets/js/no_defer.js?2781658a0a2b13ed609542042a859126"></script> <script defer src="/assets/js/common.js?e0514a05c5c95ac1a93a8dfd5249b92e"></script> <script defer src="/assets/js/copy_code.js?12775fdf7f95e901d7119054556e495f" type="text/javascript"></script> <script defer src="/assets/js/jupyter_new_tab.js?d9f17b6adc2311cbabd747f4538bb15f"></script> <script async src="https://d1bxh8uas1mnw7.cloudfront.net/assets/embed.js"></script> <script async src="https://badge.dimensions.ai/badge.js"></script> <script type="text/javascript">window.MathJax={tex:{tags:"ams"}};</script> <script defer type="text/javascript" id="MathJax-script" src="https://cdn.jsdelivr.net/npm/mathjax@3.2.0/es5/tex-mml-chtml.min.js" integrity="sha256-rjmgmaB99riUNcdlrDtcAiwtLIojSxNyUFdl+Qh+rB4=" 
crossorigin="anonymous"></script> <script defer src="https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js?features=es6" crossorigin="anonymous"></script> <script type="text/javascript">function progressBarSetup(){"max"in document.createElement("progress")?(initializeProgressElement(),$(document).on("scroll",function(){progressBar.attr({value:getCurrentScrollPosition()})}),$(window).on("resize",initializeProgressElement)):(resizeProgressBar(),$(document).on("scroll",resizeProgressBar),$(window).on("resize",resizeProgressBar))}function getCurrentScrollPosition(){return $(window).scrollTop()}function initializeProgressElement(){let e=$("#navbar").outerHeight(!0);$("body").css({"padding-top":e}),$("progress-container").css({"padding-top":e}),progressBar.css({top:e}),progressBar.attr({max:getDistanceToScroll(),value:getCurrentScrollPosition()})}function getDistanceToScroll(){return $(document).height()-$(window).height()}function resizeProgressBar(){progressBar.css({width:getWidthPercentage()+"%"})}function getWidthPercentage(){return getCurrentScrollPosition()/getDistanceToScroll()*100}const progressBar=$("#progress");window.onload=function(){setTimeout(progressBarSetup,50)};</script> <script src="/assets/js/shortcut-key.js"></script> </body> </html>
