<!DOCTYPE html> <html lang="en" class="no-js"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes"> <title>Search</title> <meta id="meta-title" property="citation_title" content="Search"/> <meta id="og-title" property="og:title" content="Search"/> <meta name="twitter:widgets:autoload" content="off"/> <meta name="twitter:dnt" content="on"/> <meta name="twitter:widgets:csp" content="on"/> <meta name="google-site-verification" content="lQbRRf0vgPqMbnbCsgELjAjIIyJjiIWo917M7hBshvI"/> <meta id="og-image" property="og:image" content="https://escholarship.org/images/escholarship-facebook2.jpg"/> <meta id="og-image-width" property="og:image:width" content="1242"/> <meta id="og-image-height" property="og:image:height" content="1242"/> <link rel="stylesheet" href="/css/main-6e346ed4504727cd.css"> <link rel="resource" type="application/l10n" href="/node_modules/pdfjs-embed2/dist/locale/locale.properties"> <noscript><style> .jsonly { display: none } </style></noscript> <!-- Matomo --> <!-- TBD Configure Matomo for SPA https://developer.matomo.org/guides/spa-tracking --> <script type="text/plain" data-type="application/javascript" data-name="matomo"> var _paq = window._paq = window._paq || []; /* tracker methods like "setCustomDimension" should be called before "trackPageView" */ _paq.push(['trackPageView']); _paq.push(['enableLinkTracking']); (function() { var u="//matomo.cdlib.org/"; _paq.push(['setTrackerUrl', u+'matomo.php']); _paq.push(['setSiteId', '7']); var d=document, g=d.createElement('script'), s=d.getElementsByTagName('script')[0]; g.async=true; g.src=u+'matomo.js'; s.parentNode.insertBefore(g,s); console.log('*** MATOMO LOADED ***'); })(); </script> <!-- End Matomo Code --> </head> <body> <div id="main"><div data-reactroot=""><div class="body"><a href="#maincontent" class="c-skipnav">Skip to main content</a><div class="l_search"><div><div style="margin-top:-10px"></div><header id="#top" 
class="c-header"><a class="c-header__logo2" href="/"><picture><source srcSet="/images/logo_eschol-small.svg" media="(min-width: 870px)"/><img src="/images/logo_eschol-mobile.svg" alt="eScholarship"/></picture><div class="c-header__logo2-tagline">Open Access Publications from the University of California</div></a><div class="c-header__search"><form class="c-search1"><label class="c-search1__label" for="c-search1__field">search</label><input type="search" id="c-search1__field" name="q" class="c-search1__field" placeholder="Search over 500,000 items" autoCapitalize="off" value="author:Kong, Zhaodan"/><button type="submit" class="c-search1__submit-button" aria-label="submit search"></button><button type="button" class="c-search1__search-close-button" aria-label="close search field"></button></form></div><button class="c-header__search-open-button" aria-label="open search field"></button></header></div><div class="c-navbar"><nav class="c-nav"><details open="" class="c-nav__main"><summary class="c-nav__main-button">Menu</summary><ul class="c-nav__main-items"><li><details class="c-nav__sub"><summary class="c-nav__sub-button">About eScholarship</summary><div class="c-nav__sub-items"><button class="c-nav__sub-items-button" aria-label="return to menu">Main Menu</button><ul><li><a href="/aboutEschol">About eScholarship</a></li><li><a href="/repository">eScholarship Repository</a></li><li><a href="/publishing">eScholarship Publishing</a></li><li><a href="/policies">Site policies</a></li><li><a href="/terms">Terms of Use and Copyright Information</a></li><li><a href="/privacyPolicy">Privacy statement</a></li></ul></div></details></li><li><details class="c-nav__sub"><summary class="c-nav__sub-button">Campus Sites</summary><div class="c-nav__sub-items"><button class="c-nav__sub-items-button" aria-label="return to menu">Main Menu</button><ul><li><a href="/uc/ucb">UC Berkeley</a></li><li><a href="/uc/ucd">UC Davis</a></li><li><a href="/uc/uci">UC Irvine</a></li><li><a 
href="/uc/ucla">UCLA</a></li><li><a href="/uc/ucm">UC Merced</a></li><li><a href="/uc/ucr">UC Riverside</a></li><li><a href="/uc/ucsd">UC San Diego</a></li><li><a href="/uc/ucsf">UCSF</a></li><li><a href="/uc/ucsb">UC Santa Barbara</a></li><li><a href="/uc/ucsc">UC Santa Cruz</a></li><li><a href="/uc/ucop">UC Office of the President</a></li><li><a href="/uc/lbnl">Lawrence Berkeley National Laboratory</a></li><li><a href="/uc/anrcs">UC Agriculture &amp; Natural Resources</a></li></ul></div></details></li><li><a href="/ucoapolicies">UC Open Access Policies</a></li><li><a href="/publishing">eScholarship Publishing</a></li><li><a href="/ucpubs">UCPUBS</a></li></ul></details></nav></div><form id="facetForm" class="c-columns"><aside><div><div class="c-filter"><h1 class="c-filter__heading">Your search: "author:Kong, Zhaodan"</h1><input type="hidden" name="q" value="author:Kong, Zhaodan"/><div class="c-filter__results">19<!-- --> results</div><div class="c-filter__inactive-note">No filters applied</div><details class="c-filter__active" open=""><summary><span><strong></strong> filter<!-- -->s<!-- --> applied</span></summary><button class="c-filter__clear-all">clear all</button><ul class="c-filter__active-list"></ul></details><a href="https://help.escholarship.org/support/solutions/articles/9000148939-using-advanced-search-beta-" class="c-filter__tips">Search tips</a></div><div class="c-refine--has-drawer"><button class="c-refine__button--open">Refine Results</button><button class="c-refine__button--close" hidden="">Back to Results</button><div class="c-refine__drawer--closed"><details class="c-facetbox" open=""><summary class="c-facetbox__summary"><span id="facetbox0">Type of Work</span></summary><fieldset aria-labelledby="facetbox0"><ul class="c-checkbox"><li class=""><input type="checkbox" id="type_of_work-article" class="c-checkbox__input" name="type_of_work" value="article"/><label for="type_of_work-article" class="c-checkbox__label">Article<!-- --> (<!-- -->15<!-- 
-->)</label></li><li class=""><input type="checkbox" id="type_of_work-monograph" class="c-checkbox__input" name="type_of_work" value="monograph"/><label for="type_of_work-monograph" class="c-checkbox__label">Book<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="type_of_work-dissertation" class="c-checkbox__input" name="type_of_work" value="dissertation"/><label for="type_of_work-dissertation" class="c-checkbox__label">Theses<!-- --> (<!-- -->4<!-- -->)</label></li><li class=""><input type="checkbox" id="type_of_work-multimedia" class="c-checkbox__input" name="type_of_work" value="multimedia"/><label for="type_of_work-multimedia" class="c-checkbox__label">Multimedia<!-- --> (<!-- -->0<!-- -->)</label></li></ul></fieldset></details><details class="c-facetbox" open=""><summary class="c-facetbox__summary"><span id="facetbox1">Peer Review</span></summary><fieldset aria-labelledby="facetbox1"><ul class="c-checkbox"><li class=""><input type="checkbox" id="peer_reviewed-1" class="c-checkbox__input" name="peer_reviewed" value="1"/><label for="peer_reviewed-1" class="c-checkbox__label">Peer-reviewed only<!-- --> (<!-- -->19<!-- -->)</label></li></ul></fieldset></details><details class="c-facetbox"><summary class="c-facetbox__summary"><span id="facetbox2">Supplemental Material</span></summary><fieldset aria-labelledby="facetbox2"><ul class="c-checkbox--2column"><li class=""><input type="checkbox" id="supp_file_types-video" class="c-checkbox__input" name="supp_file_types" value="video"/><label for="supp_file_types-video" class="c-checkbox__label">Video<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="supp_file_types-audio" class="c-checkbox__input" name="supp_file_types" value="audio"/><label for="supp_file_types-audio" class="c-checkbox__label">Audio<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="supp_file_types-images" class="c-checkbox__input" name="supp_file_types" 
value="images"/><label for="supp_file_types-images" class="c-checkbox__label">Images<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="supp_file_types-zip" class="c-checkbox__input" name="supp_file_types" value="zip"/><label for="supp_file_types-zip" class="c-checkbox__label">Zip<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="supp_file_types-other_files" class="c-checkbox__input" name="supp_file_types" value="other files"/><label for="supp_file_types-other_files" class="c-checkbox__label">Other files<!-- --> (<!-- -->0<!-- -->)</label></li></ul></fieldset></details><details class="c-facetbox"><summary class="c-facetbox__summary"><span id="facetbox3">Publication Year</span></summary><fieldset aria-labelledby="facetbox3"><div class="c-pubyear"><div class="c-pubyear__field"><label for="c-pubyear__textfield1">From:</label><input type="text" id="c-pubyear__textfield1" name="pub_year_start" maxLength="4" placeholder="1900" value=""/></div><div class="c-pubyear__field"><label for="c-pubyear__textfield2">To:</label><input type="text" id="c-pubyear__textfield2" name="pub_year_end" maxLength="4" placeholder="2025" value=""/></div><button class="c-pubyear__button">Apply</button></div></fieldset></details><details class="c-facetbox"><summary class="c-facetbox__summary"><span id="facetbox4">Campus</span></summary><fieldset aria-labelledby="facetbox4"><ul class="c-checkbox"><li class=""><input type="checkbox" id="campuses-ucb" class="c-checkbox__input" name="campuses" value="ucb"/><label for="campuses-ucb" class="c-checkbox__label">UC Berkeley<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-ucd" class="c-checkbox__input" name="campuses" value="ucd"/><label for="campuses-ucd" class="c-checkbox__label">UC Davis<!-- --> (<!-- -->19<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-uci" class="c-checkbox__input" name="campuses" value="uci"/><label 
for="campuses-uci" class="c-checkbox__label">UC Irvine<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-ucla" class="c-checkbox__input" name="campuses" value="ucla"/><label for="campuses-ucla" class="c-checkbox__label">UCLA<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-ucm" class="c-checkbox__input" name="campuses" value="ucm"/><label for="campuses-ucm" class="c-checkbox__label">UC Merced<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-ucr" class="c-checkbox__input" name="campuses" value="ucr"/><label for="campuses-ucr" class="c-checkbox__label">UC Riverside<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-ucsd" class="c-checkbox__input" name="campuses" value="ucsd"/><label for="campuses-ucsd" class="c-checkbox__label">UC San Diego<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-ucsf" class="c-checkbox__input" name="campuses" value="ucsf"/><label for="campuses-ucsf" class="c-checkbox__label">UCSF<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-ucsb" class="c-checkbox__input" name="campuses" value="ucsb"/><label for="campuses-ucsb" class="c-checkbox__label">UC Santa Barbara<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-ucsc" class="c-checkbox__input" name="campuses" value="ucsc"/><label for="campuses-ucsc" class="c-checkbox__label">UC Santa Cruz<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-ucop" class="c-checkbox__input" name="campuses" value="ucop"/><label for="campuses-ucop" class="c-checkbox__label">UC Office of the President<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-lbnl" class="c-checkbox__input" name="campuses" value="lbnl"/><label for="campuses-lbnl" class="c-checkbox__label">Lawrence 
Berkeley National Laboratory<!-- --> (<!-- -->0<!-- -->)</label></li><li class=""><input type="checkbox" id="campuses-anrcs" class="c-checkbox__input" name="campuses" value="anrcs"/><label for="campuses-anrcs" class="c-checkbox__label">UC Agriculture &amp; Natural Resources<!-- --> (<!-- -->0<!-- -->)</label></li></ul></fieldset></details><details class="c-facetbox"><summary class="c-facetbox__summary"><span id="facetbox5">Department</span></summary><fieldset aria-labelledby="facetbox5"><ul class="c-checkbox"></ul></fieldset></details><details class="c-facetbox"><summary class="c-facetbox__summary"><span id="facetbox6">Journal</span></summary><fieldset aria-labelledby="facetbox6"><ul class="c-checkbox"></ul></fieldset></details><details class="c-facetbox"><summary class="c-facetbox__summary"><span id="facetbox7">Discipline</span></summary><fieldset aria-labelledby="facetbox7"><ul class="c-checkbox"></ul></fieldset></details><details class="c-facetbox"><summary class="c-facetbox__summary"><span id="facetbox8">Reuse License</span></summary><fieldset aria-labelledby="facetbox8"><ul class="c-checkbox"><li class="c-checkbox__attrib-cc-by"><input type="checkbox" id="rights-CC_BY" class="c-checkbox__input" name="rights" value="CC BY"/><label for="rights-CC_BY" class="c-checkbox__label">BY - Attribution required<!-- --> (<!-- -->4<!-- -->)</label></li></ul></fieldset></details></div></div><button type="submit" id="facet-form-submit" style="display:none">Search</button></div></aside><main id="maincontent"><section class="o-columnbox1"><header><h2 class="o-columnbox1__heading" aria-live="polite">Scholarly Works (<!-- -->19 results<!-- -->)</h2></header><div class="c-sortpagination"><div class="c-sort"><div class="o-input__droplist1"><label for="c-sort1">Sort By:</label><select name="sort" id="c-sort1" form="facetForm"><option selected="" value="rel">Relevance</option><option value="a-title">A-Z By Title</option><option value="z-title">Z-A By Title</option><option 
value="a-author">A-Z By Author</option><option value="z-author">Z-A By Author</option><option value="asc">Date Ascending</option><option value="desc">Date Descending</option></select></div><div class="o-input__droplist1 c-sort__page-input"><label for="c-sort2">Show:</label><select name="rows" id="c-sort2" form="facetForm"><option selected="" value="10">10</option><option value="20">20</option></select></div></div><input type="hidden" name="start" form="facetForm" value="0"/><nav class="c-pagination"><ul><li><a href="" aria-label="you are on result set 1" class="c-pagination__item--current">1</a></li><li><a href="" aria-label="go to result set 2" class="c-pagination__item">2</a></li></ul></nav></div><section class="c-scholworks"><div class="c-scholworks__main-column"><ul class="c-scholworks__tag-list"><li class="c-scholworks__tag-thesis">Thesis</li><li class="c-scholworks__tag-peer">Peer Reviewed</li></ul><div><h3 class="c-scholworks__heading"><a href="/uc/item/5mh206ch"><div class="c-clientmarkup">Demonstration of a digital twin of a laser ablated aluminum alloy 6061 disk for fault detection and process control.</div></a></h3></div><div class="c-authorlist"><ul class="c-authorlist__list"><li class="c-authorlist__begin"><a href="/search/?q=author%3AMatthews%2C%20Thomas">Matthews, Thomas</a> </li><li class="c-authorlist__begin"><span class="c-authorlist__heading">Advisor(s):</span> <a href="/search/?q=author%3AKong%2C%20Zhaodan">Kong, Zhaodan</a> </li></ul></div><div class="c-scholworks__publication"><a href="/uc/ucd_etd">UC Davis Electronic Theses and Dissertations</a> (<!-- -->2023<!-- -->)</div><div class="c-scholworks__abstract"><div class="c-clientmarkup"><p>With recent advances in computing, communication, sensor, and actuator technologies, it has become possible to create virtual representations of physical objects that can communicate with the real world. 
These connected virtual representations are a new technology, called digital twins, with the possibility to transform numerous industries including manufacturing, aerospace, and medicine. While there are many examples of digital twins already, the current body of work is insufficient, with the majority of the published work on digital twins featuring frameworks or demonstrations that lack the physical component. A set of functional digital twins of laser ablated disks is presented here to demonstrate how one can be used in a machining process. The virtual representations of the disks are a geometric model built in COMSOL Multiphysics® that defines the shape of the disk. The digital twins showcased are able to detect potential faults and control the ablation process. The fault detection was done using images of the plasma plumes generated during the process, and modifications to the virtual representation were used to guide the amount of energy used by the laser ablation system to remove material from the physical disk's surface. 
In addition to demonstrating how the digital twins can be used as part of the process, three test cases are presented to display how the digital twins compared to their physical counterparts.</p></div></div><div class="c-scholworks__media"><ul class="c-medialist"></ul></div></div><div class="c-scholworks__ancillary"><a class="c-scholworks__thumbnail" href="/uc/item/5mh206ch"><img src="/cms-assets/7a3d9a9c6c0f5ff33e2f192ce195ce31145e534c6f86aea2b6e206278bbec5e0" alt="Cover page: Demonstration of a digital twin of a laser ablated aluminum alloy 6061 disk for fault detection and process control."/></a></div></section><section class="c-scholworks"><div class="c-scholworks__main-column"><ul class="c-scholworks__tag-list"><li class="c-scholworks__tag-article">Article</li><li class="c-scholworks__tag-peer">Peer Reviewed</li></ul><div><h3 class="c-scholworks__heading"><a href="/uc/item/0jv1b1m8"><div class="c-clientmarkup">Multi-Agent Cooperative Pursuit-Defense Strategy Against One Single Attacker</div></a></h3></div><div class="c-authorlist"><ul class="c-authorlist__list"><li class="c-authorlist__begin"><a href="/search/?q=author%3ADeng%2C%20Ziquan">Deng, Ziquan</a>; </li><li class="c-authorlist__end"><a href="/search/?q=author%3AKong%2C%20Zhaodan">Kong, Zhaodan</a> </li></ul></div><div class="c-scholworks__publication"><a href="/uc/ucd_postprints">UC Davis Previously Published Works</a> (<!-- -->2020<!-- -->)</div><div class="c-scholworks__abstract"><div class="c-clientmarkup">Multiple-player games involving cooperative and adversarial agents are a type of problems of great practical significance. In this letter, we consider an attack-defense game with a single attacker and multiple defenders. The attacker attempts to enter a protected region, while the defenders attempt to defend the same region and capture the attacker outside the region. We propose a distributed pursuit-defense strategy for the defenders' cooperative defense against the attacker. 
Inside a bounded, convex, two-dimensional space, the defenders choose among an area-decreasing, a distance-decreasing, or a pursuing strategy. We prove that our strategy guarantees the attacker to be captured before entering the protected region in a finite time. We also demonstrate with simulations that a human-controlled attacker is unable to enter a protected region when multiple defenders are using our pursuit-defense strategy.</div></div><div class="c-scholworks__media"><ul class="c-medialist"></ul></div></div><div class="c-scholworks__ancillary"><a class="c-scholworks__thumbnail" href="/uc/item/0jv1b1m8"><img src="/cms-assets/36d6cee3b4f07f55e8ed6a9a01e3e01ec7d412ef043102b4b9e212df62a6a06d" alt="Cover page: Multi-Agent Cooperative Pursuit-Defense Strategy Against One Single Attacker"/></a></div></section><section class="c-scholworks"><div class="c-scholworks__main-column"><ul class="c-scholworks__tag-list"><li class="c-scholworks__tag-thesis">Thesis</li><li class="c-scholworks__tag-peer">Peer Reviewed</li></ul><div><h3 class="c-scholworks__heading"><a href="/uc/item/6zb8r46w"><div class="c-clientmarkup">Vision-Based Unmanned Aerial Vehicle Navigation in Virtual Complex Environment using Deep Reinforcement Learning</div></a></h3></div><div class="c-authorlist"><ul class="c-authorlist__list"><li class="c-authorlist__begin"><a href="/search/?q=author%3ALiang%2C%20Jiawei">Liang, Jiawei</a> </li><li class="c-authorlist__begin"><span class="c-authorlist__heading">Advisor(s):</span> <a href="/search/?q=author%3AKong%2C%20Zhaodan">Kong, Zhaodan</a> </li></ul></div><div class="c-scholworks__publication"><a href="/uc/ucd_etd">UC Davis Electronic Theses and Dissertations</a> (<!-- -->2021<!-- -->)</div><div class="c-scholworks__abstract"><div class="c-clientmarkup"><p>From driverless vehicles to Mars rovers, autonomous navigation and task-taking are undoubtedly the future of robotics. 
Over the recent years, research in Deep Reinforcement Learning (DRL) has grown in many areas of navigation including unmanned aerial vehicles (UAVs). Most of them are far from realistic and assume perfect state observations. In many real-life scenarios like search and rescue in a complex and cluttered environment, GPS-denial tends to become a problem. Therefore, this research is interested in vision-based navigation and obstacle avoidance in realistic environments.</p><p>More specifically, this thesis aims to address the following research tasks: 1) To investigate the vision-based navigation of UAV in GPS-denied synthetic environments. This work will utilize a Variational Autoencoder (VAE) to improve sample efficiency, and develop a Proximal Policy Optimization (PPO) agent that can trace rivers in photo-realistic simulations. 2) To conduct reward shaping to deal with vision-based problems. Developing the correct reward function leads to the desired agent behavior, but it is a challenging task in vision-based learning. 3) To validate the PPO agent performance and compare it with another agent trained with imitation learning (IL). 
The evaluation metrics include the average distance traveled per episode, distance away from center of river, and standard deviation of actions taken.</p></div></div><div class="c-scholworks__media"><ul class="c-medialist"></ul></div></div><div class="c-scholworks__ancillary"><a class="c-scholworks__thumbnail" href="/uc/item/6zb8r46w"><img src="/cms-assets/e96edebb5d69e3adb937b092498dde7fd86b07718c50e439e9bd0b203616bba7" alt="Cover page: Vision-Based Unmanned Aerial Vehicle Navigation in Virtual Complex Environment using Deep Reinforcement Learning"/></a></div></section><section class="c-scholworks"><div class="c-scholworks__main-column"><ul class="c-scholworks__tag-list"><li class="c-scholworks__tag-thesis">Thesis</li><li class="c-scholworks__tag-peer">Peer Reviewed</li></ul><div><h3 class="c-scholworks__heading"><a href="/uc/item/6735d0s8"><div class="c-clientmarkup">A Cognitively Informed and Network Based Investigation of Human Neural Activities, Behaviors, and Performance in Human-Autonomy Teaming Tasks</div></a></h3></div><div class="c-authorlist"><ul class="c-authorlist__list"><li class="c-authorlist__begin"><a href="/search/?q=author%3ABales%2C%20Gregory">Bales, Gregory</a> </li><li class="c-authorlist__begin"><span class="c-authorlist__heading">Advisor(s):</span> <a href="/search/?q=author%3AKong%2C%20Zhaodan">Kong, Zhaodan</a> </li></ul></div><div class="c-scholworks__publication"><a href="/uc/ucd_etd">UC Davis Electronic Theses and Dissertations</a> (<!-- -->2023<!-- -->)</div><div class="c-scholworks__abstract"><div class="c-clientmarkup"><p>Human-autonomy teams are expected to provide solutions in a wide range of applications, such as human directed search and rescue, hazard containment and mobilization, and space exploration. These teams consist of autonomous agents that coordinate their actions with the human partner to achieve common goals. 
Despite the advancements of current autonomous systems, it is the human's ability to engage their knowledge and expertise that makes human-autonomy teams especially effective in tasks dominated by dynamic and uncertain conditions. The human and their autonomous teammate should have shared plans and a similar focus of attention. However, studies have shown that a human's miscomprehension of an autonomous system's state, decisions, or course of action can result in misuse or disuse of the agent, causing a reduction in team performance. The aim of this dissertation is to improve human-autonomy team task proficiency by investigating methods to measure changes in human cognitive state as reflected in neurophysiological measures using methods derived from network science. This work is comprised of two primary studies. In the first study, we examined human behaviors and brain activity acquired via electroencephalography (EEG) to probe the interactions between cognitive processes, behaviors, and performance in a human-multiagent team task. We showed that measurable changes in brain activity indicate a higher burden on the cognitive resources associated with visual-spatial reasoning required to estimate a more complex kinematic state of robotic agents. These conclusions were reinforced by complementary behavioral shifts in gaze and pilot inputs. Next, we showed that EEG inter-channel connectivity network metrics distinguish gaze behaviors associated with the attention process more effectively than traditional single-channel features. In the second study we explored the relationship between neurophysiological features and human trust in an autonomous system while performing a team task. Trust prediction models were constructed using a variety of feature types determined from an EEG timeseries. 
A comparison of model performance between traditional EEG signal powers with inter-channel connectivity network metrics revealed that measures of dynamic changes in synchronous behavior between distant brain regions can capture cognitive activities that predict a human's trust in an autonomous system. We showed that both single-channel powers and network-metrics defined from brain regions associated with reasoning and attention have the greatest impact on trust prediction. In a third study, we explore the interaction between behaviors and performance for subjects of various skills in a manual grinding task. We show that there were observable and distinguishable sensorimotor behaviors associated with two distinct techniques utilized by the individual subjects, and that task performance is affected by these techniques.</p></div></div><div class="c-scholworks__media"><ul class="c-medialist"></ul></div></div><div class="c-scholworks__ancillary"><a class="c-scholworks__thumbnail" href="/uc/item/6735d0s8"><img src="/cms-assets/9f674c943b64e66095a7c0b92d431c8e76ab03fa8ed6a54c5883fcbf699f72d9" alt="Cover page: A Cognitively Informed and Network Based Investigation of Human Neural Activities, Behaviors, and Performance in Human-Autonomy Teaming Tasks"/></a></div></section><section class="c-scholworks"><div class="c-scholworks__main-column"><ul class="c-scholworks__tag-list"><li class="c-scholworks__tag-thesis">Thesis</li><li class="c-scholworks__tag-peer">Peer Reviewed</li></ul><div><h3 class="c-scholworks__heading"><a href="/uc/item/7qf3r2f3"><div class="c-clientmarkup">Data-Driven Modeling and High-Performance Control of Multirotor Unmanned Aerial Vehicles in Challenging Environments</div></a></h3></div><div class="c-authorlist"><ul class="c-authorlist__list"><li class="c-authorlist__begin"><a href="/search/?q=author%3AWei%2C%20Peng">Wei, Peng</a> </li><li class="c-authorlist__begin"><span class="c-authorlist__heading">Advisor(s):</span> <a 
href="/search/?q=author%3AKong%2C%20Zhaodan">Kong, Zhaodan</a> </li></ul></div><div class="c-scholworks__publication"><a href="/uc/ucd_etd">UC Davis Electronic Theses and Dissertations</a> (<!-- -->2023<!-- -->)</div><div class="c-scholworks__abstract"><div class="c-clientmarkup"><p>Multirotor unmanned aerial vehicles (UAVs) have gained significant popularity in recent years due to their high maneuverability and vertical take-off and landing capability. The new roles require that the future multirotor UAVs will need to fly in a variety of challenging environments and the flight performance may significantly degrade due to the shift from nominal flight conditions. As their usage expands to increasingly challenging environments, the need for reliable and high-performance flight behavior becomes more pressing. The dissertation addresses these difficulties through a series of research efforts aimed at improving the overall flight performance of multirotor UAVs in challenging conditions. </p><p>First, an experimental study was conducted to identify a data-driven ground effect model for a small quadcopter, which takes into account the interference among the rotors and was validated through flight experiments. An adaptive control scheme was then developed to counter the model uncertainty resulting from the complex aerodynamics, leading to improved command tracking performance when the UAV is in the ground effect region. The effectiveness of the developed controller was demonstrated on a real quadcopter, with results showing superior performance compared to a traditional PID controller. </p><p>Second, the effect of wind on a hovering octocopter was investigated and modeled through field experiments. A data-driven approach was used to model the wind effects on the bare airframe by directly measuring the wind and including it as a control input. 
A state space model that explicitly considers the wind effect was identified from real flight data using a system identification approach. The validation results show that a significant error reduction can be achieved by considering wind effects and adding a correction term. The identified model can serve as a foundation for the future development of model-based controllers for outdoor multirotor aircraft, enhancing their flight performance in windy conditions. </p><p>Lastly, a vision-based control solution was developed in order to navigate the UAVs inside complex, unstructured, and GPS-denied environments. The proposed solution leverages imitation learning and a variational autoencoder neural network to enable the autonomous agent to learn reactive strategies from human experience effectively and efficiently. The learning framework and the developed controller were demonstrated in simulated riverine environments first and then validated in a real orchard on a custom-built quadcopter, with results outperforming existing baseline algorithms. 
The proposed vision-based control solution is expected to significantly enhance the performance of multirotor UAVs in complex and GPS-denied environments, where traditional navigation methods may not be applicable.</p></div></div><div class="c-scholworks__media"><ul class="c-medialist"></ul></div></div><div class="c-scholworks__ancillary"><a class="c-scholworks__thumbnail" href="/uc/item/7qf3r2f3"><img src="/cms-assets/4408c3c4e3692ea85f4170e8621c1b91daa95d16b4d13f699e9176d0a1751811" alt="Cover page: Data-Driven Modeling and High-Performance Control of Multirotor Unmanned Aerial Vehicles in Challenging Environments"/></a></div></section><section class="c-scholworks"><div class="c-scholworks__main-column"><ul class="c-scholworks__tag-list"><li class="c-scholworks__tag-article">Article</li><li class="c-scholworks__tag-peer">Peer Reviewed</li></ul><div><h3 class="c-scholworks__heading"><a href="/uc/item/4wc4n1pw"><div class="c-clientmarkup">Formal interpretation of cyber-physical system performance with temporal logic</div></a></h3></div><div class="c-authorlist"><ul class="c-authorlist__list"><li class="c-authorlist__begin"><a href="/search/?q=author%3AChen%2C%20Gang">Chen, Gang</a>; </li><li><a href="/search/?q=author%3ASabato%2C%20Zachary">Sabato, Zachary</a>; </li><li class="c-authorlist__end"><a href="/search/?q=author%3AKong%2C%20Zhaodan">Kong, Zhaodan</a> </li></ul></div><div class="c-scholworks__publication"><a href="/uc/ucd_postprints">UC Davis Previously Published Works</a> (<!-- -->2018<!-- -->)</div><div class="c-scholworks__abstract"><div class="c-clientmarkup">The inherent and increasing complexity of many cyber-physical systems (CPSs) makes it challenging for human users or designers to comprehend and interpret their performance. This issue, without proper attention paid, may lead to unwanted and even catastrophic consequences, particularly with safety-critical CPSs. 
This paper presents a new methodology of enabling (i) a human to interrogate a CPS by inquiring with questions written in formal logic and (ii) the CPS to interpret its performance precisely in the context of the inquiry. This formal interpretation problem is first formulated as a temporal logic inference problem, which, aided by the concept of robustness degree, can be converted into an optimisation problem with probably approximately correct solutions. A new Gaussian-process-based active learning algorithm is then proposed to address the potential computational budget issue arising from solving the optimisation problem. Both theoretical and empirical analyses are carried out to demonstrate the performance of the proposed algorithm. Finally, a detailed case study on automotive mechatronic design is provided to showcase the proposed formal interpretation methodology.</div></div><div class="c-scholworks__media"><ul class="c-medialist"></ul></div></div><div class="c-scholworks__ancillary"><a class="c-scholworks__thumbnail" href="/uc/item/4wc4n1pw"><img src="/cms-assets/01829dfdba71fea37835a8cd284f1d2946d2081dbc94b31b5b36b92bd47b8431" alt="Cover page: Formal interpretation of cyber-physical system performance with temporal logic"/></a></div></section><section class="c-scholworks"><div class="c-scholworks__main-column"><ul class="c-scholworks__tag-list"><li class="c-scholworks__tag-article">Article</li><li class="c-scholworks__tag-peer">Peer Reviewed</li></ul><div><h3 class="c-scholworks__heading"><a href="/uc/item/3174b5wf"><div class="c-clientmarkup">Data-Driven Real-Valued Timed-Failure-Propagation-Graph Refinement for Complex System Fault Diagnosis</div></a></h3></div><div class="c-authorlist"><ul class="c-authorlist__list"><li class="c-authorlist__begin"><a href="/search/?q=author%3AChen%2C%20Gang">Chen, Gang</a>; </li><li><a href="/search/?q=author%3ALin%2C%20Xinfan">Lin, Xinfan</a>; </li><li class="c-authorlist__end"><a 
href="/search/?q=author%3AKong%2C%20Zhaodan">Kong, Zhaodan</a> </li></ul></div><div class="c-scholworks__publication"><a href="/uc/ucd_postprints">UC Davis Previously Published Works</a> (<!-- -->2021<!-- -->)</div><div class="c-scholworks__abstract"><div class="c-clientmarkup">Timed Failure Propagation Graphs (TFPGs) have been widely used for the failure modeling and diagnosis of safety-critical systems. Currently most TFPGs are manually constructed by system experts, a process that can be time-consuming, error-prone, and even impossible for systems with highly nonlinear and machine-learning-based components. This letter proposes a new type of TFPGs, called Real-Valued Timed Failure Propagation Graphs (rTFPGs), designed for continuous-state systems. More importantly, it presents a systematic way of constructing rTFPGs by combining the powers of human experts and data-driven methods: first, an expert constructs a partial rTFPG based on his/her expertise; then a data-driven algorithm refines the rTFPG by adding nodes and edges based on a given set of labeled signals. 
The proposed approach has been successfully implemented and evaluated on three case studies.</div></div><div class="c-scholworks__media"><ul class="c-medialist"></ul></div></div><div class="c-scholworks__ancillary"><a class="c-scholworks__thumbnail" href="/uc/item/3174b5wf"><img src="/cms-assets/4e7cffd5f0dc8474213fea4ed7bd0548083e6444e2a38818d3b6ab9caf33bd81" alt="Cover page: Data-Driven Real-Valued Timed-Failure-Propagation-Graph Refinement for Complex System Fault Diagnosis"/></a></div></section><section class="c-scholworks"><div class="c-scholworks__main-column"><ul class="c-scholworks__tag-list"><li class="c-scholworks__tag-article">Article</li><li class="c-scholworks__tag-peer">Peer Reviewed</li></ul><div><h3 class="c-scholworks__heading"><a href="/uc/item/8j0502xs"><div class="c-clientmarkup">Temporal Logics for Learning and Detection of Anomalous Behavior</div></a></h3></div><div class="c-authorlist"><ul class="c-authorlist__list"><li class="c-authorlist__begin"><a href="/search/?q=author%3AKong%2C%20Zhaodan">Kong, Zhaodan</a>; </li><li><a href="/search/?q=author%3AJones%2C%20Austin">Jones, Austin</a>; </li><li class="c-authorlist__end"><a href="/search/?q=author%3ABelta%2C%20Calin">Belta, Calin</a> </li></ul></div><div class="c-scholworks__publication"><a href="/uc/ucd_postprints">UC Davis Previously Published Works</a> (<!-- -->2017<!-- -->)</div><div class="c-scholworks__abstract"><div class="c-clientmarkup">The increased complexity of modern systems necessitates automated anomaly detection methods to detect possible anomalous behavior determined by malfunctions or external attacks. We present formal methods for inferring (via supervised learning) and detecting (via unsupervised learning) anomalous behavior. Our procedures use data to construct a signal temporal logic (STL) formula that describes normal system behavior. 
This logic can be used to formulate properties such as 'If the train brakes within 500 m of the platform at a speed of 50 km/hr, then it will stop in at least 30 s and at most 50 s.' Our procedure infers not only the physical parameters involved in the formula (e.g., 500 m in the example above) but also its logical structure. STL gives a more human-readable representation of behavior than classifiers represented as surfaces in high-dimensional feature spaces. The learned formula enables us to perform early detection by using monitoring techniques and anomaly mitigation by using formal synthesis techniques. We demonstrate the power of our methods with examples of naval surveillance and a train braking system.</div></div><div class="c-scholworks__media"><ul class="c-medialist"></ul></div></div><div class="c-scholworks__ancillary"><a class="c-scholworks__thumbnail" href="/uc/item/8j0502xs"><img src="/cms-assets/8583b9e69bcad48f9e5719fa5b10f48c63625115bba46f62707ac490ffab3351" alt="Cover page: Temporal Logics for Learning and Detection of Anomalous Behavior"/></a></div></section><section class="c-scholworks"><div class="c-scholworks__main-column"><ul class="c-scholworks__tag-list"><li class="c-scholworks__tag-article">Article</li><li class="c-scholworks__tag-peer">Peer Reviewed</li></ul><div><h3 class="c-scholworks__heading"><a href="/uc/item/8t7967wf"><div class="c-clientmarkup">Integrating Operator Information for Manual Grinding and Characterization of Process Performance Based on Operator Profile</div></a></h3></div><div class="c-authorlist"><ul class="c-authorlist__list"><li class="c-authorlist__begin"><a href="/search/?q=author%3ADas%2C%20Jayanti">Das, Jayanti</a>; </li><li><a href="/search/?q=author%3ABales%2C%20Gregory%20L">Bales, Gregory L</a>; </li><li><a href="/search/?q=author%3AKong%2C%20Zhaodan">Kong, Zhaodan</a>; </li><li class="c-authorlist__end"><a href="/search/?q=author%3ALinke%2C%20Barbara">Linke, Barbara</a> </li></ul></div><div 
class="c-scholworks__publication"><a href="/uc/ucd_postprints">UC Davis Previously Published Works</a> (<!-- -->2018<!-- -->)</div><div class="c-scholworks__abstract"><div class="c-clientmarkup">Due to its high versatility and scalability, manual grinding is an important and widely used technology in production for rework, repair, deburring, and finishing of large or unique parts. To make the process more interactive and reliable, manual grinding needs to incorporate "skill-based design," which models a person-based system and can go significantly beyond the considerations of traditional human factors and ergonomics to encompass both processing parameters (e.g., feed rate, tool path, applied forces, material removal rate (MRR)), and machined surface quality (e.g., surface roughness). This study quantitatively analyzes the characteristics of complex techniques involved in manual operations. A series of experiments have been conducted using subjects of different levels of skill, while analyzing their visual gaze, cutting force, tool path, and workpiece quality. Analysis of variance (ANOVA) and multivariate regression analysis were performed and showed that the unique behavior of the operator affects the process performance measures of specific energy consumption and MRR. 
In the future, these findings can be used to predict product quality and instruct new practitioners.</div></div><div class="c-scholworks__media"><ul class="c-medialist"></ul></div></div><div class="c-scholworks__ancillary"><a class="c-scholworks__thumbnail" href="/uc/item/8t7967wf"><img src="/cms-assets/a184d3be14f1e89f49647c2f5e9dffafcc84ca1b06dc7ff50270619ae5704fce" alt="Cover page: Integrating Operator Information for Manual Grinding and Characterization of Process Performance Based on Operator Profile"/></a></div></section><section class="c-scholworks"><div class="c-scholworks__main-column"><ul class="c-scholworks__tag-list"><li class="c-scholworks__tag-article">Article</li><li class="c-scholworks__tag-peer">Peer Reviewed</li></ul><div><h3 class="c-scholworks__heading"><a href="/uc/item/3q7528c0"><div class="c-clientmarkup">Perceiving Artistic Expression: A Formal Exploration of Performance Art Salsa</div></a></h3></div><div class="c-authorlist"><ul class="c-authorlist__list"><li class="c-authorlist__begin"><a href="/search/?q=author%3A%C3%96zcimder%2C%20Kayhan">Özcimder, Kayhan</a>; </li><li><a href="/search/?q=author%3AKong%2C%20Zhaodan">Kong, Zhaodan</a>; </li><li><a href="/search/?q=author%3AWang%2C%20Shuai">Wang, Shuai</a>; </li><li class="c-authorlist__end"><a href="/search/?q=author%3ABaillieull%2C%20John">Baillieull, John</a> </li></ul></div><div class="c-scholworks__publication"><a href="/uc/ucd_postprints">UC Davis Previously Published Works</a> (<!-- -->2018<!-- -->)</div><div class="c-scholworks__abstract"><div class="c-clientmarkup">This paper studies artistic expression in human movement by exploring the performance art form salsa. The motions of a salsa performance are constructed as concatenations of motion primitives, each of which specifies the movement of the dance pair over the course of eight musical beats. 
To analyze the syntax of artistic expression, the choreography of dance performances is represented by a transition model that is based on humanoid robot representations of the dancers. In order to assess the quality of a performance, two distinct metrics are explored. By integrating the performance metrics into the proposed transition system, it is possible to create an algorithm that is capable of autonomously recognizing the dance moves and evaluating the quality of the performance with a score. To validate the model, a dance pair performed four distinct salsa dance sequences observed by an artificially intelligent (AI) judge. The video recordings of the performances are also shown to a dance audience for evaluation. By looking at the correlation between the dance audience and the AI judge's scores, we conclude that the proposed model performs well in evaluating the artistic merit of the dance.</div></div><div class="c-scholworks__media"><ul class="c-medialist"></ul></div></div><div class="c-scholworks__ancillary"><a class="c-scholworks__thumbnail" href="/uc/item/3q7528c0"><img src="/cms-assets/bf7920f5874a4cbc4296f0a0e6c6473f59b5b04d64790e9c55a972a33fd26c9c" alt="Cover page: Perceiving Artistic Expression: A Formal Exploration of Performance Art Salsa"/></a></div></section><nav class="c-pagination"><ul><li><a href="" aria-label="you are on result set 1" class="c-pagination__item--current">1</a></li><li><a href="" aria-label="go to result set 2" class="c-pagination__item">2</a></li></ul></nav></section></main></form></div><div><div class="c-toplink"><a href="javascript:window.scrollTo(0, 0)">Top</a></div><footer class="c-footer"><nav class="c-footer__nav"><ul><li><a href="/">Home</a></li><li><a href="/aboutEschol">About eScholarship</a></li><li><a href="/campuses">Campus Sites</a></li><li><a href="/ucoapolicies">UC Open Access Policy</a></li><li><a href="/publishing">eScholarship Publishing</a></li><li><a 
href="https://www.cdlib.org/about/accessibility.html">Accessibility</a></li><li><a href="/privacypolicy">Privacy Statement</a></li><li><a href="/policies">Site Policies</a></li><li><a href="/terms">Terms of Use</a></li><li><a href="/login"><strong>Admin Login</strong></a></li><li><a href="https://help.escholarship.org"><strong>Help</strong></a></li></ul></nav><div class="c-footer__logo"><a href="/"><img class="c-lazyimage" data-src="/images/logo_footer-eschol.svg" alt="eScholarship, University of California"/></a></div><div class="c-footer__copyright">Powered by the<br/><a href="http://www.cdlib.org">California Digital Library</a><br/>Copyright © 2017<br/>The Regents of the University of California</div></footer></div></div></div></div> <script src="/js/vendors~app-bundle-2aefc956e545366a5d4e.js"></script> <script src="/js/app-bundle-3c8ebc2ec05dcc3202fd.js"></script> </body> </html>
