<!DOCTYPE html> <html lang="en"> <head> <meta content="text/html; charset=utf-8" http-equiv="content-type"/> <title>Review of Machine Learning for Micro-Electronic Design Verification</title> <!--Generated on Wed Mar 5 15:05:08 2025 by LaTeXML (version 0.8.8) http://dlmf.nist.gov/LaTeXML/.--> <!--Document created on March 5, 2025.--> <meta content="width=device-width, initial-scale=1, shrink-to-fit=no" name="viewport"/> <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/ar5iv.0.7.9.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/ar5iv-fonts.0.7.9.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/latexml_styles.css" rel="stylesheet" type="text/css"/> <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/html2canvas/1.3.3/html2canvas.min.js"></script> <script src="/static/browse/0.3.4/js/addons_new.js"></script> <script src="/static/browse/0.3.4/js/feedbackOverlay.js"></script> <base href="/html/2503.11687v1/"/></head> <body> <nav class="ltx_page_navbar"> <nav class="ltx_TOC"> <ol class="ltx_toclist"> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S1" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">1 </span>Introduction</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S1.SS1" title="In 1 Introduction ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">1.1 </span>Scope</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li 
class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S1.SS1.SSS1" title="In 1.1 Scope ‣ 1 Introduction ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">1.1.1 </span>Exclusions</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S1.SS1.SSS2" title="In 1.1 Scope ‣ 1 Introduction ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">1.1.2 </span>Inclusions</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S1.SS2" title="In 1 Introduction ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">1.2 </span>Contributions</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S1.SS3" title="In 1 Introduction ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">1.3 </span>Review Structure</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S2" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2 </span>Paper Collection and Methodology</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S2.SS1" title="In 2 Paper Collection and Methodology ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text 
ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1 </span>Methodology Used to Collect Material</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S2.SS2" title="In 2 Paper Collection and Methodology ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.2 </span>Research Questions</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S3" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3 </span>Background</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S3.SS1" title="In 3 Background ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.1 </span>Verification in the Digital Design Process</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S3.SS2" title="In 3 Background ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.2 </span>Coverage Models and Closure</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S3.SS3" title="In 3 Background ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.3 </span>Testing in Dynamic-Based Verification</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S3.SS4" title="In 3 Background ‣ 
Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.4 </span>The Verification Environment</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S3.SS5" title="In 3 Background ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.5 </span>The Challenge of Coverage-Directed Verification</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S4" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4 </span>The Distribution of Research by Topic</span></a></li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S5" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5 </span>Use Cases, Benefits and Desirable Qualities</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S5.SS1" title="In 5 Use Cases, Benefits and Desirable Qualities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.1 </span>Applications for ML in Simulation-Based Testing</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S5.SS2" title="In 5 Use Cases, Benefits and Desirable Qualities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.2 </span>Benefits of Using 
Machine Learning in Microelectronic Design Verification</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S5.SS3" title="In 5 Use Cases, Benefits and Desirable Qualities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.3 </span>Qualities of a Test Bench</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S6" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6 </span>Training and Learning Methods</span></a></li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7 </span>The Use of Machine Learning for Coverage Closure</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS1" title="In 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.1 </span>Coverage Models</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS2" title="In 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.2 </span>The ML-Enhanced Verification Environment</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" 
href="https://arxiv.org/html/2503.11687v1#S7.SS3" title="In 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.3 </span>The Application of ML to Coverage Closure</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS4" title="In 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.4 </span>Test Generation</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS4.SSS1" title="In 7.4 Test Generation ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.4.1 </span>Machine Learning Types</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS4.SSS2" title="In 7.4 Test Generation ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.4.2 </span>Benefits of ML for Generative Techniques</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS4.SSS3" title="In 7.4 Test Generation ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.4.3 </span>Challenges for Using ML to Generate Tests</span></a></li> </ol> </li> <li 
class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS5" title="In 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.5 </span>Test Direction</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS5.SSS1" title="In 7.5 Test Direction ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.5.1 </span>Machine Learning Types</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS5.SSS2" title="In 7.5 Test Direction ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.5.2 </span>Benefits of using ML to Direct Testing</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS5.SSS3" title="In 7.5 Test Direction ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.5.3 </span>Challenges for using ML to Direct Testing</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS6" title="In 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.6 </span>Test 
Selection</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS6.SSS1" title="In 7.6 Test Selection ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.6.1 </span>Machine Learning Types</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS6.SSS2" title="In 7.6 Test Selection ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.6.2 </span>Benefits of Using ML to Select Tests</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS7" title="In 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.7 </span>Level of Control</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS8" title="In 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7.8 </span>The Use of Machine Learning for Coverage Collection and Analysis</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S8" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">8 </span>The Use of Machine Learning For Bug 
Hunting</span></a></li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S9" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">9 </span>The Use of Machine Learning for Fault Detection</span></a></li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S10" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">10 </span>The Use of Machine Learning For Test Set Optimisation</span></a></li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">11 </span>Evaluation of Machine Learning in Dynamic Verification</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11.SS1" title="In 11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">11.1 </span>Designs, Test Suites and Benchmarks</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11.SS2" title="In 11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">11.2 </span>Measuring Performance</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" 
href="https://arxiv.org/html/2503.11687v1#S11.SS2.SSS1" title="In 11.2 Measuring Performance ‣ 11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">11.2.1 </span>Metrics</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11.SS2.SSS2" title="In 11.2 Measuring Performance ‣ 11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">11.2.2 </span>Baselines</span></a></li> </ol> </li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S12" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">12 </span>Challenges and Opportunities</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S12.SS1" title="In 12 Challenges and Opportunities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">12.1 </span>Existing Industry Practice</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S12.SS2" title="In 12 Challenges and Opportunities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">12.2 </span>Similarities with Test-Based Software Verification</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S12.SS3" title="In 12 
Challenges and Opportunities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">12.3 </span>Evaluating the Strengths and Weaknesses of ML Techniques</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S12.SS4" title="In 12 Challenges and Opportunities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">12.4 </span>Use of Open Source Designs and Datasets</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S12.SS5" title="In 12 Challenges and Opportunities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">12.5 </span>The Prevalence of Open Source Designs in Commercial Products</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S13" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">13 </span>Challenges for Future Research</span></a></li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S14" title="In Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">14 </span>Acknowledgments</span></a></li> </ol></nav> </nav> <div class="ltx_page_main"> <div class="ltx_page_content"> <article class="ltx_document ltx_authors_1line ltx_fleqn"> <h1 class="ltx_title ltx_title_document">Review of 
Machine Learning for Micro-Electronic Design Verification</h1> <div class="ltx_authors"> <span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Christopher Bennett </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation">christopher.bennett@bristol.ac.uk </span></span></span> <span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Kerstin Eder </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"> </span></span></span> </div> <div class="ltx_dates">(March 5, 2025)</div> <div class="ltx_abstract"> <h6 class="ltx_title ltx_title_abstract">Abstract</h6> <p class="ltx_p" id="1.1">Microelectronic design verification remains a critical bottleneck in device development, traditionally mitigated by expanding verification teams and computational resources. Since the late 1990s, machine learning (ML) has been proposed to enhance verification efficiency, yet many techniques have not achieved mainstream adoption. This review, from the perspective of verification and ML practitioners, examines the application of ML in dynamic-based techniques for functional verification of microelectronic designs, and provides a starting point for those new to this interdisciplinary field. Historical trends, techniques, ML types, and evaluation baselines are analysed to understand why previous research has not been widely adopted in industry. The review highlights the application of ML, the techniques used and critically discusses their limitations and successes. Although there is a wealth of promising research, real-world adoption is hindered by challenges in comparing techniques, identifying suitable applications, and the expertise required for implementation. This review proposes that the field can progress through the creation and use of open datasets, common benchmarks, and verification targets. By establishing open evaluation criteria, industry can guide future research. 
Parallels with ML in software verification suggest potential for collaboration. Additionally, greater use of open-source designs and verification environments can allow more researchers from outside the hardware verification discipline to contribute to the challenge of verifying microelectronic designs.</p> </div> <div class="ltx_classification"> <h6 class="ltx_title ltx_title_classification">keywords: </h6>Machine Learning, EDA, Microelectronics, Functional Verification </div> <section class="ltx_section" id="S1" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">1 </span>Introduction</h2> <div class="ltx_para" id="S1.p1"> <p class="ltx_p" id="S1.p1.1">The production of micro-electronic devices is a multi-billion-pound industry where the cost of design errors found after tape-out is high. As a result, sources suggest that up to 70% of development time in a microelectronic design project is invested in verification to find bugs before production <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib35" title="">35</a>]</cite>. Historically, step changes in verification techniques have enabled the electronics industry to keep pace with the greater complexity of electronic designs. Examples include the use of simulation-based verification to support manual inspection, hardware emulation to speed up simulations, UVM to standardise the way verification environments are built and reused, and constrained-random stimulus generation in place of expert-designed instruction sequences. The EDA (Electronic Design Automation) verification industry is now asking whether Machine Learning will be the next step change.</p> </div> <div class="ltx_para" id="S1.p2"> <p class="ltx_p" id="S1.p2.1">The rising cost and development time for microprocessor verification are driven by customer demand. Customers want devices with greater functionality, higher performance and lower cost. 
To meet these demands, microelectronic designs are becoming increasingly complex. The industry is seeing a trend towards system-on-chip designs and integrating heterogeneous components with multiple IPs from different manufacturers.</p> </div> <div class="ltx_para" id="S1.p3"> <p class="ltx_p" id="S1.p3.1">This complexity is compounded by often incomplete functional specifications, leading some to remark that device specifications are becoming more of a statement of intent than a rigorous design reference. Consequently, the likelihood of errors has increased at all stages of production due to misinterpretation of specifications and mistakes in design and synthesis. This places significant pressure on verification teams to ensure correct operation amid growing design complexity and higher error rates. The trend seen over the last decade of hiring more verification engineers and investing in costly simulation time <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib35" title="">35</a>]</cite> is not viewed as sustainable. As a result, many in the EDA industry look to machine learning to assist in the verification effort.</p> </div> <div class="ltx_para" id="S1.p4"> <p class="ltx_p" id="S1.p4.1">The increasing complexity of designs and the rising cost of verification are not the only motivations for using machine learning. The creation of open-source designs, such as those based on RISC-V, has enabled non-specialists to create commercial chips. While the open-source movement fosters innovation, it also introduces risks. These non-specialists may lack the verification expertise and resources of the traditional manufacturers, but the cost of design errors remains high. Therefore, the proliferation of open-source designs emphasises the need for design verification techniques that are efficient, effective and accessible. 
Machine learning is a tool with the potential to address these needs.</p> </div> <div class="ltx_para" id="S1.p5"> <p class="ltx_p" id="S1.p5.1">Machine learning involves making predictions and classifications based on data. The design and verification of electronic devices generate large amounts of often labelled data, including specifications, code, and test results. This makes machine learning well-suited for microprocessor verification. Recent advances in reinforcement learning <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib91" title="">91</a>]</cite> for gameplay and large language models for generative AI have garnered significant attention, leading to substantial interest from the EDA industry in using machine learning to reduce the time, cost, and bottlenecks associated with verification.</p> </div> <div class="ltx_para" id="S1.p6"> <p class="ltx_p" id="S1.p6.1">Although interest in this area is growing, it is not new. For over 20 years, both academia and industry have explored incorporating machine learning into the verification process. Despite this, the verification of electronic devices still relies heavily on expert-directed random simulations. The key question is why research in this area has struggled to gain adoption in real-world projects. This review aims to address this question. Specifically, it takes the perspective of an EDA practitioner, highlighting the verification challenges where machine learning has been applied, the techniques used, and critically discussing the limitations and successes.</p> </div> <div class="ltx_para" id="S1.p7"> <p class="ltx_p" id="S1.p7.1">Unlike recent reviews of machine learning in EDA that took a broad view of the EDA process, this review focuses specifically on the use of machine learning for the functional verification of pre-silicon designs using dynamic (simulation-based) techniques. 
This activity is traditionally cited as the greatest bottleneck in microprocessor development, and it is also an area where two decades of research have not translated into industrial practice.</p> </div> <div class="ltx_para" id="S1.p8"> <p class="ltx_p" id="S1.p8.1">Written for practitioners in both industry and academia, the review is intended to support future applications of cutting-edge ML techniques in breaking through from concept to industrial practice.</p> </div> <section class="ltx_subsection" id="S1.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">1.1 </span>Scope</h3> <div class="ltx_para" id="S1.SS1.p1"> <p class="ltx_p" id="S1.SS1.p1.1">This review focuses on how machine learning can be used in a dynamic functional verification process for microelectronic designs. Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S1.F1" title="Figure 1 ‣ 1.1 Scope ‣ 1 Introduction ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">1</span></a> shows the range of verification activities and the scope of this review. Dynamic verification is distinguished by the use of test methods based on applying random or directed stimulus in cycle- or event-driven simulations <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib71" title="">71</a>]</cite>. In the context of electronic design verification, these dynamic methods are distinct from static and formal verification methods, which instead use techniques including SAT and BDD Solvers, Theorem Proving, Property Checking, Model Checking and Formal Assertion Checking <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib71" title="">71</a>]</cite>. 
There are also hybrid methods that use a combination of static and dynamic techniques; however, these are not included in this review.</p> </div> <figure class="ltx_figure" id="S1.F1"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="501" id="S1.F1.g1" src="x1.png" width="664"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S1.F1.2.1.1" style="font-size:90%;">Figure 1</span>: </span><span class="ltx_text" id="S1.F1.3.2" style="font-size:90%;">The role of dynamic-based test methods within verification and validation. Adapted from ISO/IEC/IEEE 29119-1:2013, “Software and systems engineering — Software testing — Part 1: Concepts and definitions”. The scope of the review is enclosed in the purple rectangle.</span></figcaption> </figure> <div class="ltx_para" id="S1.SS1.p2"> <p class="ltx_p" id="S1.SS1.p2.1">In the electronic design industry, dynamic verification is a process rather than a singular activity. Authors have expressed different views of the activities that constitute this process. For example, <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib56" title="">56</a>]</cite> describes the process as consisting of Stimulus & Test Generation, RTL Modelling, Coverage Collection, Assertion Checking, Scoreboarding & Debugging. In <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib106" title="">106</a>]</cite>, the process is described differently: Design Characterisation and Coverage Prediction are added as activities, Debugging is split into Detection, Localisation and Debug, and Assertion Checking and Scoreboarding are not included. Different definitions of the verification process are not surprising. Verification is a process to check the correctness of a device against its specifications. 
Therefore, the activities that constitute a dynamic verification process vary to reflect the needs of a specific project.</p> </div> <div class="ltx_para" id="S1.SS1.p3"> <p class="ltx_p" id="S1.SS1.p3.1">This review focuses on a critical part of the verification process that would be applicable, in whole or in part, to most projects. Specifically, the application of stimuli and recording of coverage. At its simplest, a design is simulated, and its output is recorded in response to various stimuli. If the response does not match the expected behaviour, then an error is recorded. The primary challenge for verification teams during this activity is to generate input stimuli that efficiently test a design against its specification.</p> </div> <section class="ltx_subsubsection" id="S1.SS1.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">1.1.1 </span>Exclusions</h4> <div class="ltx_para" id="S1.SS1.SSS1.p1"> <p class="ltx_p" id="S1.SS1.SSS1.p1.1">There has been research interest in using machine learning for hardware verification for approximately 20 years, resulting in a wide and varied literature. Exhaustively covering this literature is impractical. Therefore, we excluded some methods and activities in the dynamic-verification process where machine learning can be used.</p> </div> <div class="ltx_para" id="S1.SS1.SSS1.p2"> <p class="ltx_p" id="S1.SS1.SSS1.p2.1">The first exclusion is the use of machine learning with formal techniques, such as accelerating formal analysis and selecting the best formal technique to use. Formal techniques are an important part of block-level verification, especially for safety-critical designs, because they can exhaustively explore the state-space of a design. However, formal techniques do not currently scale to complex designs and are less widely used than dynamic techniques on industrial projects. 
Formal techniques also draw on a different set of analytical tools, including SAT solvers, and covering these would distract from the core aims of the review. Hybrid techniques that mix formal with dynamic techniques were not included for the same reason.</p> </div> <div class="ltx_para" id="S1.SS1.SSS1.p3"> <p class="ltx_p" id="S1.SS1.SSS1.p3.1">To ensure a focus on design-based verification, we excluded research related to hardware implementation. This includes work related to the use of ML for design analysis, such as predicting the physical area occupied by a design from its RTL description <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib114" title="">114</a>]</cite> and verifying layout <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib36" title="">36</a>]</cite>. Machine learning also has applications for activities that support finding errors, such as design emulation, creating test benches and creating coverage models from specifications. However, these are beyond the scope of this review. We also excluded material relating to troubleshooting since this is the step that occurs after the detection of an error. Troubleshooting includes the use of machine learning for triage, root-cause analysis and debug.</p> </div> <div class="ltx_para" id="S1.SS1.SSS1.p4"> <p class="ltx_p" id="S1.SS1.SSS1.p4.1">Finally, the review excludes using machine learning to verify non-functional specifications, including power, security and robustness to soft errors <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib65" title="">65</a>]</cite>. 
For instance, using ML to create a bespoke model of power use or find patterns in RTL code indicating trojan hardware <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib109" title="">109</a>]</cite> is excluded.</p> </div> </section> <section class="ltx_subsubsection" id="S1.SS1.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">1.1.2 </span>Inclusions</h4> <div class="ltx_para" id="S1.SS1.SSS2.p1"> <p class="ltx_p" id="S1.SS1.SSS2.p1.1">Traditionally, the scope of machine learning includes supervised, unsupervised and reinforcement learning techniques. In this review, we also chose to include the use of evolutionary algorithms in our definition of machine learning. These algorithms are heuristic-based searches and are not always covered by a definition of machine learning. However, the use of evolutionary techniques is common in research for dynamic-based verification, and excluding these techniques would prevent traditional machine learning being compared with the state of the art.</p> </div> <div class="ltx_para" id="S1.SS1.SSS2.p2"> <p class="ltx_p" id="S1.SS1.SSS2.p2.1">The scope of this review also encompasses a select number of machine learning applications that extend beyond traditional definitions of functional verification. This includes the closure of structural coverage models, such as finite state machines, and code coverage models, including branch and statement coverage. Additionally, applications of machine learning for test pattern generation using pre-silicon simulations are considered. 
The rationale for including these applications is that the machine learning techniques and methodologies involved are sufficiently similar to those used in functional verification, making them of interest to practitioners, even if the exact application may differ.</p> </div> </section> </section> <section class="ltx_subsection" id="S1.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">1.2 </span>Contributions</h3> <div class="ltx_para" id="S1.SS2.p1"> <p class="ltx_p" id="S1.SS2.p1.1">Machine learning has a long history in the verification of electronic hardware as an academic endeavour but not in widespread industry practice. Recent developments in machine learning have further propelled academic interest in the topic. However, there is a risk of perpetuating the status quo where developments in verification research fail to gain real-world adoption. This review aims to mitigate the risk by contributing a platform for both academic researchers and industry to understand the state of the art, helping researchers to understand the limitations of existing approaches and industry to find the material relevant to the specific challenges they face. It does so by taking a systematic, critical and detailed look at the research material from the perspective of an industry practitioner.</p> </div> <div class="ltx_para" id="S1.SS2.p2"> <p class="ltx_p" id="S1.SS2.p2.1">This review builds on previous surveys covering the use of machine learning in the electronic design process. 
Each of these surveys presented a different perspective, including large surveys covering the entire EDA process <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib51" title="">51</a>]</cite>, pre and post-silicon verification with a bias towards formal and hybrid techniques <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib58" title="">58</a>]</cite>, and the use of both static and dynamic techniques <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib106" title="">106</a>]</cite>. These large surveys have had a wide scope and tended towards high-level and broad observations of the state of the art. Supporting the large surveys are smaller surveys which focus on a single area for the use of machine learning in verification. For instance, the use of Reinforcement Learning, Neural Networks and Binary Differential Evolution Algorithms <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib99" title="">99</a>]</cite>, and the application of ML from an industry perspective <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib110" title="">110</a>]</cite>. There have been relatively few large surveys that specialised in one element of the hardware verification process. 
The closest are <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib54" title="">54</a>]</cite>, which does not cover the recent developments in machine learning, and <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib56" title="">56</a>]</cite>, which has a similar scope and includes an in-depth discussion of neural network-based test generation techniques.</p> </div> <div class="ltx_para" id="S1.SS2.p3"> <p class="ltx_p" id="S1.SS2.p3.1">Unlike these prior works, this review takes a systematic, critical and detailed look at the use of current machine learning techniques to support simulation-based design verification, including a detailed examination of how previous research has been evaluated. While previous surveys have advanced an understanding of the breadth of Machine Learning in EDA design, the specialism of this review enables greater depth and analysis, crucial to ensuring that the application of new ML techniques avoids the limitations of prior work and that developments break through into industrial practice.</p> </div> <div class="ltx_para" id="S1.SS2.p4"> <p class="ltx_p" id="S1.SS2.p4.1">The use of Machine Learning in simulation-based verification is a large topic, and like previous surveys, this work does not claim to be exhaustive. However, we present and follow a systematic methodology to enable others to replicate our work and build upon it by expanding the analysis to new areas of the EDA process.</p> </div> <div class="ltx_para" id="S1.SS2.p5"> <p class="ltx_p" id="S1.SS2.p5.1">The review is written to support industry practitioners or academic researchers using ML in their verification activities. Consequently, unlike previous surveys, this review is written from the perspective of having a problem to solve and the need to understand the state of the art, limitations of techniques, and open challenges.
This top-down approach enables discussion and navigation of the topic guided by need. It is distinct from a “bottom-up” approach that starts with a pool of literature and forms classifications based on them, which suits an understanding of the literature but is perhaps less helpful to a practitioner.</p> </div> <div class="ltx_para ltx_noindent" id="S1.SS2.p6"> <p class="ltx_p" id="S1.SS2.p6.1">In summary, the contributions of this review are:</p> <ul class="ltx_itemize" id="S1.I1"> <li class="ltx_item" id="S1.I1.ix1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S1.I1.ix1.p1"> <p class="ltx_p" id="S1.I1.ix1.p1.1">Written for industry practitioners looking to use ML in their verification activities and academics looking to understand the state of the art and open challenges.</p> </div> </li> <li class="ltx_item" id="S1.I1.ix2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S1.I1.ix2.p1"> <p class="ltx_p" id="S1.I1.ix2.p1.1">A specialist review of machine learning in dynamic-based micro-electronic design verification to enable greater depth, commentary and synthesis.</p> </div> </li> <li class="ltx_item" id="S1.I1.ix3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S1.I1.ix3.p1"> <p class="ltx_p" id="S1.I1.ix3.p1.1">Written from a top-down perspective starting with the industrial development process and need.</p> </div> </li> <li class="ltx_item" id="S1.I1.ix4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S1.I1.ix4.p1"> <p class="ltx_p" id="S1.I1.ix4.p1.1">A commentary on coverage models and evaluation is included. 
Both of these are crucial to assessing the success of simulation-based testing and have not been covered in previous work.</p> </div> </li> <li class="ltx_item" id="S1.I1.ix5" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S1.I1.ix5.p1"> <p class="ltx_p" id="S1.I1.ix5.p1.1">A clear methodology to collect prior work and a quantitative analysis to identify trends and gaps in the research.</p> </div> </li> </ul> </div> </section> <section class="ltx_subsection" id="S1.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">1.3 </span>Review Structure</h3> <div class="ltx_para" id="S1.SS3.p1"> <p class="ltx_p" id="S1.SS3.p1.1">The review is structured as follows. The methodology and scope of the review are given in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S2" title="2 Paper Collection and Methodology ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">2</span></a>. Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S3" title="3 Background ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">3</span></a> familiarises the reader with the core concepts necessary to understand the field of dynamic-based hardware verification. 
A quantitative assessment of the research material is given in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S4" title="4 The Distribution of Research by Topic ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">4</span></a>, followed in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S5" title="5 Use Cases, Benefits and Desirable Qualities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">5</span></a> by problems the research aims to address and the characteristics of an “ideal” dynamic test platform. Coverage models and the application of machine learning to coverage closure are discussed in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7" title="7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">7</span></a>, and Sections <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S8" title="8 The Use of Machine Learning For Bug Hunting ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">8</span></a> to <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S10" title="10 The Use of Machine Learning For Test Set Optimisation ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">10</span></a> discuss respectively the application of machine learning to finding bugs, detecting faults and optimising test sets. Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11" title="11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">11</span></a> discusses the hardware and metrics used by research to evaluate techniques. 
The review ends in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S12" title="12 Challenges and Opportunities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">12</span></a> with a summary of the open challenges and opportunities.</p> </div> </section> </section> <section class="ltx_section" id="S2" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">2 </span>Paper Collection and Methodology</h2> <section class="ltx_subsection" id="S2.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.1 </span>Methodology Used to Collect Material</h3> <div class="ltx_para" id="S2.SS1.p1"> <p class="ltx_p" id="S2.SS1.p1.1">The review adopts a methodology similar to that used in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib34" title="">34</a>]</cite> for a survey of machine learning in software verification. The prior art was sampled using a structured search of literature from the IEEE Xplore <span class="ltx_note ltx_role_footnote" id="footnote1"><sup class="ltx_note_mark">1</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">1</sup><span class="ltx_tag ltx_tag_note">1</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://ieeexplore.ieee.org/Xplore/home.jsp" title="">https://ieeexplore.ieee.org/Xplore/home.jsp</a></span></span></span> and Web of Science databases. Results were restricted to accessible material written in English.
The format of the search string was <math alttext="\mathit{problem}\times\mathit{application}\times\mathit{technology}" class="ltx_Math" display="inline" id="S2.SS1.p1.1.m1.1"><semantics id="S2.SS1.p1.1.m1.1a"><mrow id="S2.SS1.p1.1.m1.1.1" xref="S2.SS1.p1.1.m1.1.1.cmml"><mi id="S2.SS1.p1.1.m1.1.1.2" xref="S2.SS1.p1.1.m1.1.1.2.cmml">𝑝𝑟𝑜𝑏𝑙𝑒𝑚</mi><mo id="S2.SS1.p1.1.m1.1.1.1" lspace="0.222em" rspace="0.222em" xref="S2.SS1.p1.1.m1.1.1.1.cmml">×</mo><mi id="S2.SS1.p1.1.m1.1.1.3" xref="S2.SS1.p1.1.m1.1.1.3.cmml">𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛</mi><mo id="S2.SS1.p1.1.m1.1.1.1a" lspace="0.222em" rspace="0.222em" xref="S2.SS1.p1.1.m1.1.1.1.cmml">×</mo><mi id="S2.SS1.p1.1.m1.1.1.4" xref="S2.SS1.p1.1.m1.1.1.4.cmml">𝑡𝑒𝑐ℎ𝑛𝑜𝑙𝑜𝑔𝑦</mi></mrow><annotation-xml encoding="MathML-Content" id="S2.SS1.p1.1.m1.1b"><apply id="S2.SS1.p1.1.m1.1.1.cmml" xref="S2.SS1.p1.1.m1.1.1"><times id="S2.SS1.p1.1.m1.1.1.1.cmml" xref="S2.SS1.p1.1.m1.1.1.1"></times><ci id="S2.SS1.p1.1.m1.1.1.2.cmml" xref="S2.SS1.p1.1.m1.1.1.2">𝑝𝑟𝑜𝑏𝑙𝑒𝑚</ci><ci id="S2.SS1.p1.1.m1.1.1.3.cmml" xref="S2.SS1.p1.1.m1.1.1.3">𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛</ci><ci id="S2.SS1.p1.1.m1.1.1.4.cmml" xref="S2.SS1.p1.1.m1.1.1.4">𝑡𝑒𝑐ℎ𝑛𝑜𝑙𝑜𝑔𝑦</ci></apply></annotation-xml><annotation encoding="application/x-tex" id="S2.SS1.p1.1.m1.1c">\mathit{problem}\times\mathit{application}\times\mathit{technology}</annotation><annotation encoding="application/x-llamapun" id="S2.SS1.p1.1.m1.1d">italic_problem × italic_application × italic_technology</annotation></semantics></math>.</p> <blockquote class="ltx_quote" id="S2.SS1.p1.2"> <p class="ltx_p" id="S2.SS1.p1.2.1">((”All Metadata”:rtl OR ”All Metadata”:eda OR ”All Metadata”:”functional verification” OR ”All Metadata”:”functional coverage”) AND (”All Metadata”:verification OR ”All Metadata”:validation) AND (”All Metadata”:”machine learning” OR ”All Metadata”:”reinforcement learning” OR ”All Metadata”:”deep learning” OR ”All Metadata”:”neural network” OR ”All Metadata”:bayesian) )</p> </blockquote> <p class="ltx_p" id="S2.SS1.p1.3">Material 
was also sampled from the proceedings of the Design and Verification Conference and Exhibition USA because it is historically well supported by industry. Due to limitations in the search functionality, the following terms were searched independently and the results combined: <span class="ltx_text ltx_font_italic" id="S2.SS1.p1.3.1">coverage, machine learning, reinforcement learning, deep learning, neural network, Bayesian, genetic algorithm</span>.</p> </div> <figure class="ltx_figure" id="S2.F2"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="188" id="S2.F2.g1" src="x2.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S2.F2.2.1.1" style="font-size:90%;">Figure 2</span>: </span><span class="ltx_text" id="S2.F2.3.2" style="font-size:90%;">Methodology used to filter search results.</span></figcaption> </figure> <div class="ltx_para" id="S2.SS1.p2"> <p class="ltx_p" id="S2.SS1.p2.1">The search initially returned 513 results. These were filtered by first removing any that did not relate to the electronic design <em class="ltx_emph ltx_font_italic" id="S2.SS1.p2.1.1">process</em>, including research that proposed hardware designs to accelerate machine learning algorithms. Next, papers were removed that were not primary research (including surveys and commentaries) or did not feature machine learning as a primary aim of the paper. Finally, work relating to physical hardware design (such as layout, routing, analogue modelling, or analogue design) or not relating to verification was removed from the results. Decisions were based on the abstract, title, keywords and a paper’s introduction. When the classification of material was unclear, it was reviewed collaboratively between the authors. The remaining papers were read in detail, and relevant information was tabulated, including coverage models and the type of machine learning used.
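To illustrate how the (problem × application × technology) search-string format described above can be composed, the following sketch builds the boolean query from the three term sets shown in the quoted search string. The term lists are copied from the text; the helper functions and their names are our own illustration, not part of the review's methodology.

```python
from itertools import product

# Term sets for the (problem x application x technology) search-string
# format described in the text; terms are copied from the quoted query,
# while the helper functions are our own illustration.
PROBLEM = ['rtl', 'eda', '"functional verification"', '"functional coverage"']
APPLICATION = ['verification', 'validation']
TECHNOLOGY = ['"machine learning"', '"reinforcement learning"',
              '"deep learning"', '"neural network"', 'bayesian']

def or_group(terms, field='"All Metadata"'):
    # One parenthesised OR group per term set.
    return '(' + ' OR '.join(f'{field}:{t}' for t in terms) + ')'

def build_query(*term_sets):
    # ANDing the OR groups searches the cross product of the sets.
    return '(' + ' AND '.join(or_group(ts) for ts in term_sets) + ')'

query = build_query(PROBLEM, APPLICATION, TECHNOLOGY)
print(query)
print(len(list(product(PROBLEM, APPLICATION, TECHNOLOGY))))  # 40 term combinations
```

ANDing one OR group per axis is equivalent to searching all forty individual term combinations, which is why a single composite query suffices.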
The resulting dataset (paper references, classifications and tables) is available from the corresponding author upon request and will be available for download at <a class="ltx_ref ltx_url ltx_font_typewriter" href="https://data.bris.ac.uk/data/" title="">https://data.bris.ac.uk/data/</a> in due course.</p> </div> </section> <section class="ltx_subsection" id="S2.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.2 </span>Research Questions</h3> <div class="ltx_para" id="S2.SS2.p1"> <p class="ltx_p" id="S2.SS2.p1.1">The research questions the review aims to answer are listed below. These support the high level aim of reviewing the state of the art for the use of machine learning (ML) in EDA verification.</p> <ul class="ltx_itemize" id="S2.I1"> <li class="ltx_item" id="S2.I1.ix1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S2.I1.ix1.p1"> <p class="ltx_p" id="S2.I1.ix1.p1.1">RQ1: How has ML been used to perform or enhance the dynamic-verification process for electronic designs?</p> </div> </li> <li class="ltx_item" id="S2.I1.ix2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S2.I1.ix2.p1"> <p class="ltx_p" id="S2.I1.ix2.p1.1">RQ2: How is the deployment of ML evaluated?</p> </div> </li> <li class="ltx_item" id="S2.I1.ix3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S2.I1.ix3.p1"> <p class="ltx_p" id="S2.I1.ix3.p1.1">RQ3: Which specific ML techniques were used to perform or enhance coverage closure?</p> </div> </li> <li class="ltx_item" id="S2.I1.ix4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S2.I1.ix4.p1"> <p class="ltx_p" id="S2.I1.ix4.p1.1">RQ4: What are the limitations and open challenges in integrating ML into EDA verification?</p> </div> </li> </ul> </div> </section> </section> <section 
class="ltx_section" id="S3" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">3 </span>Background</h2> <div class="ltx_para" id="S3.p1"> <p class="ltx_p" id="S3.p1.1">This section introduces dynamic-based verification concepts and terminology, particularly for those from a machine learning background. Experienced practitioners in microelectronic design and verification may wish to skip to Section 4. </p> </div> <section class="ltx_subsection" id="S3.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.1 </span>Verification in the Digital Design Process</h3> <div class="ltx_para" id="S3.SS1.p1"> <p class="ltx_p" id="S3.SS1.p1.1">The digital design process is divided into Front-End and Back-End activities <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib22" title="">22</a>]</cite>. Front-End activities focus on what the design will do, while Back-End activities determine how it will do it. During the Front-End stage, the design’s functional behaviour is developed according to its specification and represented at different levels of abstraction. There are three levels in common use: Register-Transfer (RTL), gate, and transistor <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib76" title="">76</a>]</cite>. Some authors also include a higher level of abstraction called behavioural representation, written in high-level languages like SystemC or C++. The Back-End stage transforms the abstract design into a physically implementable form through activities such as floorplanning, placement, routing, and timing analysis.</p> </div> <div class="ltx_para" id="S3.SS1.p2"> <p class="ltx_p" id="S3.SS1.p2.1">Verification is a process to establish the correctness of a device against its specification throughout the design process. 
This review focuses on verification during the Front-End stage; we refer to this as <span class="ltx_text ltx_font_bold" id="S3.SS1.p2.1.1">functional verification</span> to emphasise that it checks a design’s behaviour rather than its implementation. Descriptions of the modern functional verification process are given in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib22" title="">22</a>]</cite> and <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib32" title="">32</a>]</cite>. Here, a device is referred to as the Design Under Verification (DUV) to emphasise that it is a design description, not a physical device. The term Design Under Test (DUT) is also used in the literature.</p> </div> <div class="ltx_para" id="S3.SS1.p3"> <p class="ltx_p" id="S3.SS1.p3.1">Three types of verification are commonly used to establish functional correctness: dynamic, hybrid, and static. Dynamic verification applies stimulus to a simulation of a design and checks whether the design’s output matches the specification. Static verification uses analytical methods, such as model checking, that do not simulate the design. Static methods can exhaustively prove a design’s behaviour for all inputs and states but are computationally infeasible for complex designs due to the state explosion problem. Dynamic verification, while not exhaustive, is more scalable and the most widely used method.
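The dynamic-verification loop just described, apply stimulus, simulate, compare against the specification, can be sketched minimally in Python. The design here is a stand-in of our own invention (a two-bit saturating counter), not an example from the literature; a real DUV would be an RTL simulation driven through a testbench.

```python
import random

# Minimal sketch of the dynamic-verification loop: random stimulus is
# applied to a simulated design, and its output is checked against a
# reference model derived from the specification. The counter DUV is a
# stand-in of our own, not an example from the review.

class CounterDUV:
    """Design under verification: counts up on en, clears on clr, saturates at 3."""
    def __init__(self):
        self.q = 0
    def step(self, en, clr):
        if clr:
            self.q = 0
        elif en and self.q < 3:
            self.q += 1
        return self.q

def reference_model(q, en, clr):
    """Golden model of the specified behaviour."""
    if clr:
        return 0
    return min(q + 1, 3) if en else q

def run_dynamic_verification(n_cycles=1000, seed=0):
    rng = random.Random(seed)
    duv, ref_q, errors = CounterDUV(), 0, 0
    for _ in range(n_cycles):
        en, clr = rng.randint(0, 1), rng.randint(0, 1)   # random stimulus
        expected = reference_model(ref_q, en, clr)
        got = duv.step(en, clr)
        if got != expected:                              # record any mismatch
            errors += 1
        ref_q = expected
    return errors

print(run_dynamic_verification())  # 0: the design matches its specification
```

Because the check is only as strong as the stimulus, the later sections on coverage models address how teams judge whether such a loop has exercised enough of the design.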
Hybrid methods combine simulations with static analysis to balance scalability and rigorous proofs, such as using simulations of real behaviour as the starting point for proofs rather than all possible behaviour (some of which may not be realisable).</p> </div> </section> <section class="ltx_subsection" id="S3.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.2 </span>Coverage Models and Closure</h3> <div class="ltx_para" id="S3.SS2.p1"> <p class="ltx_p" id="S3.SS2.p1.1">Dynamic verification methods cannot exhaustively verify complex designs, especially within time-constrained commercial projects. Instead, verification teams use coverage models to focus efforts on design elements of interest. A coverage model defines the scope of a verification task, and it is used to measure progress. The dynamic verification process is considered complete when all elements in the coverage models are tested (covered) and the correct behaviour observed <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib83" title="">83</a>]</cite>, a milestone known as coverage closure. Most ML-enhanced verification techniques reviewed use coverage models both in the learning process and for performance evaluation. Examples are discussed in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS1" title="7.1 Coverage Models ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">7.1</span></a>.</p> </div> <div class="ltx_para" id="S3.SS2.p2"> <p class="ltx_p" id="S3.SS2.p2.1">Coverage models are divided into structural and functional types. <span class="ltx_text ltx_font_bold" id="S3.SS2.p2.1.1">Structural models</span> are based on the design description; examples include statement, conditional, branch, toggle, and state machine coverage.
These models are generated automatically and are used to track how thoroughly the design has been executed during testing. By comparison, <span class="ltx_text ltx_font_bold" id="S3.SS2.p2.1.2">Functional models</span> derive from the DUV’s specification and track whether the design is functionally correct. Definitions and examples of functional and structural coverage models can be found in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib80" title="">80</a>]</cite>.</p> </div> <div class="ltx_para" id="S3.SS2.p3"> <p class="ltx_p" id="S3.SS2.p3.1">Functional coverage models are usually created manually by the verification team. A verification plan, derived from a DUV’s specification, identifies features and associates them with one or more coverage models. A typical project may have hundreds of coverage models, with some overlapping. Therefore, a feature and its associated states can appear in multiple coverage models. Consequently, a sequence of inputs to a DUV can cover multiple states within a single coverage model, and states across many models. One challenge for ML-enhanced verification techniques is to operate with a range of coverage models, both structural and functional.</p> </div> <div class="ltx_para" id="S3.SS2.p4"> <p class="ltx_p" id="S3.SS2.p4.1">Several types of functional model are commonly seen in research using ML for electronic-design verification:</p> </div> <div class="ltx_para" id="S3.SS2.p5"> <ul class="ltx_itemize" id="S3.I1"> <li class="ltx_item" id="S3.I1.ix1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S3.I1.ix1.p1"> <p class="ltx_p" id="S3.I1.ix1.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I1.ix1.p1.1.1">Cross-product coverage models</span>: These are named groups of states in the DUV’s state space. They define cover points, which are specific points in the design to monitor, such as the values of signals or variables.
A coverage cross is the cross product of two or more cover points, and a cross-product coverage model is a collection of these crosses <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib68" title="">68</a>]</cite>. A simplified version defines cover points in isolation, without relating them to other signals or variables.</p> </div> </li> <li class="ltx_item" id="S3.I1.ix2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S3.I1.ix2.p1"> <p class="ltx_p" id="S3.I1.ix2.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I1.ix2.p1.1.1">Assertion-based models</span>: An assertion expresses a property of the design, such as a safety property (something that should never happen) or a liveness property (something that should eventually happen). The purpose of an assertion model is to report the occurrence of an expected event <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib80" title="">80</a>]</cite>. Assertions are broadly divided into those defined only over primary input signals and those defined over input, internal, and output signals <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib64" title="">64</a>]</cite>. An advantage of assertion models is their suitability for static-based techniques, making them attractive in projects that use both formal and test-based methods. However, this review found that they are rarely used with machine learning for dynamic verification, potentially due to their association with static-based techniques.
They are used in hybrid methods, such as Goldmine <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib67" title="">67</a>]</cite>, an ML-based technique that uses simulation traces and formal methods to create assertions automatically.</p> </div> </li> </ul> </div> <div class="ltx_para" id="S3.SS2.p6"> <p class="ltx_p" id="S3.SS2.p6.1">Some applications may define alternative functional coverage models. For example, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib83" title="">83</a>]</cite> applies ML to verify a design at a system level, and the research uses <span class="ltx_text ltx_font_bold" id="S3.SS2.p6.1.1">Modular coverage</span> to record when a specific block (module) is activated.</p> </div> <div class="ltx_para" id="S3.SS2.p7"> <p class="ltx_p" id="S3.SS2.p7.1">It is common to refer to the coverage of a test. In the case of functional coverage, it measures how well a test covers part of the functional specification. In the case of structural coverage, it measures how well the test covers the implementation of the specification <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib92" title="">92</a>]</cite>. The coverage of a test can be viewed as the percentage of the coverage model a test covers.</p> </div> <div class="ltx_para" id="S3.SS2.p8"> <p class="ltx_p" id="S3.SS2.p8.1">Structural and functional coverage models have limitations. Structural coverage models are easy to create but only reveal how much of the design has been tested, not whether its behaviour is correct.
Conversely, functional coverage models track how much of the specified behaviour has been tested but do not measure the quality and completeness of the verification environment <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib108" title="">108</a>]</cite>. Functional models are usually created manually, which introduces the possibility of human error and limits the scope to the behaviour defined by the verification team. Therefore, achieving coverage closure with both structural and functional models does not guarantee a bug-free design.</p> </div> <div class="ltx_para" id="S3.SS2.p9"> <p class="ltx_p" id="S3.SS2.p9.1"><span class="ltx_text ltx_font_bold" id="S3.SS2.p9.1.1">Coverage closure</span> aims to test all reachable states within a coverage model, but <span class="ltx_text ltx_font_bold" id="S3.SS2.p9.1.2">quality of coverage</span> is also important. Each point in a coverage model should be accessed multiple times through different trajectories originating from previous states, and the frequency of visits to each point should be evenly distributed. While most machine learning applications reviewed have tackled the issue of coverage closure, few studies address the requirements for multiplicity and distribution. Examples that do include <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib31" title="">31</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib30" title="">30</a>]</cite>.</p> </div> </section> <section class="ltx_subsection" id="S3.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.3 </span>Testing in Dynamic-Based Verification</h3> <div class="ltx_para" id="S3.SS3.p1"> <p class="ltx_p" id="S3.SS3.p1.1">Testing is one technique in the suite of verification methods, and it is central to dynamic-based verification. 
In testing, inputs are applied to the Design Under Verification (DUV), and its responses are recorded. Typically, a test bench is used, which includes a test generator, a simulator, the DUV, an output recorder, and a golden reference model (ground truth) to check the correctness of the DUV’s output.</p> </div> <div class="ltx_para" id="S3.SS3.p2"> <p class="ltx_p" id="S3.SS3.p2.1">The primary goal of testing is to identify bugs in the design, prioritising those that relate to fundamental errors. Tracing the root cause of a test failure can be complex and time-consuming. Although not the focus of this review, machine learning has been used to aid debugging <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib96" title="">96</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib87" title="">87</a>]</cite>. Additionally, test failures can occur due to errors in the verification environment rather than the design itself, and machine learning techniques have been employed to predict the source of such failures <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib102" title="">102</a>]</cite>.</p> </div> <div class="ltx_para" id="S3.SS3.p3"> <p class="ltx_p" id="S3.SS3.p3.1">Dynamic-based testing is often divided into directed and volume stages <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib43" title="">43</a>]</cite>. The directed stage focuses on establishing basic functionality and targeting expected bugs. This is followed by the volume stage, which uses automatically generated tests to uncover bugs arising from rare conditions that are difficult to predict.
The volume stage occupies most of the simulation time, although not necessarily most of the human effort, and is the primary focus of machine learning approaches.</p> </div> <div class="ltx_para" id="S3.SS3.p4"> <p class="ltx_p" id="S3.SS3.p4.1">A third stage, regression testing, involves periodically running a set of tests to verify the current state of the design. Often part of a continuous integration/continuous delivery workflow, regression testing repeats previously completed tests to ensure that design changes have not introduced new errors <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib62" title="">62</a>]</cite>. The challenge for regression testing is to select the smallest number of tests that can effectively expose any new errors. Examples of machine learning applications addressing this challenge are discussed in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S10" title="10 The Use of Machine Learning For Test Set Optimisation ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">10</span></a>.</p> </div> <div class="ltx_para" id="S3.SS3.p5"> <p class="ltx_p" id="S3.SS3.p5.1">In addition to the different stages of testing, there are various methodologies for creating the stimuli needed to drive the Design Under Verification (DUV). Three traditionally used approaches are expert-written tests, pseudo-random tests, and coverage-directed tests <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib47" title="">47</a>]</cite>. Writing effective tests by hand requires expert knowledge and time. Therefore, the micro-electronic verification industry conducts volume testing using test generators to automatically create stimuli for the DUV. These generators are not purely random.
Instead, they incorporate domain knowledge to generate stimuli that are more likely to find errors in the DUV’s design. This knowledge is traditionally encoded by experts, although research has used ML to extract this knowledge automatically <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib60" title="">60</a>]</cite>. The verification team can then parameterise these generators to target specific behaviours. When the parameterisation takes the form of constraints, the process is known as <span class="ltx_text ltx_font_bold" id="S3.SS3.p5.1.1">Constrained-Random test Generation</span> (CRG) or Constrained Random Testing (CRT) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib79" title="">79</a>]</cite>.</p> </div> <div class="ltx_para" id="S3.SS3.p6"> <p class="ltx_p" id="S3.SS3.p6.1">A central challenge in dynamic-based verification is the (sometimes) complex relationship between the inputs a DUV receives, the states it enters, and the outputs it produces. Each time a test is simulated on a device, information is gained about this relationship that can be used to guide future testing. <span class="ltx_text ltx_font_bold" id="S3.SS3.p6.1.1">Coverage-Directed test Generation</span> (CDG) uses constrained test generators where constraints are set based on the coverage of previous tests. These constraints can be set by experts or machine learning algorithms and are updated throughout verification to target different functionalities.</p> </div> <div class="ltx_para" id="S3.SS3.p7"> <p class="ltx_p" id="S3.SS3.p7.1">Even with a single set of constraints, the output of a constrained random test generator (and the behaviour of the DUV) can be varied by changing the random seed and initial state. Industrial test generators can have over 1000 constraints, making their configuration non-trivial.
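As a rough illustration of constrained-random generation, the sketch below biases a hypothetical instruction-stream generator with per-opcode weights acting as simple constraints. The template format, opcode names, and function are invented for illustration and do not correspond to any specific industrial generator:

```python
import random

def constrained_random_test(template, seed):
    """Generate an instruction sequence from a test-template.

    `template` is a hypothetical parameterisation: a sequence length,
    an operand range, and per-opcode weights acting as constraints
    that bias (but do not fully determine) the generated stimuli.
    """
    rng = random.Random(seed)  # the random seed is part of the test-template
    ops = list(template["op_weights"])
    weights = list(template["op_weights"].values())
    return [
        (rng.choices(ops, weights=weights)[0],
         rng.randrange(template["max_operand"]))
        for _ in range(template["length"])
    ]

# Bias generation towards memory operations, e.g. to stress a load/store unit.
template = {
    "length": 8,
    "max_operand": 16,
    "op_weights": {"ADD": 1, "LOAD": 4, "STORE": 4},
}
test = constrained_random_test(template, seed=7)
```

Re-running with a different seed yields a different instruction sequence that still satisfies the same constraints, which is the property CRG exploits to produce many distinct tests from one template.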
Machine learning can be used to parameterise constrained test generators (Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.SS5" title="7.5 Test Direction ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">7.5</span></a>); it is therefore important to recognise the potentially large feature space and the need to identify the relevant features (parameters) to control.</p> </div> <div class="ltx_para" id="S3.SS3.p8"> <p class="ltx_p" id="S3.SS3.p8.1">Coverage-directed generation is a mature industry-standard approach, well-defined in the SystemVerilog language <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib2" title="">2</a>]</cite> and the Universal Verification Methodology <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib1" title="">1</a>]</cite>, used by approximately 70% of real-world projects <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib35" title="">35</a>]</cite>. Its advantages include the ability to generate tests for devices with many inputs, cover functionality in a balanced way, and quickly create many test cases <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib83" title="">83</a>]</cite>. However, it is inherently computationally inefficient due to its reliance on pseudo-random generation.
The effectiveness of a given parameterisation at increasing coverage decreases over time <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib45" title="">45</a>]</cite>, and the approach can be ineffective for hitting specific coverage points (e.g., coverage holes) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib69" title="">69</a>]</cite>. Compared to expert-written tests, coverage-directed generated tests are often longer, less targeted, and use more simulation resources to achieve the same result <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib89" title="">89</a>]</cite>. One topic of research is to use machine learning to increase the efficiency of CDG and enable tighter control over its output.</p> </div> <div class="ltx_para" id="S3.SS3.p9"> <p class="ltx_p" id="S3.SS3.p9.1"><span class="ltx_text ltx_font_bold" id="S3.SS3.p9.1.1">Coverage-Directed Test Selection</span> is a variant of CDG where pre-existing tests are selected based on their potential to increase coverage.
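A common baseline for this kind of selection is a greedy heuristic over per-test coverage sets. The minimal sketch below uses invented test ids and coverage points; in the research surveyed, the per-test coverage sets would typically be predictions from a trained model rather than known values:

```python
def select_tests(candidate_coverage, budget):
    """Greedy coverage-directed test selection.

    `candidate_coverage` maps a test id to the set of coverage points
    it is expected to hit. Repeatedly pick the test that adds the most
    not-yet-covered points, up to `budget` simulations.
    """
    covered, selected = set(), []
    for _ in range(budget):
        best = max(candidate_coverage,
                   key=lambda t: len(candidate_coverage[t] - covered))
        gain = candidate_coverage[best] - covered
        if not gain:
            break  # no remaining candidate increases coverage
        selected.append(best)
        covered |= gain
    return selected, covered

tests = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}, "t4": {1, 2}}
selected, covered = select_tests(tests, budget=3)
# selected == ["t1", "t2", "t3"]; "t4" is never chosen because it adds nothing new
```

Greedy selection is only one possible strategy, but it illustrates why the approach pays off when simulation, not generation, is the expensive step: each simulated test is chosen for its marginal coverage gain.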
This approach is especially beneficial when tests are computationally cheap to generate but expensive to simulate, and it is a focus of ML research <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib69" title="">69</a>]</cite>.</p> </div> </section> <section class="ltx_subsection" id="S3.SS4"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.4 </span>The Verification Environment</h3> <div class="ltx_para" id="S3.SS4.p1"> <p class="ltx_p" id="S3.SS4.p1.1">The typical dynamic-verification environment makes use of a testbench as shown in Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S3.F3" title="Figure 3 ‣ 3.4 The Verification Environment ‣ 3 Background ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">3</span></a>. The stimuli source can be either expert-written instruction sequences or those generated by a constrained-random test generator. These stimuli are translated into inputs compatible with the Design Under Verification (DUV), which is then simulated, and its response is monitored. A reference model, or golden model, checks if the response aligns with the design specifications. 
Most research using machine learning methods interfaces with a variant of this environment, as discussed in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7" title="7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">7</span></a>.</p> </div> <figure class="ltx_figure" id="S3.F3"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="302" id="S3.F3.g1" src="x3.png" width="747"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S3.F3.2.1.1" style="font-size:90%;">Figure 3</span>: </span><span class="ltx_text" id="S3.F3.3.2" style="font-size:90%;">A conventional test bench used in the test-based verification of microelectronic designs. The test bench is configured for Coverage-Directed Generation using a parameterised stimuli generator, where human expertise (not machine learning) is used to control the generation of stimuli to the Design Under Verification (DUV).</span></figcaption> </figure> <div class="ltx_para" id="S3.SS4.p2"> <p class="ltx_p" id="S3.SS4.p2.1">Dynamic-based verification also uses a repository to store information necessary for replicating tests and results from previous runs. These repositories typically contain large amounts of labelled data, on which machine learning techniques can be trained to, for instance, select tests to rerun after a design change or predict whether a new test will verify a specific DUV behaviour.</p> </div> <div class="ltx_para" id="S3.SS4.p3"> <p class="ltx_p" id="S3.SS4.p3.1">Finally, a single instantiated test or set of constraints may reveal multiple instances where the DUV’s input does not produce the expected output. These errors could be due to a mistake (bug) in the DUV design or an issue in the verification environment.
The test-based process described here is part of a workflow where test outcomes are analysed to identify, diagnose, and correct errors in both the DUV design and the test bench.</p> </div> <div class="ltx_para" id="S3.SS4.p4"> <p class="ltx_p" id="S3.SS4.p4.1">For machine learning practitioners, it is crucial to establish what constitutes a test in this environment, as a test description can be part of the training data, model output, or both. Depending on the author, the term “test” can refer to a single input, a sequence of inputs, a complete program, or a parameterisation including constraints and random seeds. A test may also involve the configuration of the DUV <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib115" title="">115</a>]</cite>, and a transaction can be expressed at different levels of abstraction, from a bit-pattern to a high-level instruction. To avoid confusion, we define the following terms:</p> <ul class="ltx_itemize" id="S3.I2"> <li class="ltx_item" id="S3.I2.ix1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I2.ix1.p1"> <p class="ltx_p" id="S3.I2.ix1.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I2.ix1.p1.1.1">Test-template</span>: The parameterisation that biases a test generator, including the random seed, constraints, and any additional information needed to generate output.</p> </div> </li> <li class="ltx_item" id="S3.I2.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I2.i1.p1"> <p class="ltx_p" id="S3.I2.i1.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I2.i1.p1.1.1">Instantiated-test</span>: A sequence of inputs created by a test generator to be applied to a DUV.</p> </div> </li> <li class="ltx_item" id="S3.I2.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I2.i2.p1"> <p class="ltx_p"
id="S3.I2.i2.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I2.i2.p1.1.1">Constraints</span>: The parameterisation applied to a constrained random test generator.</p> </div> </li> <li class="ltx_item" id="S3.I2.i3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I2.i3.p1"> <p class="ltx_p" id="S3.I2.i3.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I2.i3.p1.1.1">Directed Test</span>: A test program written by an expert, denoting a sequence of inputs to a DUV.</p> </div> </li> <li class="ltx_item" id="S3.I2.i4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I2.i4.p1"> <p class="ltx_p" id="S3.I2.i4.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I2.i4.p1.1.1">Transaction</span>: An instruction or command expressed at a high level of abstraction.</p> </div> </li> <li class="ltx_item" id="S3.I2.i5" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I2.i5.p1"> <p class="ltx_p" id="S3.I2.i5.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I2.i5.p1.1.1">Stimuli</span>: A low-level bit-pattern input to the DUV.</p> </div> </li> </ul> </div> </section> <section class="ltx_subsection" id="S3.SS5"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.5 </span>The Challenge of Coverage-Directed Verification</h3> <div class="ltx_para" id="S3.SS5.p1"> <p class="ltx_p" id="S3.SS5.p1.1">The primary challenge for test-based, dynamic verification is to find all bugs in a design using the least amount of human and computational resources.
Ultimately, this is what most research using machine learning for functional verification aims to achieve (Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S5.SS2" title="5.2 Benefits of Using Machine Learning in Microelectronic Design Verification ‣ 5 Use Cases, Benefits and Desirable Qualities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">5.2</span></a>).</p> </div> <div class="ltx_para" id="S3.SS5.p2"> <p class="ltx_p" id="S3.SS5.p2.1">In coverage-directed verification, progress is often tracked by the cumulative percentage of coverage points hit versus the number of simulations performed. The goal is to shift the curve to the left, achieving higher coverage in fewer simulation cycles. Alternatively, a more granular view of coverage is to associate each “test” with the coverage points it hits. We found examples of machine learning techniques using each view of coverage as part of reward, fitness or cost functions, or as labels for supervised techniques.</p> </div> <div class="ltx_para" id="S3.SS5.p3"> <p class="ltx_p" id="S3.SS5.p3.1">An alternative view based on the number of points covered per test cycle is proposed in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib30" title="">30</a>]</cite>. This view reveals waves where each peak is the covering of a new area of functionality. Different test scenarios can be fingerprinted by these waves. This review found no examples of research into alternative views of coverage and their impact on learning, suggesting this is an underexplored area.</p> </div> <div class="ltx_para" id="S3.SS5.p4"> <p class="ltx_p" id="S3.SS5.p4.1">Hitting the last 10 percent of coverage points is often more difficult because these represent rare corner cases.
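The cumulative view of coverage progress described above is straightforward to compute from per-test coverage data. A minimal sketch, with invented data: the flattening tail of the curve is exactly the hard-to-hit remainder.

```python
def cumulative_coverage(test_coverage, model_size):
    """Cumulative percentage of a coverage model hit after each test.

    `test_coverage` is the per-test view: an ordered list of the sets of
    coverage points each simulated test hits; `model_size` is the total
    number of points in the coverage model.
    """
    covered, curve = set(), []
    for points in test_coverage:
        covered |= points
        curve.append(100 * len(covered) / model_size)
    return curve

# Three tests against a 10-point model; the third test is redundant,
# so the curve rises and then flattens.
curve = cumulative_coverage([{1, 2, 3}, {3, 4}, {4}], model_size=10)
# curve == [30.0, 40.0, 40.0]
```

Tests whose coverage sets add no new points produce flat segments in the curve; the proportion of such tests is the redundancy-rate notion discussed in the surrounding text.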
Some research concentrates specifically on hitting the remaining coverage holes after a high percentage of coverage has been achieved <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib70" title="">70</a>]</cite>. Authors define the redundancy rate as the proportion of instantiated-test inputs that do not increase coverage <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib45" title="">45</a>]</cite>. The redundancy rate usually increases as verification progresses, indicating that the efficiency of computational resources decreases when hitting the hard-to-reach coverage points.</p> </div> </section> </section> <section class="ltx_section" id="S4" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">4 </span>The Distribution of Research by Topic</h2> <div class="ltx_para" id="S4.p1"> <p class="ltx_p" id="S4.p1.1">The methodology outlined in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S2" title="2 Paper Collection and Methodology ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">2</span></a> produced a sample of the literature. In this section, we analyse this sample and make observations related to quantitative measures of the material to highlight trends and gaps. The earliest work found that applied machine learning to EDA verification was the use of evolutionary algorithms <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib93" title="">93</a>]</cite> in 1997. From 2001 to 2020, a steady interest in the topic is seen, and evolutionary algorithms are the most frequently used technique. In 2018, a shift occurred where research switched to using supervised techniques. Despite the work in 2007, it was not until 2020 that the use of reinforcement learning (RL) was seen.
A step change is seen in 2021 where the number of papers is more than double that seen in any previous year, and this increased interest has been sustained to 2024 (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S4.F4" title="Figure 4 ‣ 4 The Distribution of Research by Topic ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">4</span></a>).</p> </div> <figure class="ltx_figure" id="S4.F4"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="327" id="S4.F4.g1" src="x4.png" width="664"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S4.F4.2.1.1" style="font-size:90%;">Figure 4</span>: </span><span class="ltx_text" id="S4.F4.3.2" style="font-size:90%;">Number of papers by year and machine learning type.</span></figcaption> </figure> <div class="ltx_para" id="S4.p2"> <p class="ltx_p" id="S4.p2.1">In the work surveyed, the authors did not propose machine learning techniques specifically for EDA verification. Instead, adaptations of techniques developed in other fields were used. Therefore, these trends reflect interest in and use of machine learning more broadly. Reinforcement learning and unsupervised techniques are potentially under-represented in the sampled research. 
However, the wide availability of labelled data and the extra expertise required to set up RL explain why supervised techniques are prevalent in recent research efforts.</p> </div> <figure class="ltx_figure" id="S4.F5"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="309" id="S4.F5.g1" src="x5.png" width="498"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S4.F5.2.1.1" style="font-size:90%;">Figure 5</span>: </span><span class="ltx_text" id="S4.F5.3.2" style="font-size:90%;">Verification activities using machine learning within the sampled research material for the functional verification of digital designs using dynamic-based methods.</span></figcaption> </figure> <div class="ltx_para" id="S4.p3"> <p class="ltx_p" id="S4.p3.1">In the sampled literature, we found examples of four real-world dynamic-verification activities supported by machine learning techniques. These were bug hunting, coverage closure, test set optimisation and fault detection (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S4.F5" title="Figure 5 ‣ 4 The Distribution of Research by Topic ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">5</span></a>). In bug hunting, a verification engineer seeks to predict or uncover new bugs based on prior experience of where these bugs may occur. Coverage closure also uncovers bugs, but its aim is different. Coverage closure measures verification progress against pre-defined metrics. With respect to the terminology used in software testing <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib3" title="">3</a>]</cite>, bug hunting can be viewed as similar to experience-based testing and coverage closure as requirements-based testing. Fault detection aims to create inputs to a design that will trigger bugs.
Unlike coverage closure and bug hunting, the bugs in fault detection are pre-defined and the inputs are primarily intended for later use, for example to test post-silicon designs or for field testing. Coverage closure, as well as bug hunting and fault detection, can create a large number of tests. Test set optimisation is the activity of testing the same design behaviours but with fewer simulations. Test set optimisation is synonymous with regression testing, an industry practice where previously completed tests are re-run to verify design changes.</p> </div> <figure class="ltx_figure" id="S4.F6"> <div class="ltx_flex_figure"> <div class="ltx_flex_cell ltx_flex_size_2"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S4.F6.sf1"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="491" id="S4.F6.sf1.g1" src="x6.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S4.F6.sf1.3.1.1" style="font-size:90%;">(a)</span> </span><span class="ltx_text" id="S4.F6.sf1.4.2" style="font-size:90%;">Proportion of papers by activity within verification.
<span class="ltx_text ltx_phantom" id="S4.F6.sf1.4.2.1"><span style="visibility:hidden">xxxxx</span></span></span></figcaption> </figure> </div> <div class="ltx_flex_cell ltx_flex_size_2"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S4.F6.sf2"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="491" id="S4.F6.sf2.g1" src="x7.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S4.F6.sf2.2.1.1" style="font-size:90%;">(b)</span> </span><span class="ltx_text" id="S4.F6.sf2.3.2" style="font-size:90%;">Proportion of techniques by coverage closure activity.</span></figcaption> </figure> </div> </div> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S4.F6.2.1.1" style="font-size:90%;">Figure 6</span>: </span><span class="ltx_text" id="S4.F6.3.2" style="font-size:90%;">Left: Proportion of papers by verification activity. Right: Proportion of papers by coverage closure technique.</span></figcaption> </figure> <div class="ltx_para" id="S4.p4"> <p class="ltx_p" id="S4.p4.1">Of the four activities, the majority of papers apply machine learning to coverage closure (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S4.F6.sf1" title="In Figure 6 ‣ 4 The Distribution of Research by Topic ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">6(a)</span></a>). Achieving closure is a significant bottleneck in the electronic design process <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib4" title="">4</a>]</cite>, and the problem of coverage closure can also be framed as a mapping from input to output space for a black-box function, a framing compatible with a wide range of machine learning techniques.
Therefore, it is unsurprising that coverage closure has occupied a significant proportion of the research interest.</p> </div> <div class="ltx_para" id="S4.p5"> <p class="ltx_p" id="S4.p5.1">In the research material, the use of machine learning in coverage closure was predominantly an even split between test direction, where the ML model parameterises a (usually) constrained-random test generator; test selection, where machine learning selects stimuli from a pre-generated set; and test generation, where machine learning generates the stimuli directly (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S4.F6.sf2" title="In Figure 6 ‣ 4 The Distribution of Research by Topic ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">6(b)</span></a>). The amount of material for Test Generation relative to Direction and Selection is surprising. Constrained-random test generators are widely used in industry, which facilitates the incorporation of Test-Direction-based techniques into existing verification environments and workflows. Test Selection is also commonly used to create test sets for regression (periodic testing) and is often widely compatible with different workflows. However, Test Generation requires domain knowledge to generate legal inputs, which is potentially more challenging than Direction and Selection, and it is also potentially more difficult to integrate into an existing verification environment.</p> </div> <div class="ltx_para" id="S4.p6"> <p class="ltx_p" id="S4.p6.1">Only a small amount of work related to coverage analysis and collection was found in the sampled literature. The low representation of these topics may be due to unintentional bias in the sampling methodology. However, both activities are associated with large amounts of data in big design projects, something present in industry but more challenging to replicate in an academic research context.
This may explain the lack of academic research material in these areas.</p> </div> </section> <section class="ltx_section" id="S5" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">5 </span>Use Cases, Benefits and Desirable Qualities</h2> <div class="ltx_para" id="S5.p1"> <p class="ltx_p" id="S5.p1.1">Using machine learning in verification is applied research with real-world benefits to the electronic design industry. Progress relies on understanding where machine learning can be applied, what the measures of success are, and how it benefits the verification process. Research and industry have expressed these as high-level summaries. However, we found the research to be more granular. Authors used ML to address specific use cases and measured success against application-specific criteria. This section uses the sampled literature to collate these use cases and criteria as a platform for future work. The aim is to provide a qualitative summary of where machine learning is used, what benefits the research aimed to bring, and what the research community views as success in the context of machine learning for dynamic-based functional verification of electronic designs. We address quantitative measures (metrics) of success in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11" title="11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">11</span></a>.</p> </div> <section class="ltx_subsection" id="S5.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">5.1 </span>Applications for ML in Simulation-Based Testing</h3> <div class="ltx_para" id="S5.SS1.p1"> <p class="ltx_p" id="S5.SS1.p1.1">In this context, an application describes a scenario in which machine learning can be used during the verification of microelectronic devices.
It focuses on <em class="ltx_emph ltx_font_italic" id="S5.SS1.p1.1.1">what</em> the practitioner aims to achieve rather than <em class="ltx_emph ltx_font_italic" id="S5.SS1.p1.1.2">how</em> the machine learning can be applied. The taxonomy in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S4" title="4 The Distribution of Research by Topic ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">4</span></a> (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S4.F5" title="Figure 5 ‣ 4 The Distribution of Research by Topic ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">5</span></a>) is based on industry processes and provides a starting point for practitioners to identify relevant ML research to improve a particular aspect of verification. While this taxonomy is a quick way to access the literature, there is a range of applications for machine learning within a group such as test generation or selection.</p> </div> <div class="ltx_para" id="S5.SS1.p2"> <p class="ltx_p" id="S5.SS1.p2.1">Here, we examine the applications found in the sampled research, emphasising details likely to affect the machine learning solution, including whether inputs are sequential, how machine learning is integrated into a verification process, and what ML is used to predict. The applications in this section have been synthesised from the sampled literature, and similar applications have been combined only where the loss of detail is unlikely to affect the application of ML. 
Conversely, applications have been kept distinct where specific details are likely to affect the machine learning solution.</p> </div> <figure class="ltx_table" id="S5.T1"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S5.T1.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S5.T1.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.1.1.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.1.1.1.1"> <span class="ltx_p" id="S5.T1.2.1.1.1.1.1" style="width:199.2pt;"><span class="ltx_text ltx_font_bold" id="S5.T1.2.1.1.1.1.1.1">Application</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S5.T1.2.1.1.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.1.1.2.1"> <span class="ltx_p" id="S5.T1.2.1.1.2.1.1" style="width:42.7pt;"><span class="ltx_text ltx_font_bold" id="S5.T1.2.1.1.2.1.1.1">#Papers</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S5.T1.2.1.1.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.1.1.3.1"> <span class="ltx_p" id="S5.T1.2.1.1.3.1.1" style="width:113.8pt;"><span class="ltx_text ltx_font_bold" id="S5.T1.2.1.1.3.1.1.1">References</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S5.T1.2.2.1"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.2.1.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.2.1.1.1"> <span class="ltx_p" id="S5.T1.2.2.1.1.1.1" style="width:199.2pt;">Generate inputs to maximise coverage</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.2.1.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.2.1.2.1"> <span class="ltx_p" id="S5.T1.2.2.1.2.1.1" 
style="width:42.7pt;">20</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.2.1.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.2.1.3.1"> <span class="ltx_p" id="S5.T1.2.2.1.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib78" title="">78</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib28" title="">28</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib49" title="">49</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib98" title="">98</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib92" title="">92</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib25" title="">25</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib61" title="">61</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib107" title="">107</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib22" title="">22</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib12" title="">12</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib93" title="">93</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib38" title="">38</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib72" title="">72</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib75" title="">75</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib27" title="">27</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib103" title="">103</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib88" title="">88</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib86" title="">86</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib19" 
title="">19</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib82" title="">82</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T1.2.3.2"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.3.2.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.3.2.1.1"> <span class="ltx_p" id="S5.T1.2.3.2.1.1.1" style="width:199.2pt;">Predict input to hit an output</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.3.2.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.3.2.2.1"> <span class="ltx_p" id="S5.T1.2.3.2.2.1.1" style="width:42.7pt;">7</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.3.2.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.3.2.3.1"> <span class="ltx_p" id="S5.T1.2.3.2.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib7" title="">7</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib33" title="">33</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib105" title="">105</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib13" title="">13</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib4" title="">4</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib21" title="">21</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib6" title="">6</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T1.2.4.3"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.4.3.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.4.3.1.1"> <span class="ltx_p" id="S5.T1.2.4.3.1.1.1" style="width:199.2pt;">Predict output from an input</span> </span> </td> <td class="ltx_td 
ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.4.3.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.4.3.2.1"> <span class="ltx_p" id="S5.T1.2.4.3.2.1.1" style="width:42.7pt;">9</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.4.3.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.4.3.3.1"> <span class="ltx_p" id="S5.T1.2.4.3.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib55" title="">55</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib43" title="">43</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib41" title="">41</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib70" title="">70</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib45" title="">45</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib69" title="">69</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib18" title="">18</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib104" title="">104</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib37" title="">37</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T1.2.5.4"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.5.4.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.5.4.1.1"> <span class="ltx_p" id="S5.T1.2.5.4.1.1.1" style="width:199.2pt;">Measure similarity/novelty</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.5.4.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.5.4.2.1"> <span class="ltx_p" id="S5.T1.2.5.4.2.1.1" style="width:42.7pt;">5</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r 
ltx_border_t" id="S5.T1.2.5.4.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.5.4.3.1"> <span class="ltx_p" id="S5.T1.2.5.4.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib66" title="">66</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib115" title="">115</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib17" title="">17</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib116" title="">116</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib47" title="">47</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T1.2.6.5"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.6.5.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.6.5.1.1"> <span class="ltx_p" id="S5.T1.2.6.5.1.1.1" style="width:199.2pt;">Improve the quality of coverage</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.6.5.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.6.5.2.1"> <span class="ltx_p" id="S5.T1.2.6.5.2.1.1" style="width:42.7pt;">2</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.6.5.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.6.5.3.1"> <span class="ltx_p" id="S5.T1.2.6.5.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib83" title="">83</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib39" title="">39</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T1.2.7.6"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.7.6.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.7.6.1.1"> <span class="ltx_p" 
id="S5.T1.2.7.6.1.1.1" style="width:199.2pt;">Frequently hit the same coverage point / event</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.7.6.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.7.6.2.1"> <span class="ltx_p" id="S5.T1.2.7.6.2.1.1" style="width:42.7pt;">1</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.7.6.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.7.6.3.1"> <span class="ltx_p" id="S5.T1.2.7.6.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib73" title="">73</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T1.2.8.7"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.8.7.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.8.7.1.1"> <span class="ltx_p" id="S5.T1.2.8.7.1.1.1" style="width:199.2pt;">Improve the effectiveness of existing methods</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.8.7.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.8.7.2.1"> <span class="ltx_p" id="S5.T1.2.8.7.2.1.1" style="width:42.7pt;">4</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.8.7.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.8.7.3.1"> <span class="ltx_p" id="S5.T1.2.8.7.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib95" title="">95</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib14" title="">14</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib60" title="">60</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib52" title="">52</a>, <a 
class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib38" title="">38</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T1.2.9.8"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.9.8.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.9.8.1.1"> <span class="ltx_p" id="S5.T1.2.9.8.1.1.1" style="width:199.2pt;">Improve the efficiency of existing closure methods</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.9.8.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.9.8.2.1"> <span class="ltx_p" id="S5.T1.2.9.8.2.1.1" style="width:42.7pt;">6</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.9.8.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.9.8.3.1"> <span class="ltx_p" id="S5.T1.2.9.8.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib68" title="">68</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib40" title="">40</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib32" title="">32</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib84" title="">84</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib62" title="">62</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib79" title="">79</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T1.2.10.9"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.10.9.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.10.9.1.1"> <span class="ltx_p" id="S5.T1.2.10.9.1.1.1" style="width:199.2pt;">Improve the efficiency of regression testing</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" 
id="S5.T1.2.10.9.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.10.9.2.1"> <span class="ltx_p" id="S5.T1.2.10.9.2.1.1" style="width:42.7pt;">5</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.10.9.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.10.9.3.1"> <span class="ltx_p" id="S5.T1.2.10.9.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib44" title="">44</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib76" title="">76</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib113" title="">113</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib53" title="">53</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib57" title="">57</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T1.2.11.10"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.11.10.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.11.10.1.1"> <span class="ltx_p" id="S5.T1.2.11.10.1.1.1" style="width:199.2pt;">Generate tests to be reused at different levels of abstraction</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.11.10.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.11.10.2.1"> <span class="ltx_p" id="S5.T1.2.11.10.2.1.1" style="width:42.7pt;">2</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.11.10.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.11.10.3.1"> <span class="ltx_p" id="S5.T1.2.11.10.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib111" title="">111</a>, <a class="ltx_ref" 
href="https://arxiv.org/html/2503.11687v1#bib.bib48" title="">48</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T1.2.12.11"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.12.11.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.12.11.1.1"> <span class="ltx_p" id="S5.T1.2.12.11.1.1.1" style="width:199.2pt;">Expose a known bug</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.12.11.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.12.11.2.1"> <span class="ltx_p" id="S5.T1.2.12.11.2.1.1" style="width:42.7pt;">4</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T1.2.12.11.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.12.11.3.1"> <span class="ltx_p" id="S5.T1.2.12.11.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib97" title="">97</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib11" title="">11</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib9" title="">9</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib10" title="">10</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T1.2.13.12"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S5.T1.2.13.12.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.13.12.1.1"> <span class="ltx_p" id="S5.T1.2.13.12.1.1.1" style="width:199.2pt;">Find new bugs</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t" id="S5.T1.2.13.12.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.13.12.2.1"> <span class="ltx_p" id="S5.T1.2.13.12.2.1.1" style="width:42.7pt;">4</span> </span> </td> <td class="ltx_td ltx_align_justify 
ltx_align_top ltx_border_b ltx_border_r ltx_border_t" id="S5.T1.2.13.12.3"> <span class="ltx_inline-block ltx_align_top" id="S5.T1.2.13.12.3.1"> <span class="ltx_p" id="S5.T1.2.13.12.3.1.1" style="width:113.8pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib89" title="">89</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib46" title="">46</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib94" title="">94</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib101" title="">101</a>]</cite></span> </span> </td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S5.T1.3.1.1" style="font-size:90%;">Table 1</span>: </span><span class="ltx_text" id="S5.T1.4.2" style="font-size:90%;">The applications of machine learning in simulation-based verification of microelectronic devices.</span></figcaption> </figure> <div class="ltx_para" id="S5.SS1.p3"> <p class="ltx_p" id="S5.SS1.p3.1">Research that generated inputs to maximise coverage often used reinforcement learning or evolutionary algorithms to create constraints and instruction sequences aimed at increasing coverage. Alternatively, some research used machine learning to predict test inputs rather than generate them directly. Predicting an input to hit an output is associated with targeting known coverage holes, while predicting an output from an input approaches the problem in reverse: predicting the coverage point hit given a known input.</p> </div> <div class="ltx_para" id="S5.SS1.p4"> <p class="ltx_p" id="S5.SS1.p4.1">Machine learning was also used to measure the similarity or novelty between sets of tests. 
This was common in techniques that identified transaction sequences to simulate from a pre-generated set without coverage information.</p> </div> <div class="ltx_para" id="S5.SS1.p5"> <p class="ltx_p" id="S5.SS1.p5.1">Some applications aimed to improve the quality of coverage rather than just the total percentage of coverage points hit. For example, some techniques improve coverage evenness by selecting instruction sequences that target infrequently hit coverage points, while others enhance coverage quality by selecting tests that ensure coverage points are hit from different prior states of the Device Under Verification (DUV).</p> </div> <div class="ltx_para" id="S5.SS1.p6"> <p class="ltx_p" id="S5.SS1.p6.1">Although applications of machine learning often result in fewer simulation cycles, some are distinguished by not being standalone methods but by improving the efficiency of existing methods. Examples include using machine learning to group highly correlated coverage holes and predicting whether an initial state of a device will increase the probability of generating a successful test.</p> </div> <div class="ltx_para" id="S5.SS1.p7"> <p class="ltx_p" id="S5.SS1.p7.1">Applications that improve the efficiency of regression testing are run outside the testing loop and usually have access to information such as design changes and which tests previously detected errors. These applications reduce the number of tests that need to be simulated, and some optimise against resource budgets.</p> </div> <div class="ltx_para" id="S5.SS1.p8"> <p class="ltx_p" id="S5.SS1.p8.1">Some of the applications relate to improving the effectiveness of machine learning. 
Examples include proposing a communication infrastructure between a DUV and an RL agent <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib95" title="">95</a>]</cite>, automatically fine-tuning the parameters of a Bayesian Network model to produce better constraints for a test generator <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib14" title="">14</a>]</cite>, and automatically learning and embedding domain knowledge into a test generator <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib60" title="">60</a>]</cite>.</p> </div> <div class="ltx_para" id="S5.SS1.p9"> <p class="ltx_p" id="S5.SS1.p9.1">Bug detection can be split into two types of applications. In the first type, the bug is known, and machine learning is used to find a test sequence that causes the bug to be detected. In the second type, the bug is unknown, and machine learning is used to increase the probability that testing finds bugs.</p> </div> <div class="ltx_para" id="S5.SS1.p10"> <p class="ltx_p" id="S5.SS1.p10.1">Research that used machine learning to generate tests to be reused at a different level of abstraction is similar to generative or predictive applications that increase coverage. However, the aim is not to achieve high coverage per se but to create a test set for use later in development. For instance, using behavioural simulations written in high-level languages to create tests for RT-level or gate-level representations.</p> </div> <div class="ltx_para" id="S5.SS1.p11"> <p class="ltx_p" id="S5.SS1.p11.1">The applications in this section are high-level groupings. In practice, a practitioner needs to consider important details specific to their application, particularly regarding the input and output spaces of the machine learning solution. 
In the input space, details to consider include whether the inputs are sequences or singular, how closely related the inputs are to the DUV behaviour (e.g., parameters for a test generator are less closely related than instructions to the DUV), whether the inputs are from simulated or unsimulated tests, and how the inputs are generated (e.g., randomly, expert-written, or from historical information). In the output space, details include whether the ML model produces a test input (such as a constraint or instruction) or makes predictions about DUV behaviour.</p> </div> </section> <section class="ltx_subsection" id="S5.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">5.2 </span>Benefits of Using Machine Learning in Microelectronic Design Verification</h3> <div class="ltx_para" id="S5.SS2.p1"> <p class="ltx_p" id="S5.SS2.p1.1">It is common practice for applied research to describe the benefits of a proposed technique. In this section, we summarise the benefits cited in the research for the different machine learning applications. </p> </div> <div class="ltx_para" id="S5.SS2.p2"> <p class="ltx_p" id="S5.SS2.p2.1">We attempted to capture the views of the original authors as closely as possible. Since each work describes benefits in its own terms and with its own focus, overlap arises between categories. For example, where one piece of research cites a reduction in the number of simulations, another may cite hitting coverage holes faster or reducing verification time, all of which are related. We chose to keep this overlap to give a more accurate depiction of the literature. If research listed more than one benefit, then we listed each separately for the same reason. 
See Table <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S5.T2" title="Table 2 ‣ 5.2 Benefits of Using Machine Learning in Microelectronic Design Verification ‣ 5 Use Cases, Benefits and Desirable Qualities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">2</span></a>.</p> </div> <figure class="ltx_table" id="S5.T2"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S5.T2.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S5.T2.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.1.1.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.1.1.1.1"> <span class="ltx_p" id="S5.T2.2.1.1.1.1.1" style="width:241.8pt;"><span class="ltx_text ltx_font_bold" id="S5.T2.2.1.1.1.1.1.1">Description</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S5.T2.2.1.1.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.1.1.2.1"> <span class="ltx_p" id="S5.T2.2.1.1.2.1.1" style="width:156.5pt;"><span class="ltx_text ltx_font_bold" id="S5.T2.2.1.1.2.1.1.1">Examples</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S5.T2.2.2.1"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.2.1.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.2.1.1.1"> <span class="ltx_p" id="S5.T2.2.2.1.1.1.1" style="width:241.8pt;">Reducing the number of simulations and redundant tests</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T2.2.2.1.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.2.1.2.1"> <span class="ltx_p" id="S5.T2.2.2.1.2.1.1" style="width:156.5pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" 
href="https://arxiv.org/html/2503.11687v1#bib.bib21" title="">21</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib4" title="">4</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib55" title="">55</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib37" title="">37</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib43" title="">43</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib95" title="">95</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib49" title="">49</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib62" title="">62</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib98" title="">98</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib26" title="">26</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib32" title="">32</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib83" title="">83</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib89" title="">89</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib116" title="">116</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib45" title="">45</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib69" title="">69</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib92" title="">92</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib59" title="">59</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib66" title="">66</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T2.2.3.2"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.3.2.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.3.2.1.1"> <span class="ltx_p" id="S5.T2.2.3.2.1.1.1" style="width:241.8pt;">Decreasing 
simulation time</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T2.2.3.2.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.3.2.2.1"> <span class="ltx_p" id="S5.T2.2.3.2.2.1.1" style="width:156.5pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib23" title="">23</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T2.2.4.3"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.4.3.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.4.3.1.1"> <span class="ltx_p" id="S5.T2.2.4.3.1.1.1" style="width:241.8pt;">Reducing computational overhead for machine learning</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T2.2.4.3.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.4.3.2.1"> <span class="ltx_p" id="S5.T2.2.4.3.2.1.1" style="width:156.5pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib4" title="">4</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib43" title="">43</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib45" title="">45</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib50" title="">50</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib115" title="">115</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T2.2.5.4"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.5.4.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.5.4.1.1"> <span class="ltx_p" id="S5.T2.2.5.4.1.1.1" style="width:241.8pt;">Reducing time to reach coverage closure</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T2.2.5.4.2"> <span 
class="ltx_inline-block ltx_align_top" id="S5.T2.2.5.4.2.1"> <span class="ltx_p" id="S5.T2.2.5.4.2.1.1" style="width:156.5pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib41" title="">41</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib33" title="">33</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib32" title="">32</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib45" title="">45</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib92" title="">92</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T2.2.6.5"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.6.5.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.6.5.1.1"> <span class="ltx_p" id="S5.T2.2.6.5.1.1.1" style="width:241.8pt;">Reducing verification time</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T2.2.6.5.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.6.5.2.1"> <span class="ltx_p" id="S5.T2.2.6.5.2.1.1" style="width:156.5pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib37" title="">37</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib5" title="">5</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib26" title="">26</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib28" title="">28</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T2.2.7.6"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.7.6.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.7.6.1.1"> <span class="ltx_p" id="S5.T2.2.7.6.1.1.1" style="width:241.8pt;">Hitting coverage holes faster</span> </span> </td> <td class="ltx_td 
ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T2.2.7.6.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.7.6.2.1"> <span class="ltx_p" id="S5.T2.2.7.6.2.1.1" style="width:156.5pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib68" title="">68</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib33" title="">33</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib115" title="">115</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T2.2.8.7"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.8.7.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.8.7.1.1"> <span class="ltx_p" id="S5.T2.2.8.7.1.1.1" style="width:241.8pt;">Reducing expert resources</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T2.2.8.7.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.8.7.2.1"> <span class="ltx_p" id="S5.T2.2.8.7.2.1.1" style="width:156.5pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib89" title="">89</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib105" title="">105</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib69" title="">69</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib92" title="">92</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T2.2.9.8"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.9.8.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.9.8.1.1"> <span class="ltx_p" id="S5.T2.2.9.8.1.1.1" style="width:241.8pt;">Generalising to different verification environments</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T2.2.9.8.2"> 
<span class="ltx_inline-block ltx_align_top" id="S5.T2.2.9.8.2.1"> <span class="ltx_p" id="S5.T2.2.9.8.2.1.1" style="width:156.5pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib95" title="">95</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib72" title="">72</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib28" title="">28</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib45" title="">45</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib92" title="">92</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib78" title="">78</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib115" title="">115</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T2.2.10.9"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.10.9.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.10.9.1.1"> <span class="ltx_p" id="S5.T2.2.10.9.1.1.1" style="width:241.8pt;">Improving ML performance</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T2.2.10.9.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.10.9.2.1"> <span class="ltx_p" id="S5.T2.2.10.9.2.1.1" style="width:156.5pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib4" title="">4</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib78" title="">78</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T2.2.11.10"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.11.10.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.11.10.1.1"> <span class="ltx_p" id="S5.T2.2.11.10.1.1.1" style="width:241.8pt;">Using verification resources effectively</span> </span> 
</td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S5.T2.2.11.10.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.11.10.2.1"> <span class="ltx_p" id="S5.T2.2.11.10.2.1.1" style="width:156.5pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib62" title="">62</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib78" title="">78</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T2.2.12.11"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S5.T2.2.12.11.1"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.12.11.1.1"> <span class="ltx_p" id="S5.T2.2.12.11.1.1.1" style="width:241.8pt;">Adding features</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t" id="S5.T2.2.12.11.2"> <span class="ltx_inline-block ltx_align_top" id="S5.T2.2.12.11.2.1"> <span class="ltx_p" id="S5.T2.2.12.11.2.1.1" style="width:156.5pt;"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib32" title="">32</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib83" title="">83</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib59" title="">59</a>]</cite></span> </span> </td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S5.T2.3.1.1" style="font-size:90%;">Table 2</span>: </span><span class="ltx_text" id="S5.T2.4.2" style="font-size:90%;">Benefits cited by machine learning applications for microelectronic device verification in dynamic-based workflows.</span></figcaption> </figure> <div class="ltx_para" id="S5.SS2.p3"> <p class="ltx_p" id="S5.SS2.p3.1">In the context of coverage closure, redundant tests are simulated but do not add to coverage. 
More generally, a DUV is simulated for other reasons including generating training data and understanding behaviour. Since simulating a DUV has a cost in computational and time resources, a large proportion of the machine learning applications cite their benefit as reducing the number of times a DUV is simulated. This group also includes applications that aim to find the smallest number of transactions to reach an output state <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib95" title="">95</a>]</cite>. Applications that decrease simulation time aim to reduce the resource expense of a single simulation rather than the total number <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib23" title="">23</a>]</cite>.</p> </div> <div class="ltx_para" id="S5.SS2.p4"> <p class="ltx_p" id="S5.SS2.p4.1">Machine learning methods introduce compute cost. To mitigate this cost, applications cite benefits including reduced training time, less need for regular retraining, reuse of existing simulation data <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib50" title="">50</a>]</cite>, a low training cost relative to simulation time <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib115" title="">115</a>]</cite>, and scalable retraining as new training data is generated.</p> </div> <div class="ltx_para" id="S5.SS2.p5"> <p class="ltx_p" id="S5.SS2.p5.1">Most research on applying machine learning to coverage closure highlights the benefit of reducing the time to achieve coverage closure. This can be accomplished not only by decreasing the number of simulations but also by shortening the time needed to generate inputs and training data. 
The category of reducing verification time encompasses applications that report faster coverage closure without specifically mentioning fewer simulations.</p> </div> <div class="ltx_para" id="S5.SS2.p6"> <p class="ltx_p" id="S5.SS2.p6.1">Hitting coverage holes faster relates to techniques designed to cover hard-to-hit coverage points, including methods that create a direct mapping from a coverage point to the input required to reach it.</p> </div> <div class="ltx_para" id="S5.SS2.p7"> <p class="ltx_p" id="S5.SS2.p7.1">Reducing expert resources includes applications that reduce the need for human-written directives, domain knowledge to set up the technique, and human intervention during coverage closure.</p> </div> <div class="ltx_para" id="S5.SS2.p8"> <p class="ltx_p" id="S5.SS2.p8.1">This review finds that the research lacks an emphasis on generality. However, a selection of research cites compatibility with standard UVM environments and different test generators as a benefit. Approaches that treat the DUV as a black box also cite generality to different DUV designs.</p> </div> <div class="ltx_para" id="S5.SS2.p9"> <p class="ltx_p" id="S5.SS2.p9.1">Improving machine learning performance was rarely cited as a benefit, suggesting that research emphasises proposing new applications rather than improving existing methods.</p> </div> <div class="ltx_para" id="S5.SS2.p10"> <p class="ltx_p" id="S5.SS2.p10.1">A small selection of the sampled material cites a proposed technique’s ability to operate with constrained resources as a benefit, such as maximising coverage subject to a time constraint or testing with constrained computing and licenses.</p> </div> <div class="ltx_para" id="S5.SS2.p11"> <p class="ltx_p" id="S5.SS2.p11.1">Finally, research also cites the benefits of adding features not necessarily present in a verification workflow. For example, increasing the diversity of inputs to a DUV is one such feature. 
Another is decreasing the number of cases where a pseudo-random test generator fails to generate a sequence of outputs respecting its constraints. Increasing the frequency of a single event of interest in the DUV is also cited as a benefit.</p> </div> <div class="ltx_para" id="S5.SS2.p12"> <p class="ltx_p" id="S5.SS2.p12.1">The overarching benefit of using ML for verification in the sampled literature is reducing the time spent on verification. This is motivated by the frequently cited figure of 70% of design time spent on verification. However, the time saved by an application may not be realisable in all scenarios. A device that is quick to simulate relative to the time to generate inputs would not necessarily see the time savings from methods that generate many inputs and simulate only a few. To promote generality and the adoption of techniques, we encourage future research to be specific about the benefits associated with proposed applications. An approach taken by some authors to aid those adopting their work is to split time into training, simulation and generation. For practitioners comparing different techniques, we recommend assessing the benefits of each ML approach in the context of their own design and verification environment.</p> </div> </section> <section class="ltx_subsection" id="S5.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">5.3 </span>Qualities of a Test Bench</h3> <div class="ltx_para" id="S5.SS3.p1"> <p class="ltx_p" id="S5.SS3.p1.1">A test bench is central to a dynamic verification workflow. Machine learning was often motivated as a means of enhancing an element of a test bench, moving the state of the art closer to the “ideal”. 
Here, we summarise the qualities of a test bench that research aims to improve.</p> </div> <figure class="ltx_table" id="S5.T3"> <figcaption class="ltx_caption"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S5.T3.2.1.1" style="font-size:90%;">Table 3</span>: </span><span class="ltx_text" id="S5.T3.3.2" style="font-size:90%;">The qualities of an ideal test bench for test-based verification and related research papers. * denotes without significant rebuilding of the verification environment.</span></figcaption> <table class="ltx_tabular" id="S5.T3.4"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S5.T3.4.1.1"> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S5.T3.4.1.1.1" style="width:65.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.1.1.1.1"> <span class="ltx_p" id="S5.T3.4.1.1.1.1.1"><span class="ltx_text ltx_font_bold" id="S5.T3.4.1.1.1.1.1.1">Grouping</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S5.T3.4.1.1.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.1.1.2.1"> <span class="ltx_p" id="S5.T3.4.1.1.2.1.1"><span class="ltx_text ltx_font_bold" id="S5.T3.4.1.1.2.1.1.1">Criteria</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S5.T3.4.2.1"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S5.T3.4.2.1.1" style="width:65.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.2.1.1.1"> <span class="ltx_p" id="S5.T3.4.2.1.1.1.1">Quality</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.2.1.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.2.1.2.1"> <span class="ltx_p" id="S5.T3.4.2.1.2.1.1">The output is deterministic and repeatable (<cite class="ltx_cite 
ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib8" title="">8</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib112" title="">112</a>]</cite> as cited in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib101" title="">101</a>]</cite>).</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.3.2"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.3.2.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.3.2.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.3.2.2.1"> <span class="ltx_p" id="S5.T3.4.3.2.2.1.1">Only valid input sequences to the DUV are generated (<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib8" title="">8</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib112" title="">112</a>]</cite> as cited in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib101" title="">101</a>]</cite>), <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib32" title="">32</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.4.3"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.4.3.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.4.3.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.4.3.2.1"> <span class="ltx_p" id="S5.T3.4.4.3.2.1.1">Transactions stress the interfaces between modules where potential bugs are most likely to be found <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib101" title="">101</a>]</cite>.</span> 
</span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.5.4"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.5.4.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.5.4.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.5.4.2.1"> <span class="ltx_p" id="S5.T3.4.5.4.2.1.1">Controls are provided for how often each task is covered using different test directives.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.6.5"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.6.5.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.6.5.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.6.5.2.1"> <span class="ltx_p" id="S5.T3.4.6.5.2.1.1">Generated tests are based on the results of previous tests and the requirements of future testing.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.7.6"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.7.6.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.7.6.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.7.6.2.1"> <span class="ltx_p" id="S5.T3.4.7.6.2.1.1">The tester is capable of exhaustively covering the necessary testing scenarios measured via a coverage metric <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib81" title="">81</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.8.7"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.8.7.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.8.7.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" 
id="S5.T3.4.8.7.2.1"> <span class="ltx_p" id="S5.T3.4.8.7.2.1.1">The tester can assess whether an output is correct for a given test input <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib81" title="">81</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.9.8"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S5.T3.4.9.8.1" style="width:65.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.9.8.1.1"> <span class="ltx_p" id="S5.T3.4.9.8.1.1.1">Efficiency</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.9.8.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.9.8.2.1"> <span class="ltx_p" id="S5.T3.4.9.8.2.1.1">Interfaces seamlessly with the existing simulation environment <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib101" title="">101</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.10.9"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.10.9.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.10.9.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.10.9.2.1"> <span class="ltx_p" id="S5.T3.4.10.9.2.1.1">Tests are ordered to prioritise coverage efficiency at the start of testing and full coverage later in testing <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib7" title="">7</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.11.10"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.11.10.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" 
id="S5.T3.4.11.10.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.11.10.2.1"> <span class="ltx_p" id="S5.T3.4.11.10.2.1.1">Tests are selected and ordered to cover the task space efficiently <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib32" title="">32</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.12.11"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.12.11.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.12.11.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.12.11.2.1"> <span class="ltx_p" id="S5.T3.4.12.11.2.1.1">From the first test, each contributes to the verification effort.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.13.12"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.13.12.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.13.12.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.13.12.2.1"> <span class="ltx_p" id="S5.T3.4.13.12.2.1.1">The tester automatically finds which parameters (from the many in the verification environment) are needed to affect the output to hit a coverage point.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.14.13"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.14.13.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.14.13.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.14.13.2.1"> <span class="ltx_p" id="S5.T3.4.14.13.2.1.1">The number of resets required for the DUV over the course of testing is minimised <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" 
href="https://arxiv.org/html/2503.11687v1#bib.bib63" title="">63</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.15.14"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S5.T3.4.15.14.1" style="width:65.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.15.14.1.1"> <span class="ltx_p" id="S5.T3.4.15.14.1.1.1">Usability</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.15.14.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.15.14.2.1"> <span class="ltx_p" id="S5.T3.4.15.14.2.1.1">Engineers have a clear and effective way of biasing a test towards a specific coverage area (<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib8" title="">8</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib112" title="">112</a>]</cite> as cited in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib101" title="">101</a>]</cite>).</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.16.15"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.16.15.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.16.15.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.16.15.2.1"> <span class="ltx_p" id="S5.T3.4.16.15.2.1.1">Sets of similar inputs (e.g., instructions) are grouped with a shorthand notation (<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib8" title="">8</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib112" title="">112</a>]</cite> as cited in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib101" 
title="">101</a>]</cite>)</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.17.16"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.17.16.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.17.16.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.17.16.2.1"> <span class="ltx_p" id="S5.T3.4.17.16.2.1.1">Tests can be understood in a human-readable, simple test specification language <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib77" title="">77</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib100" title="">100</a>]</cite>, (<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib8" title="">8</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib112" title="">112</a>]</cite> as cited in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib101" title="">101</a>]</cite>)</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.18.17"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.18.17.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.18.17.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.18.17.2.1"> <span class="ltx_p" id="S5.T3.4.18.17.2.1.1">A user is able to configure the tests for either speed or coverage <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib9" title="">9</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.19.18"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S5.T3.4.19.18.1" style="width:65.0pt;"> <span class="ltx_inline-block 
ltx_align_top" id="S5.T3.4.19.18.1.1"> <span class="ltx_p" id="S5.T3.4.19.18.1.1.1">Functionality</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.19.18.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.19.18.2.1"> <span class="ltx_p" id="S5.T3.4.19.18.2.1.1">Capability to optimise existing sets of test programs <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib19" title="">19</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.20.19"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.20.19.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.20.19.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.20.19.2.1"> <span class="ltx_p" id="S5.T3.4.20.19.2.1.1">Generated tests are applicable at both the design stage and post-manufacture to find design faults (bugs) and manufacturing defects.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.21.20"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.21.20.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.21.20.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.21.20.2.1"> <span class="ltx_p" id="S5.T3.4.21.20.2.1.1">Pipelined processors can be tested where the behaviour is determined by the sequence of instructions and the interaction between their operands <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib20" title="">20</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.22.21"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.22.21.1" style="width:65.0pt;"></td> <td class="ltx_td 
ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.22.21.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.22.21.2.1"> <span class="ltx_p" id="S5.T3.4.22.21.2.1.1">The tester infers the relationship between the verification environment’s initial state and the generation success of all subsequent instructions in the test <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib32" title="">32</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.23.22"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.23.22.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.23.22.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.23.22.2.1"> <span class="ltx_p" id="S5.T3.4.23.22.2.1.1">Undefined (but necessary) coverage points are identified automatically <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib29" title="">29</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.24.23"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S5.T3.4.24.23.1" style="width:65.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.24.23.1.1"> <span class="ltx_p" id="S5.T3.4.24.23.1.1.1">Generalisable</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.24.23.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.24.23.2.1"> <span class="ltx_p" id="S5.T3.4.24.23.2.1.1">Minimal human effort and expertise is required to set up and use the test environment</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.25.24"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.25.24.1" style="width:65.0pt;"></td> 
<td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.25.24.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.25.24.2.1"> <span class="ltx_p" id="S5.T3.4.25.24.2.1.1">Flexible to verify different design elements* <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib93" title="">93</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.26.25"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.26.25.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.26.25.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.26.25.2.1"> <span class="ltx_p" id="S5.T3.4.26.25.2.1.1">Flexible to verify different coverage models* <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib93" title="">93</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.27.26"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.27.26.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.27.26.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.27.26.2.1"> <span class="ltx_p" id="S5.T3.4.27.26.2.1.1">Flexible to verify at different levels of abstraction* <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib111" title="">111</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.28.27"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.28.27.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.28.27.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" 
id="S5.T3.4.28.27.2.1"> <span class="ltx_p" id="S5.T3.4.28.27.2.1.1">Easy to verify multiple objectives or, at worst, to verify for different objectives* <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib12" title="">12</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.29.28"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S5.T3.4.29.28.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S5.T3.4.29.28.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.29.28.2.1"> <span class="ltx_p" id="S5.T3.4.29.28.2.1.1">Does not require design-specific information beyond that which is available in the design specification <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib100" title="">100</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib101" title="">101</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib19" title="">19</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S5.T3.4.30.29"> <td class="ltx_td ltx_align_middle ltx_border_b ltx_border_l ltx_border_r" id="S5.T3.4.30.29.1" style="width:65.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S5.T3.4.30.29.2" style="width:303.5pt;"> <span class="ltx_inline-block ltx_align_top" id="S5.T3.4.30.29.2.1"> <span class="ltx_p" id="S5.T3.4.30.29.2.1.1">Test vectors generated at high abstraction levels can be reused to test at lower levels of abstraction, reducing the cost and overall time for verification and testing <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib111" title="">111</a>]</cite>.</span> </span> </td> </tr> </tbody> </table> </figure> <div class="ltx_pagination ltx_role_newpage"></div>
</section> </section> <section class="ltx_section" id="S6" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">6 </span>Training and Learning Methods</h2> <div class="ltx_para" id="S6.p1"> <p class="ltx_p" id="S6.p1.1">Except for unsupervised techniques, all methods in the sampled literature required a process of learning to improve their performance. The type of learning fell into one of three categories: </p> <ul class="ltx_itemize" id="S6.I1"> <li class="ltx_item" id="S6.I1.ix1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S6.I1.ix1.p1"> <p class="ltx_p" id="S6.I1.ix1.p1.1"><span class="ltx_text ltx_font_bold" id="S6.I1.ix1.p1.1.1">Online</span>: the model learns while it is being used, in some instances influencing the collection of new data.</p> </div> </li> <li class="ltx_item" id="S6.I1.ix2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S6.I1.ix2.p1"> <p class="ltx_p" id="S6.I1.ix2.p1.1"><span class="ltx_text ltx_font_bold" id="S6.I1.ix2.p1.1.1">Offline</span>: all training data is available during model creation. 
The model is not retrained regularly.</p> </div> </li> <li class="ltx_item" id="S6.I1.ix3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S6.I1.ix3.p1"> <p class="ltx_p" id="S6.I1.ix3.p1.1"><span class="ltx_text ltx_font_bold" id="S6.I1.ix3.p1.1.1">Hybrid</span>: a small set of training data is used to initialise the model, and new information is regularly integrated during the model’s use.</p> </div> </li> </ul> </div> <figure class="ltx_figure" id="S6.F7"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="249" id="S6.F7.g1" src="x8.png" width="415"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S6.F7.2.1.1" style="font-size:90%;">Figure 7</span>: </span><span class="ltx_text" id="S6.F7.3.2" style="font-size:90%;">The number of papers by learning type.</span></figcaption> </figure> <div class="ltx_para" id="S6.p2"> <p class="ltx_p" id="S6.p2.1">Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S6.F7" title="Figure 7 ‣ 6 Training and Learning Methods ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">7</span></a> shows the distribution of work by learning type. Online learning is closely associated with reinforcement learning and genetic algorithms, which require feedback to guide their learning. These approaches trade weaker initial performance for the continuous integration of new information. Conversely, offline learning favours settings where large amounts of information are available, the cost of errors is high, or training times are long relative to the time to collect new information. Hybrid learning is a trade-off between online and offline learning. 
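To make the three regimes concrete, the following is a minimal sketch using a toy "model" (a running hit-rate estimator invented purely for illustration; the surveyed papers use far richer learners), showing only how the update schedules differ:

```python
# Toy illustration of offline, online, and hybrid learning schedules.
# The model and outcomes are invented; real applications would use a
# classifier or policy trained on coverage or simulation results.

class ToyModel:
    """Predicts a hit probability as the mean of observed outcomes."""
    def __init__(self):
        self.hits = 0
        self.seen = 0

    def update(self, outcomes):
        self.hits += sum(outcomes)
        self.seen += len(outcomes)

    def predict(self):
        return self.hits / self.seen if self.seen else 0.0

def offline(history):
    # Offline: all training data is available up front; no retraining.
    m = ToyModel()
    m.update(history)
    return m

def online(stream):
    # Online: the model is updated as each new outcome arrives,
    # while it is in use.
    m = ToyModel()
    for outcome in stream:
        m.update([outcome])
    return m

def hybrid(seed, stream, batch=2):
    # Hybrid: initialise from a small seed set, then integrate new
    # information in regular batches during use.
    m = ToyModel()
    m.update(seed)
    buf = []
    for outcome in stream:
        buf.append(outcome)
        if len(buf) == batch:
            m.update(buf)
            buf = []
    if buf:
        m.update(buf)
    return m
```

The three functions converge to the same estimate on the same data; what differs is when the data must be available and how often the model changes while deployed.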
One example compared online and offline learning, finding that online learning had lower overall accuracy, but that its lower retraining time made it more scalable than offline learning.</p> </div> <div class="ltx_para" id="S6.p3"> <p class="ltx_p" id="S6.p3.1">In the literature, many offline learning methods used training data obtained through random-based test generation <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib41" title="">41</a>]</cite>. Since random-based methods are common in microelectronic device verification, there is likely to be an abundance of this type of data. However, as with other fields of ML, learning requires a balanced, unbiased dataset. Randomly generated datasets for a DUV may not achieve this if, for example, some coverage points are hit substantially more frequently than others. Balancing datasets is discussed, but in general the sampled literature does not examine how information collection may affect machine learning performance.</p> </div> <div class="ltx_para" id="S6.p4"> <p class="ltx_p" id="S6.p4.1">Online or hybrid methods, retrained regularly in small batches, were commonly used when selecting constraints or DUV inputs based on novelty. Novelty is measured against past examples <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib115" title="">115</a>]</cite>. A novel example may not remain novel over time after more examples have been seen, necessitating regular retraining to keep the machine learning assessment relevant. This phenomenon is termed “concept drift” in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib45" title="">45</a>]</cite>, and the choice of when to deploy a model and how to retrain it can be important. 
Once deployed, the learner influences the future examples it will be retrained on, potentially preventing sufficient exploration of the DUV states that need to be verified and leading to performance that degrades over time.</p> </div> <div class="ltx_para" id="S6.p5"> <p class="ltx_p" id="S6.p5.1">Overall, online learning is the most common approach. In an industrial design and verification process, design changes and the continuous production of simulation data mean that all machine learning applications would benefit from integrating new information. The question is how and when to retrain, and the associated trade-off between accuracy and training time. This question is not commonly addressed in the literature. Research often frames verification of microelectronic devices as a “one-time” learning problem. A challenge for future research is to move towards solutions suitable for the iterative and rapidly changing designs seen in an industrial setting.</p> </div> </section> <section class="ltx_section" id="S7" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">7 </span>The Use of Machine Learning for Coverage Closure</h2> <div class="ltx_para" id="S7.p1"> <p class="ltx_p" id="S7.p1.1">In this section, we discuss coverage models and the application of machine learning techniques to coverage closure. Coverage closure is the activity of testing all points within a coverage model, and it was the most widely researched verification topic in the sampled literature. </p> </div> <section class="ltx_subsection" id="S7.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">7.1 </span>Coverage Models</h3> <div class="ltx_para" id="S7.SS1.p1"> <p class="ltx_p" id="S7.SS1.p1.1">Coverage models are derived from a DUV’s verification plan. Points in these models represent functionality of interest to the verification team. 
A typical project may contain hundreds of these models, which are used to track verification progress. Coverage closure is reached when the number of verified points (points whose functionality has been shown to be correct against the specification) passes a threshold. Achieving coverage closure is one of the conditions for a design going to production. Research frequently bases an objective function or classification on coverage models. For instance, a common formulation attempts to learn the relationship between the constraints applied to a random test generator and the coverage points hit.</p> </div> <figure class="ltx_figure" id="S7.F8"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="399" id="S7.F8.g1" src="x9.png" width="664"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S7.F8.2.1.1" style="font-size:90%;">Figure 8</span>: </span><span class="ltx_text" id="S7.F8.3.2" style="font-size:90%;">The number of examples found by coverage model. Where more than one coverage model is used in a single paper, these are listed separately.</span></figcaption> </figure> <div class="ltx_para" id="S7.SS1.p2"> <p class="ltx_p" id="S7.SS1.p2.1">Given the importance of coverage models in microelectronic device verification, it is unsurprising that approximately 90% of the sampled literature used a coverage model. Two classes of model were used (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.F8" title="Figure 8 ‣ 7.1 Coverage Models ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">8</span></a>). Structural models are derived automatically from the design and include code (statement, branch, and expression), FSM, and instruction coverage. Functional models are created from a DUV’s specification and include cross-product and assertion models. 
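As an illustration of the cross-product style of functional model, the sketch below builds a toy coverage model; the bin dimensions (an opcode and an operand class) are invented for the example and are not taken from any surveyed DUV:

```python
from itertools import product

# Toy cross-product functional coverage model: every combination of
# bins from each dimension is a coverage point. Bins here are invented.

class CrossProductCoverage:
    def __init__(self, *dimensions):
        # Each dimension is a list of bins; the model is their cross product.
        self.points = {p: 0 for p in product(*dimensions)}

    def sample(self, event):
        # Record one observed event (a tuple of bin values) during simulation.
        if event in self.points:
            self.points[event] += 1

    def percent_covered(self):
        # Coverage is reported as the percentage of points hit at least once.
        hit = sum(1 for count in self.points.values() if count > 0)
        return 100.0 * hit / len(self.points)

# A 2x3 model: opcode crossed with operand class -> 6 coverage points.
cov = CrossProductCoverage(["ADD", "SUB"], ["zero", "pos", "neg"])
cov.sample(("ADD", "zero"))
cov.sample(("SUB", "neg"))
cov.sample(("SUB", "neg"))   # repeated hits do not advance closure
print(round(cov.percent_covered(), 1))  # -> 33.3
```

Note that closure is measured on distinct points hit, so repeated hits of the same point, a common outcome of random generation, do not advance coverage.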
Functional models are commonly created by experts, although there is research into using machine learning (especially large language models) to assist in their creation. A proportion of work using functional models targeted the range of values for a signal, for instance the output of an ALU. These applications were categorised as “Signal Values”.</p> </div> <div class="ltx_para" id="S7.SS1.p3"> <p class="ltx_p" id="S7.SS1.p3.1">To preserve information, specialist types of “models” not traditionally associated with coverage have been included where the models are used for a similar purpose. Bug coverage models are used by works that seek to replicate or test previously identified bugs. Modular coverage “models” record the number of cycles a particular module within the DUV is active during simulation. Their use is seen in papers testing communication devices at the SoC level.</p> </div> <div class="ltx_para" id="S7.SS1.p4"> <p class="ltx_p" id="S7.SS1.p4.1">Three papers used more than one type of coverage model. Presenting results obtained with multiple types of coverage models helps to demonstrate that a technique generalises.</p> </div> <div class="ltx_para" id="S7.SS1.p5"> <p class="ltx_p" id="S7.SS1.p5.1">Several weaknesses were also present in the literature. Only two examples of ML applied in conjunction with assertion-based models were seen in the sampled literature <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib104" title="">104</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib48" title="">48</a>]</cite>. 
Assertion models are used in both dynamic and static (formal) methods, and it is surprising not to find them better represented.</p> </div> <div class="ltx_para" id="S7.SS1.p6"> <p class="ltx_p" id="S7.SS1.p6.1">Functional models were sometimes vaguely described, with 16 out of 40 models in this category described only as “Functional” without further qualification. A clear definition of functional models is important to assess the complexity of the learning problem. Some authors comment on the relatedness of a coverage model to a DUV’s input space, but most do not. Clear definitions of coverage models are also necessary to enable others to repeat a piece of work.</p> </div> <div class="ltx_para" id="S7.SS1.p7"> <p class="ltx_p" id="S7.SS1.p7.1">The number of points in a coverage model (its size) may also affect the choice of machine learning technique, the complexity of the problem, and the amount of training required. A large coverage model often results in a large output space for machine learning. However, research did not always give the size of the model. Approximately one third of the coverage models seen were of unspecified size. Instead, authors would more commonly describe the coverage as a percentage of the total number of coverage points hit at least once. Where the size of a model was given, the smallest model had one coverage point representing a FIFO buffer-full condition <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib103" title="">103</a>]</cite>, and the largest had 430000 coverage points for an unspecified industrial design <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib68" title="">68</a>]</cite>. 
The median size of the coverage models was 443 for functional models, considerably larger than the 100 for structural models (Table <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.T4" title="Table 4 ‣ 7.1 Coverage Models ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">4</span></a>).</p> </div> <figure class="ltx_table" id="S7.T4"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S7.T4.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S7.T4.2.1.1"> <th class="ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t" id="S7.T4.2.1.1.1"></th> <th class="ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T4.2.1.1.2">Functional</th> <th class="ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T4.2.1.1.3">Structural</th> <th class="ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T4.2.1.1.4">Other</th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S7.T4.2.2.1"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t" id="S7.T4.2.2.1.1">Median</th> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_t" id="S7.T4.2.2.1.2">443</td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_t" id="S7.T4.2.2.1.3">100</td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_t" id="S7.T4.2.2.1.4">33</td> </tr> <tr class="ltx_tr" id="S7.T4.2.3.2"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t" id="S7.T4.2.3.2.1">Maximum</th> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_t" id="S7.T4.2.3.2.2">430000</td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_t" id="S7.T4.2.3.2.3">2590</td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_t" id="S7.T4.2.3.2.4">10394</td> </tr> <tr 
class="ltx_tr" id="S7.T4.2.4.3"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S7.T4.2.4.3.1">Minimum</th> <td class="ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t" id="S7.T4.2.4.3.2">1</td> <td class="ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t" id="S7.T4.2.4.3.3">4</td> <td class="ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t" id="S7.T4.2.4.3.4">4</td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S7.T4.3.1.1" style="font-size:90%;">Table 4</span>: </span><span class="ltx_text" id="S7.T4.4.2" style="font-size:90%;">The number of points used in coverage models. Research that either did not use coverage models or did not specify their size is not shown. Where a single piece of research used different types of coverage model, the size of each is included as a separate value. Some research uses different models of the same type, for example when applying a technique to different designs. Where this occurs, the largest and smallest model size is included.</span></figcaption> </figure> <div class="ltx_para" id="S7.SS1.p8"> <p class="ltx_p" id="S7.SS1.p8.1">The size of a coverage model does not necessarily reflect the complexity of using it to train a machine-learning model. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib66" title="">66</a>]</cite>, two DUVs are used with different coverage models, and the authors state one model has coverage points that are harder to hit. Similarly, in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib43" title="">43</a>]</cite>, multiple models are used to optimise coverage closure at the test level. 
Two coverage models are subsequently carried forward to optimise at the transaction level because these models were harder to hit. This discussion about the complexity of the learning problem was rarely seen in the literature but is valuable to anyone applying the technique to a new application.</p> </div> <div class="ltx_para" id="S7.SS1.p9"> <p class="ltx_p" id="S7.SS1.p9.1">Demonstrating the generality of a technique requires applying it to different coverage models. It is unlikely a practitioner would use exactly the same DUV or coverage models as the research. There are many examples of research that compare different machine learning approaches <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib42" title="">42</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib37" title="">37</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib13" title="">13</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib38" title="">38</a>]</cite>, but very few compare a method’s performance against different coverage models.</p> </div> <div class="ltx_para" id="S7.SS1.p10"> <p class="ltx_p" id="S7.SS1.p10.1">Overall, coverage models were commonly used in the sampled literature. 
While some examples exist of research specifying the type of model, its size, and the complexity of relating a DUV’s input space to a coverage model, this information is often incomplete or not provided.</p> </div> </section> <section class="ltx_subsection" id="S7.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">7.2 </span>The ML-Enhanced Verification Environment</h3> <div class="ltx_para" id="S7.SS2.p1"> <p class="ltx_p" id="S7.SS2.p1.1">Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.F9" title="Figure 9 ‣ 7.2 The ML-Enhanced Verification Environment ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">9</span></a> shows a simplified view of a simulation-based test flow used in ML research for coverage closure. It modifies the traditional approach (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S3.F3" title="Figure 3 ‣ 3.4 The Verification Environment ‣ 3 Background ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">3</span></a>) by replacing a human expert with an ML-based test controller. Generated tests are sent to a simulator and golden reference model. The simulation drives the DUV to different states and produces outputs that are compared with the reference from the golden model. During the test, the DUV’s states are monitored to record coverage. 
Research can be differentiated based on the construction and operation of the ML-based test controller.</p> </div> <figure class="ltx_figure" id="S7.F9"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="259" id="S7.F9.g1" src="x10.png" width="788"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S7.F9.2.1.1" style="font-size:90%;">Figure 9</span>: </span><span class="ltx_text" id="S7.F9.3.2" style="font-size:90%;">A simplified simulation-based test flow for functional verification using machine learning. Typically, the ML controller supplies tests to the testbench; these can include machine-readable instructions, parameters for a pseudo-random test generator, or bit-level stimuli. It is common for ML applications to be written in a different environment and to require an interface to connect with the testbench.</span></figcaption> </figure> <div class="ltx_para" id="S7.SS2.p2"> <p class="ltx_p" id="S7.SS2.p2.1">Using a random test generator is viewed in the literature as the most basic form of testing, and it is often the baseline against which authors measure the success of proposed improvements. Instructions are generated randomly, usually with the constraint that only legal instruction sequences are generated. Given sufficient time, this will in principle cover all states of the DUV and therefore the coverage model, but no guarantees are made on the wall-time taken or the distribution of the coverage points hit. If random generation is at one end of a spectrum, then in principle there exists an optimal method at the other end that can find the minimum number of instructions necessary to cover the coverage model with an even distribution across the coverage points. All the literature in this section proposes a form of test controller that falls somewhere on this spectrum. 
Each aims to beat random and come as close as possible to the optimal method.</p> </div> </section> <section class="ltx_subsection" id="S7.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">7.3 </span>The Application of ML to Coverage Closure</h3> <div class="ltx_para" id="S7.SS3.p1"> <p class="ltx_p" id="S7.SS3.p1.1">The applications of ML to coverage closure seen in the literature can be classified based on how the ML-based test controller supplies tests to a testbench (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.F9" title="Figure 9 ‣ 7.2 The ML-Enhanced Verification Environment ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">9</span></a>). In test generation, an ML model is used to generate input sequences to a DUV. For test direction, an ML model is used to enhance the choice of parameters used in an existing generation method (usually constrained random test generation). In test selection, machine learning is used to choose input sequences from a pre-generated set.</p> </div> <div class="ltx_para" id="S7.SS3.p2"> <p class="ltx_p" id="S7.SS3.p2.1">ML has been applied to three different input spaces: parameter, test, and DUV inputs. The parameter space contains the constraints, weights, and hyper-parameters that change the operation of the generation method. The test space comprises sequences of inputs, and these can be written at different levels of abstraction, including as opcodes or bit patterns. Finally, the DUV input space contains the inputs driven into the DUV and is (usually) represented at the bit level. 
There are, however, examples of behavioural models driving the DUV with signals at a higher level of abstraction <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib49" title="">49</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib22" title="">22</a>]</cite>.</p> </div> <div class="ltx_para" id="S7.SS3.p3"> <p class="ltx_p" id="S7.SS3.p3.1">In the following sections, the use of ML is discussed by its type, where it is applied in the conventional test flow, the input space, and the abstraction level.</p> </div> </section> <section class="ltx_subsection" id="S7.SS4"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">7.4 </span>Test Generation</h3> <div class="ltx_para" id="S7.SS4.p1"> <p class="ltx_p" id="S7.SS4.p1.1">In test generation, a machine learning model creates the inputs that drive a DUV to different states without using an intermediate mechanism such as a constrained random generator. 
We found evolutionary and reinforcement learning techniques used to build these test generators.</p> </div> <figure class="ltx_table" id="S7.T5"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S7.T5.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S7.T5.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S7.T5.2.1.1.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.1.1.1.1"> <span class="ltx_p" id="S7.T5.2.1.1.1.1.1"><span class="ltx_text ltx_font_bold" id="S7.T5.2.1.1.1.1.1.1">Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T5.2.1.1.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.1.1.2.1"> <span class="ltx_p" id="S7.T5.2.1.1.2.1.1"><span class="ltx_text ltx_font_bold" id="S7.T5.2.1.1.2.1.1.1">Sub-Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T5.2.1.1.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.1.1.3.1"> <span class="ltx_p" id="S7.T5.2.1.1.3.1.1"><span class="ltx_text ltx_font_bold" id="S7.T5.2.1.1.3.1.1.1">References</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S7.T5.2.2.1"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S7.T5.2.2.1.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.2.1.1.1"> <span class="ltx_p" id="S7.T5.2.2.1.1.1.1">Reinforcement Learning</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T5.2.2.1.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.2.1.2.1"> <span class="ltx_p" id="S7.T5.2.2.1.2.1.1">-</span> </span> </td> <td 
class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T5.2.2.1.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.2.1.3.1"> <span class="ltx_p" id="S7.T5.2.2.1.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib95" title="">95</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib49" title="">49</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib75" title="">75</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib38" title="">38</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib72" title="">72</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T5.2.3.2"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S7.T5.2.3.2.1" rowspan="2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.3.2.1.1"> <span class="ltx_p" id="S7.T5.2.3.2.1.1.1">Evolutionary Algorithm</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T5.2.3.2.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.3.2.2.1"> <span class="ltx_p" id="S7.T5.2.3.2.2.1.1">Genetic Algorithm</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T5.2.3.2.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.3.2.3.1"> <span class="ltx_p" id="S7.T5.2.3.2.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib93" title="">93</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" 
href="https://arxiv.org/html/2503.11687v1#bib.bib22" title="">22</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib61" title="">61</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib107" title="">107</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T5.2.4.3"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T5.2.4.3.1" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.4.3.1.1"> <span class="ltx_p" id="S7.T5.2.4.3.1.1.1">Genetic Program</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T5.2.4.3.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.4.3.2.1"> <span class="ltx_p" id="S7.T5.2.4.3.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib111" title="">111</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib19" title="">19</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib27" title="">27</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T5.2.5.4"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S7.T5.2.5.4.1" rowspan="2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.5.4.1.1"> <span class="ltx_p" id="S7.T5.2.5.4.1.1.1">Supervised</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T5.2.5.4.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.5.4.2.1"> <span class="ltx_p" id="S7.T5.2.5.4.2.1.1">NN* (deep)</span> </span> </td> <td class="ltx_td ltx_align_justify 
ltx_align_middle ltx_border_r ltx_border_t" id="S7.T5.2.5.4.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.5.4.3.1"> <span class="ltx_p" id="S7.T5.2.5.4.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib4" title="">4</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T5.2.6.5"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T5.2.6.5.1" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.6.5.1.1"> <span class="ltx_p" id="S7.T5.2.6.5.1.1.1">NN* (linear)</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T5.2.6.5.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.6.5.2.1"> <span class="ltx_p" id="S7.T5.2.6.5.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib21" title="">21</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T5.2.7.6"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S7.T5.2.7.6.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.7.6.1.1"> <span class="ltx_p" id="S7.T5.2.7.6.1.1.1">Combination</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S7.T5.2.7.6.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.7.6.2.1"> <span class="ltx_p" id="S7.T5.2.7.6.2.1.1">-</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S7.T5.2.7.6.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T5.2.7.6.3.1"> <span class="ltx_p" id="S7.T5.2.7.6.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" 
href="https://arxiv.org/html/2503.11687v1#bib.bib103" title="">103</a>]</cite></span> </span> </td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S7.T5.3.1.1" style="font-size:90%;">Table 5</span>: </span><span class="ltx_text" id="S7.T5.4.2" style="font-size:90%;">Use of machine learning in test generation. *Neural Network.</span></figcaption> </figure> <section class="ltx_subsubsection" id="S7.SS4.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">7.4.1 </span>Machine Learning Types</h4> <div class="ltx_para" id="S7.SS4.SSS1.p1"> <p class="ltx_p" id="S7.SS4.SSS1.p1.1"><span class="ltx_text ltx_font_bold" id="S7.SS4.SSS1.p1.1.1">Evolutionary Algorithms:</span> Examples of evolutionary algorithms used for test generation are seen from <cite class="ltx_cite ltx_citemacro_citet">Smith et al. [<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib93" title="">93</a>]</cite>’s early work in 1997 to the present day <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib107" title="">107</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib72" title="">72</a>]</cite>.</p> </div> <div class="ltx_para" id="S7.SS4.SSS1.p2"> <p class="ltx_p" id="S7.SS4.SSS1.p2.1">Techniques in this area are primarily differentiated by their use of either a Genetic Algorithm (GA) or Genetic Programming (GP) approach. The difference between the two is subtle in the case of test generation. Both GP and GA generate instructions, but GP evolves a program with structures like loops and branches, while GA evolves an array of instructions. 
For instance, GP approaches reviewed used directed graphs to represent the flow of a program <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib27" title="">27</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib19" title="">19</a>]</cite>, or a sequence of inputs to a DUV <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib111" title="">111</a>]</cite>. In works using a GA, the encoding used was an array representing a sequence of inputs over time <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib93" title="">93</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib22" title="">22</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib61" title="">61</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib107" title="">107</a>]</cite>.</p> </div> <div class="ltx_para" id="S7.SS4.SSS1.p3"> <p class="ltx_p" id="S7.SS4.SSS1.p3.1">The inputs forming a genome in GA approaches range from low-level bit representations of opcodes, addresses and immediate values in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib107" title="">107</a>]</cite>, to high-level representations such as assembly code instructions to verify a Cache Access Arbitration Mechanism in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib93" title="">93</a>]</cite>, or a set of booleans indicating whether a message is sent between two addresses during Network-on-chip communication <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib61" title="">61</a>]</cite>.</p> </div> <div class="ltx_para" id="S7.SS4.SSS1.p4"> <p class="ltx_p" id="S7.SS4.SSS1.p4.1">In addition to the use of
GP or GA and the encoding, the choice of algorithm was also a distinguishing feature. We found limited variety in works using GPs (two out of the three used the <math alttext="\mu" class="ltx_Math" display="inline" id="S7.SS4.SSS1.p4.1.m1.1"><semantics id="S7.SS4.SSS1.p4.1.m1.1a"><mi id="S7.SS4.SSS1.p4.1.m1.1.1" xref="S7.SS4.SSS1.p4.1.m1.1.1.cmml">μ</mi><annotation-xml encoding="MathML-Content" id="S7.SS4.SSS1.p4.1.m1.1b"><ci id="S7.SS4.SSS1.p4.1.m1.1.1.cmml" xref="S7.SS4.SSS1.p4.1.m1.1.1">𝜇</ci></annotation-xml><annotation encoding="application/x-tex" id="S7.SS4.SSS1.p4.1.m1.1c">\mu</annotation><annotation encoding="application/x-llamapun" id="S7.SS4.SSS1.p4.1.m1.1d">italic_μ</annotation></semantics></math>GP approach described in <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib19" title="">19</a>]</cite>). Greater variety in the algorithm was seen amongst works using GAs, specifically in how the selection and mutation operators were defined. This reflects the need to maintain a legal encoding of genomes following an operator, a requirement that varied by application. All work using EAs for test generation used a fitness function based on coverage to guide learning. However, these works differed in the complexity of this calculation.
Some fitness functions were based on simple measures such as statement coverage <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib111" title="">111</a>]</cite>, whereas others, predominantly used for fault detection (Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S9" title="9 The Use of Machine Learning for Fault Detection ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">9</span></a>), used multi-objective measures combining structural coverage models of State, Branch, Code, Expression and Toggle <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib82" title="">82</a>]</cite>.</p> </div> <div class="ltx_para" id="S7.SS4.SSS1.p5"> <p class="ltx_p" id="S7.SS4.SSS1.p5.1">Despite work in this area being differentiated by the choice of algorithm, how the test sequence is encoded, and the fitness function used, we found no discussion of the effect of each on the learning and its relative success. For instance, encoding as a graph enables the algorithm to operate on loops and jumps, whereas genome representations are limited to operating on a sequential array. Encoding as bit-level inputs gives a high level of control, but the algorithm operates at a low level of semantic meaning.
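As an illustrative sketch (not drawn from any of the surveyed works), the GA formulation described above can be made concrete: a genome is a flat array of instructions, mutation is restricted to legal opcodes, and fitness is the number of coverage points a run of the test exercises. The instruction set, mutation rate, and toy coverage model below are all hypothetical stand-ins for a real simulator.

```python
import random

# Hypothetical instruction set; a real flow would use the DUV's ISA.
OPCODES = ["ADD", "SUB", "LOAD", "STORE", "NOP"]

def random_genome(length=8):
    """A GA genome: a flat array of instructions applied in sequence."""
    return [random.choice(OPCODES) for _ in range(length)]

def mutate(genome, rate=0.2):
    """Point mutation that, by construction, only produces legal opcodes."""
    return [random.choice(OPCODES) if random.random() < rate else g
            for g in genome]

def coverage_fitness(genome, hit_fn):
    """Fitness = number of coverage points hit; `hit_fn` stands in for a
    simulator returning the set of coverage points a test run exercises."""
    return len(hit_fn(genome))

def toy_simulator(genome):
    # Toy coverage model: one coverage point per distinct opcode pair.
    return {(a, b) for a, b in zip(genome, genome[1:])}

# Truncation selection plus mutation, guided purely by coverage.
population = [random_genome() for _ in range(20)]
for _ in range(30):  # generations
    population.sort(key=lambda g: coverage_fitness(g, toy_simulator),
                    reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = max(population, key=lambda g: coverage_fitness(g, toy_simulator))
```

A graph-based GP encoding would replace the flat array with a program graph and define crossover and mutation over nodes and edges, which is where the representational trade-off discussed above arises.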
These decisions about how the evolutionary algorithms are applied are likely to affect learning, but there is currently insufficient research to determine their effect on coverage closure.</p> </div> <div class="ltx_para" id="S7.SS4.SSS1.p6"> <p class="ltx_p" id="S7.SS4.SSS1.p6.1"><span class="ltx_text ltx_font_bold" id="S7.SS4.SSS1.p6.1.1">Reinforcement Learning:</span> The use of reinforcement learning (RL) to generate input sequences to a DUV has only been studied recently compared to evolutionary approaches (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S4.F4" title="Figure 4 ‣ 4 The Distribution of Research by Topic ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">4</span></a>). RL has been demonstrated on small designs for functional coverage, including an ALU <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib95" title="">95</a>]</cite> and an LZW Compression Encoder <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib75" title="">75</a>]</cite>, and later works have applied RL for (structural) code coverage of a RISC-V design <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib38" title="">38</a>]</cite>. We found no examples of research which used reinforcement learning for <em class="ltx_emph ltx_font_italic" id="S7.SS4.SSS1.p6.1.2">functional</em> verification of a complex device at the level of a microprocessor. We view RL as the least proven of all the techniques surveyed for coverage closure.</p> </div> <div class="ltx_para" id="S7.SS4.SSS1.p7"> <p class="ltx_p" id="S7.SS4.SSS1.p7.1">RL has, in principle, properties that make it suited to test generation <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib49" title="">49</a>]</cite>.
It acts to maximise total cumulative reward over a sequence of state-action pairs. Unlike supervised learning, it changes the state of the DUV, receiving immediate feedback which is used to inform its next action, and potentially avoiding sequences which do not add to coverage. Unlike evolutionary learning, it acts sequentially, enabling greater control over the input sequence. Also, digital designs are inherently compatible with Markov Decision Processes, a representation used by modern RL techniques. A digital design can be represented as an FSM, where a state is completely described by the DUV’s current combinational and memory elements. Therefore, digital designs satisfy the Markovian property <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib90" title="">90</a>]</cite>.</p> </div> <div class="ltx_para" id="S7.SS4.SSS1.p8"> <p class="ltx_p" id="S7.SS4.SSS1.p8.1">One of the challenges for RL is that coverage may be insufficient information to guide learning. For example, a rare event or coverage hole may generate rewards too sparse to guide the learning in a reasonable time <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib90" title="">90</a>]</cite>. For instance, to trigger rare assertions in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib72" title="">72</a>]</cite>, one of the actions circumvented the reward signal and chose a test pattern found through static analysis of the code to target RTL code lines. A solution for white-box testing is to build the reward signal with additional monitors placed on internal signals, similar to that used in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib100" title="">100</a>]</cite>.
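A minimal sketch of this MDP formulation, assuming an entirely hypothetical 4-state FSM as the DUV, illustrates both the coverage-based reward and why it becomes sparse: tabular Q-learning receives a reward only the first time a (state, input) coverage point is exercised, so once most points are covered the reward signal falls to zero. The next-state function, hyper-parameters, and coverage model are invented for illustration.

```python
import random
from collections import defaultdict

# Toy DUV: a 4-state FSM with a 1-bit input per cycle.
# Each (state, input) pair is treated as a coverage point.
ACTIONS = [0, 1]

def duv_step(state, action):
    """Hypothetical next-state function standing in for a simulated DUV."""
    return (2 * state + action) % 4

def run_episode(q, covered, epsilon=0.2, alpha=0.5, gamma=0.9, length=8):
    """One episode of epsilon-greedy tabular Q-learning over the DUV."""
    state = 0  # reset the DUV at the start of each episode
    for _ in range(length):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)          # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        nxt = duv_step(state, action)
        point = (state, action)                      # coverage point exercised
        reward = 1.0 if point not in covered else 0.0  # sparse once covered
        covered.add(point)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
        state = nxt

q = defaultdict(float)
covered = set()
for _ in range(50):
    run_episode(q, covered)
```

The per-cycle loop here is exactly the two-way flow of information that makes interfacing RL with HDL test benches demanding: `duv_step` would in practice be a call into a running simulation rather than a Python function.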
There are also RL approaches for sparse-reward environments, but these were not seen in the sampled literature.</p> </div> <div class="ltx_para" id="S7.SS4.SSS1.p9"> <p class="ltx_p" id="S7.SS4.SSS1.p9.1">There is also the challenge of a large action space. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib72" title="">72</a>]</cite> the solution was a set of actions which mutated the previous test pattern, limiting the action space but potentially encumbering the agent if the current test pattern (and its variants) places the agent on a poor trajectory. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib75" title="">75</a>]</cite>, the DUV was limited to 4-bit inputs to create an action space of 16.</p> </div> </section> <section class="ltx_subsubsection" id="S7.SS4.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">7.4.2 </span>Benefits of ML for Generative Techniques</h4> <div class="ltx_para" id="S7.SS4.SSS2.p1"> <p class="ltx_p" id="S7.SS4.SSS2.p1.1">More generally, we see benefits to using ML for test generation. The benefit of generative techniques is greater control over the test sequences than directive or selection techniques offer. This control may enable results closer to an ideal coverage curve. We found no examples in the literature which investigated this point. However, the literature suggests ML-enhanced test generation is best applied to edge cases where this level of control is beneficial.
Instances include functional coverage of an LZW encoder, where input sequences are very specific <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib75" title="">75</a>]</cite> and random generation hit only 28 out of 136 coverage points, and the work in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib90" title="">90</a>]</cite>, where RL was rewarded for finding rare events in an RLE compressor. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib49" title="">49</a>]</cite>, RL was only beneficial in complex signalling scenarios where constrained-random struggled to achieve coverage. In this respect, the use of generative techniques is currently similar to formal techniques: greater complexity and resource requirements are balanced by their capability for coverage in edge cases. Unlike formal methods, RL and EAs in principle scale to complex designs, evidenced in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib72" title="">72</a>]</cite>, where an RL approach found inputs to break security assertions after an industrial-grade formal tool failed due to the complexity of the design. More research is needed to understand the trade-offs.</p> </div> </section> <section class="ltx_subsubsection" id="S7.SS4.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">7.4.3 </span>Challenges for Using ML to Generate Tests</h4> <div class="ltx_para" id="S7.SS4.SSS3.p1"> <p class="ltx_p" id="S7.SS4.SSS3.p1.1">A challenge when using ML with test generation is interfacing the machine learning elements with test benches written in languages that do not natively support ML functions.
In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib107" title="">107</a>]</cite>, a GA is wrapped into a UVM framework to create a standardised architecture usable with different DUVs. The challenge of interfacing ML techniques with existing test benches for test generation is more acute for RL because most authors used it to generate instructions in the loop with the DUV, thus requiring feedback after each instruction is processed. Authors using RL techniques interfaced models written in Python with test benches written in hardware description languages such as SystemVerilog, and each presented architectures to enable a two-way flow of information on a per-cycle basis. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib90" title="">90</a>]</cite>, an open-source library is presented that allows RL-driven verification written in Python to interface with an existing SystemVerilog test bench.</p> </div> <div class="ltx_para" id="S7.SS4.SSS3.p2"> <p class="ltx_p" id="S7.SS4.SSS3.p2.1">A further obstacle to using ML for test generation outside specific cases is the requirement to generate legal test sequences. Sequence legality is domain knowledge, and there is a question of how the ML acquires it. In the works using RL, authors defined the problem or the actions the ML could take such that any input sequence it generated was legal. Examples include applying RL to an ALU <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib95" title="">95</a>]</cite> or an LZW compression block <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib75" title="">75</a>]</cite>, each of which accepts any combination of inputs.
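The legality-by-construction approach described above can be sketched as follows. The device, its operations, and the legality rule (here, a hypothetical constraint that a STORE may only follow a LOAD) are all invented for illustration; the point is that the agent samples only from a legal action set, so no generated sequence can be illegal.

```python
import random

# Hypothetical legality rule: a STORE is only legal immediately after a
# LOAD. Encoding this rule into the action space bakes the domain
# knowledge in, so the learner never has to discover it.
OPS = ["LOAD", "STORE", "ADD", "NOP"]

def legal_actions(prev):
    """The action set offered to the agent, given the previous operation."""
    return OPS if prev == "LOAD" else [op for op in OPS if op != "STORE"]

def generate_sequence(length=10):
    """Any sequence produced this way is legal by construction."""
    seq, prev = [], "NOP"
    for _ in range(length):
        op = random.choice(legal_actions(prev))  # agent picks a legal op
        seq.append(op)
        prev = op
    return seq

def is_legal(seq):
    """Independent checker for the same rule, used to validate sequences."""
    prev = "NOP"
    for op in seq:
        if op == "STORE" and prev != "LOAD":
            return False
        prev = op
    return True
```

A trained policy would replace `random.choice` with its own action selection, but the restriction to `legal_actions(prev)` is what guarantees legality.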
We did not find an RL example where learning the domain knowledge for legal sequences was included in the learning.</p> </div> <div class="ltx_para" id="S7.SS4.SSS3.p3"> <p class="ltx_p" id="S7.SS4.SSS3.p3.1">In EA approaches, the requirements for legal instructions were encoded in the genetic operators. For instance, in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib107" title="">107</a>]</cite>, constraints are placed on the location of cross-over operations to prevent invalid instructions from being created.</p> </div> <div class="ltx_para" id="S7.SS4.SSS3.p4"> <p class="ltx_p" id="S7.SS4.SSS3.p4.1">Restricting the problem to IP blocks that accept any input, while providing a valuable proof of concept, can produce toy problems often not relevant to industry <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib49" title="">49</a>]</cite>. These block-level toy problems are at a level of complexity where a static analysis tool such as a SAT solver would be able to verify them formally with an assurance of fully exploring the coverage space, a guarantee that stochastic machine learning techniques cannot give.</p> </div> <div class="ltx_para" id="S7.SS4.SSS3.p5"> <p class="ltx_p" id="S7.SS4.SSS3.p5.1">There are further challenges to using ML techniques for test generation in the EDA industry beyond demonstrating their capability to learn legal instruction sequences. Firstly, there is a resource cost to learning domain knowledge which may already be known to the verification engineers. Secondly, all examples in this review generated instructions to accelerate coverage closure for a specific version of a device. This means re-training may be required for each device change or when starting a new project. Thirdly, all the techniques required parameterisation by an expert.
For instance, in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib49" title="">49</a>]</cite>, hyper-parameters including the episode length, number of episodes, neural network depth and layer width were manually chosen. Fourthly, the techniques researched for test generation are guided by reward or fitness functions. Some authors regard these “objective functions” as a way for verification engineers to focus the generation on areas of interest <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib90" title="">90</a>]</cite>, but most of the material surveyed based these functions on coverage. Using coverage models reduces the need for additional expertise beyond the existing verification process. However, some coverage models with hard-to-hit coverage points may give sparse feedback to the learner, and it is unclear whether generic reward/fitness functions would work in all cases. Arguably, if a verification engineer is required to create fitness/reward functions to target the model’s output, then the use of ML shifts the design effort from writing test cases to setting up ML models. This is undesirable unless a substantial time saving can be shown. Finally, the high cost of setting up current ML test generation techniques is especially evident at low coverage percentages. Both EA and RL techniques use stochasticity to explore the solution space (particularly at the start of training) and have been shown to perform no better than random stimulus <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib49" title="">49</a>]</cite> until coverage increases.
There is an argument to be made that the stochastic exploration of these methods at low coverage may be of higher quality (from a learning perspective) than random generation, resulting in a better solution overall than the techniques explored in the next section, which use a randomly generated dataset with supervised methods. However, no research was found investigating this point.</p> </div> <div class="ltx_para" id="S7.SS4.SSS3.p6"> <p class="ltx_p" id="S7.SS4.SSS3.p6.1">Cumulatively, these reasons lead to a lack of generality, a need for specialist expertise, and high training costs, creating a barrier to industrial adoption. Applying ML to test direction instead of generation is a popular alternative which lowers the learning cost by removing the need to learn how to generate legal test sequences.</p> </div> </section> </section> <section class="ltx_subsection" id="S7.SS5"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">7.5 </span>Test Direction</h3> <div class="ltx_para" id="S7.SS5.p1"> <p class="ltx_p" id="S7.SS5.p1.1">We use Test Direction to describe applications that use ML to direct a piece of apparatus to generate test sequences.</p> </div> <div class="ltx_para" id="S7.SS5.p2"> <p class="ltx_p" id="S7.SS5.p2.1">Within Test Direction, we found works that either targeted single hard-to-hit coverage holes or attempted to direct coverage to efficiently hit many coverage points. Bayesian Networks were an example of the former: after training, they could be interrogated to find the constraints most likely to hit a coverage point.
GAs which structure the learning by changing the fitness function are an example of the latter: the learning drives the random-test generator to hit different coverage points.</p> </div> <div class="ltx_para" id="S7.SS5.p3"> <p class="ltx_p" id="S7.SS5.p3.1">Compared to Test Generation, a wide variety of <em class="ltx_emph ltx_font_italic" id="S7.SS5.p3.1.1">supervised</em> machine learning techniques have been applied to Test Direction, including Bayesian Networks <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib33" title="">33</a>]</cite>, Inductive Logic Programming <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib105" title="">105</a>]</cite>, and Neural Network based techniques <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib28" title="">28</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib39" title="">39</a>]</cite>.</p> </div> <figure class="ltx_table" id="S7.T6"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S7.T6.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S7.T6.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S7.T6.2.1.1.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.1.1.1.1"> <span class="ltx_p" id="S7.T6.2.1.1.1.1.1"><span class="ltx_text ltx_font_bold" id="S7.T6.2.1.1.1.1.1.1">Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T6.2.1.1.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.1.1.2.1"> <span class="ltx_p" id="S7.T6.2.1.1.2.1.1"><span class="ltx_text ltx_font_bold" id="S7.T6.2.1.1.2.1.1.1">Sub-Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify
ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T6.2.1.1.3" style="width:140.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.1.1.3.1"> <span class="ltx_p" id="S7.T6.2.1.1.3.1.1"><span class="ltx_text ltx_font_bold" id="S7.T6.2.1.1.3.1.1.1">References</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S7.T6.2.2.1"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S7.T6.2.2.1.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.2.1.1.1"> <span class="ltx_p" id="S7.T6.2.2.1.1.1.1">Evolutionary Algorithm</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T6.2.2.1.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.2.1.2.1"> <span class="ltx_p" id="S7.T6.2.2.1.2.1.1">Genetic Algorithm</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T6.2.2.1.3" style="width:140.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.2.1.3.1"> <span class="ltx_p" id="S7.T6.2.2.1.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib12" title="">12</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib86" title="">86</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib48" title="">48</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib88" title="">88</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib92" title="">92</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T6.2.3.2"> <td class="ltx_td ltx_align_justify 
ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S7.T6.2.3.2.1" rowspan="4" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.3.2.1.1"> <span class="ltx_p" id="S7.T6.2.3.2.1.1.1">Supervised</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T6.2.3.2.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.3.2.2.1"> <span class="ltx_p" id="S7.T6.2.3.2.2.1.1">NN* (recurrent)</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T6.2.3.2.3" style="width:140.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.3.2.3.1"> <span class="ltx_p" id="S7.T6.2.3.2.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib28" title="">28</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T6.2.4.3"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T6.2.4.3.1" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.4.3.1.1"> <span class="ltx_p" id="S7.T6.2.4.3.1.1.1">Bayesian Network</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T6.2.4.3.2" style="width:140.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.4.3.2.1"> <span class="ltx_p" id="S7.T6.2.4.3.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib33" title="">33</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib14" title="">14</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib32" title="">32</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib7" 
title="">7</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T6.2.5.4"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T6.2.5.4.1" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.5.4.1.1"> <span class="ltx_p" id="S7.T6.2.5.4.1.1.1">Inductive Logic Program</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T6.2.5.4.2" style="width:140.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.5.4.2.1"> <span class="ltx_p" id="S7.T6.2.5.4.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib105" title="">105</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T6.2.6.5"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T6.2.6.5.1" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.6.5.1.1"> <span class="ltx_p" id="S7.T6.2.6.5.1.1.1">Comparison</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T6.2.6.5.2" style="width:140.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.6.5.2.1"> <span class="ltx_p" id="S7.T6.2.6.5.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib13" title="">13</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib6" title="">6</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T6.2.7.6"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S7.T6.2.7.6.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.7.6.1.1"> <span class="ltx_p" id="S7.T6.2.7.6.1.1.1">Reinforcement Learning</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r 
ltx_border_t" id="S7.T6.2.7.6.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.7.6.2.1"> <span class="ltx_p" id="S7.T6.2.7.6.2.1.1">-</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T6.2.7.6.3" style="width:140.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.7.6.3.1"> <span class="ltx_p" id="S7.T6.2.7.6.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib39" title="">39</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib78" title="">78</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib52" title="">52</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib98" title="">98</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T6.2.8.7"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S7.T6.2.8.7.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.8.7.1.1"> <span class="ltx_p" id="S7.T6.2.8.7.1.1.1">Mixed</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S7.T6.2.8.7.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.8.7.2.1"> <span class="ltx_p" id="S7.T6.2.8.7.2.1.1">-</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S7.T6.2.8.7.3" style="width:140.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T6.2.8.7.3.1"> <span class="ltx_p" id="S7.T6.2.8.7.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib62" title="">62</a>]</cite></span> 
</span> </td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S7.T6.3.1.1" style="font-size:90%;">Table 6</span>: </span><span class="ltx_text" id="S7.T6.4.2" style="font-size:90%;">Use of machine learning in test direction. *Neural Network.</span></figcaption> </figure> <section class="ltx_subsubsection" id="S7.SS5.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">7.5.1 </span>Machine Learning Types</h4> <div class="ltx_para" id="S7.SS5.SSS1.p1"> <p class="ltx_p" id="S7.SS5.SSS1.p1.1"><span class="ltx_text ltx_font_bold" id="S7.SS5.SSS1.p1.1.1">Bayesian networks (BN)</span> were a popular technique for test direction in the 2000s (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S4.F4" title="Figure 4 ‣ 4 The Distribution of Research by Topic ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">4</span></a>), with early work in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib33" title="">33</a>]</cite> and <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib13" title="">13</a>]</cite>. A BN is a graphical representation of the joint probability distribution for a set of random variables. When used for test direction, these variables are parameters for a test generator (inputs), elements of a coverage model (outputs), and hidden nodes, for which there is no physical evidence but which (by expert knowledge) link inputs to outputs. An edge represents a relationship between two random variables. The network topology represents the domain knowledge of how test generator parameters relate to coverage.
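A minimal sketch illustrates how such a network is interrogated once trained. It assumes a hypothetical network with a single generator-parameter node and a single coverage node; the directive names and conditional probability table values are invented, and a real network would have many nodes and hidden variables between them.

```python
# Minimal Bayesian-network sketch: one test-generator directive P and
# one coverage point C, linked by a conditional probability table (CPT)
# that would be learned from simulation runs. All numbers hypothetical.

# Prior P(P=p) over the generator directives.
prior = {"short_ops": 0.5, "long_ops": 0.5}

# CPT P(C=hit | P=p), learned from (directive, coverage) training data.
cpt = {"short_ops": 0.05, "long_ops": 0.40}

def posterior(directive):
    """P(P=directive | C=hit), by Bayes' rule with enumeration over P."""
    evidence = sum(prior[p] * cpt[p] for p in prior)  # P(C=hit)
    return prior[directive] * cpt[directive] / evidence

# Interrogate the network: which directive most probably hits the point?
best_directive = max(prior, key=posterior)
```

The interrogation step is what gives the approach its predictive power: rather than sampling directives blindly, the verification flow queries the trained network for the directive most likely to reach an uncovered point.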
A fully connected network represents no domain knowledge <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib33" title="">33</a>]</cite>. Typically, authors divide the creation of a BN into three steps: define the topology, use a training set to learn the parameters of each node’s probability distribution, and interrogate the network to find the most probable inputs that would lead to a given coverage point. The ability to directly predict the constraints needed to hit a coverage point gives the approach its power. However, a frequent criticism was the expertise and time required by a human to create the network topology, thereby limiting scalability and generality. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib32" title="">32</a>]</cite>, these criticisms were addressed using techniques which automatically created the Bayesian network, with later work by <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib7" title="">7</a>]</cite> to further assist their creation. Although Bayesian reasoning remains popular, the work on automatically created Bayesian networks appears to have stopped after <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib7" title="">7</a>]</cite>, with research interest switching to other techniques, including decision trees and neural networks. No research was found exploring how the inference power of BNs compares to these other approaches, particularly for coverage points where there is no evidence (coverage holes).</p> </div> <div class="ltx_para" id="S7.SS5.SSS1.p2"> <p class="ltx_p" id="S7.SS5.SSS1.p2.1"><span class="ltx_text ltx_font_bold" id="S7.SS5.SSS1.p2.1.1">Genetic algorithms</span> were also a popular technique for test direction prior to the rise of interest in supervised techniques.
In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib12" title="">12</a>]</cite>, a GA is used to target buffer utilisations for a PowerPC architecture, in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib86" title="">86</a>]</cite> simplified models of a CPU and router are used, and in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib92" title="">92</a>]</cite>, an ALU and a Codix-RISC CPU are verified against structural and functional coverage models. The integration of a GA into a UVM architecture is discussed in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib92" title="">92</a>]</cite>.</p> </div> <div class="ltx_para" id="S7.SS5.SSS1.p3"> <p class="ltx_p" id="S7.SS5.SSS1.p3.1">In test direction, a test generator produces many test programs and corresponding coverage hits for a single instance of input parameters (directives). We see authors structuring the learning by shaping the GA’s fitness function to achieve coverage across multiple objectives. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib12" title="">12</a>]</cite>, the directives to hit two objectives were evolved by first basing fitness on an 80:20 split between the two objectives, then changing to 50:50 once the first objective was met.
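The 80:20-then-50:50 staging described for [12] can be sketched as a weighting function; the normalised coverage inputs and the switch condition are assumptions of this sketch, not the paper's implementation:

```python
def staged_fitness(cov_a, cov_b, objective_a_met):
    """Blend two coverage objectives (each normalised to [0, 1]).

    Fitness starts as an 80:20 blend favouring objective A and switches
    to 50:50 once A has been met, mirroring the staging reported in [12].
    """
    w_a, w_b = (0.5, 0.5) if objective_a_met else (0.8, 0.2)
    return w_a * cov_a + w_b * cov_b

# Early on, selection pressure is dominated by objective A...
early = staged_fitness(0.5, 0.5, objective_a_met=False)  # 0.8*0.5 + 0.2*0.5 = 0.5
# ...once A is met, both objectives are weighted equally.
late = staged_fitness(1.0, 0.5, objective_a_met=True)    # 0.5*1.0 + 0.5*0.5 = 0.75
```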
In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib86" title="">86</a>]</cite>, a fitness function with four stages was used, initially targeting every coverage point at least once and then moving on to target a minimum level of coverage.</p> </div> <div class="ltx_para" id="S7.SS5.SSS1.p4"> <p class="ltx_p" id="S7.SS5.SSS1.p4.1">Authors derive the chromosome encoding directly from the parameter space of the generator, and because each generator has a different input space, there is no single “right” encoding to use. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib86" title="">86</a>]</cite>, the encoding is based on splitting the probability distributions for each directive into cells and evolving the weight and width of each cell. The importance of how generator directives are encoded into a genome was also highlighted in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib12" title="">12</a>]</cite>, which found that encoding the biases into a structure improved the maximum buffer utilisation compared to a random organisation. This raises a difficulty in using GAs for test direction. Encoding affects coverage closure performance, but each test generator has a different parameter space. Therefore, a practitioner would need to find a good encoding for each test generator used. Whether a universally “good” encoding exists for constrained-random test generators remains an open question.</p> </div> <div class="ltx_para" id="S7.SS5.SSS1.p5"> <p class="ltx_p" id="S7.SS5.SSS1.p5.1">Despite the success of GAs, the large number of parameters and the expertise required to set up a GA remain a blocker for their use in industry for test direction.
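To make the encoding question concrete, the cell-based genome of [86] might be sketched as follows; the cell count, mutation scheme, and decoding are assumptions of this sketch rather than the published encoding:

```python
import random

# Each directive's probability distribution is split into cells, and the
# GA evolves a (weight, width) pair per cell, loosely following the cell
# idea attributed to [86].
random.seed(0)  # reproducible example

def random_genome(n_cells=4):
    return [(random.random(), random.random()) for _ in range(n_cells)]

def mutate(genome, rate=0.25):
    """Perturb each cell's weight and width with probability `rate`."""
    out = []
    for weight, width in genome:
        if random.random() < rate:
            weight = min(1.0, max(0.0, weight + random.gauss(0, 0.1)))
            width = min(1.0, max(0.001, width + random.gauss(0, 0.1)))
        out.append((weight, width))
    return out

def decode(genome):
    """Normalise cell weights into sampling probabilities for the generator;
    each width would set the span of directive values its cell covers."""
    total = sum(w for w, _ in genome) or 1.0
    return [w / total for w, _ in genome]

genome = mutate(random_genome())
probs = decode(genome)
assert abs(sum(probs) - 1.0) < 1e-9   # a valid distribution over cells
```

A different generator would need its own genome layout, which is exactly the per-generator encoding effort discussed above.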
We did not find work which researched the generality of their solutions, suggesting that the evolutionary process would need to be rerun for each coverage model and design change.</p> </div> <div class="ltx_para" id="S7.SS5.SSS1.p6"> <p class="ltx_p" id="S7.SS5.SSS1.p6.1"><span class="ltx_text ltx_font_bold" id="S7.SS5.SSS1.p6.1.1">Supervised Learning</span> Supervised techniques are trained on labelled data. The majority of work generates the training set from the results of random test generation. We also see authors proposing approaches to reduce the size of the training set, such as the implicit filtering used in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib39" title="">39</a>]</cite>.</p> </div> <div class="ltx_para" id="S7.SS5.SSS1.p7"> <p class="ltx_p" id="S7.SS5.SSS1.p7.1">The abundance of labelled data during dynamic-based verification and the need to lessen the expertise and setup cost seen in other types of ML may explain the recent research interest in supervised techniques for test direction (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S4.F4" title="Figure 4 ‣ 4 The Distribution of Research by Topic ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">4</span></a>). Different base functions and techniques have been researched, including neural networks, Bayesian networks and logic programs (Table <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.T6" title="Table 6 ‣ 7.5 Test Direction ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">6</span></a>).
Applications seen range from block-level IP, such as a comparator <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib6" title="">6</a>]</cite>, to complex devices including a PowerPC pipeline <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib13" title="">13</a>]</cite>, a RISC core <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib28" title="">28</a>]</cite> and a five-stage pipelined superscalar DLX processor <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib105" title="">105</a>]</cite>.</p> </div> <div class="ltx_para" id="S7.SS5.SSS1.p8"> <p class="ltx_p" id="S7.SS5.SSS1.p8.1">One approach seen is to train a model to predict the mapping between constraints and coverage points <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib6" title="">6</a>]</cite>. Another is to predict the number of times to repeat a randomly generated test <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib62" title="">62</a>]</cite>, and in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib32" title="">32</a>]</cite>, to relate the initial state of the DUV to generation success. The variety of techniques and applications seen in the research suggests the flexibility of supervised techniques and their suitability for test direction.
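A minimal version of the constraints-to-coverage mapping idea in [6] can be sketched with a decision stump trained on past runs and then used to rank unseen constraint settings for one target coverage point; the constraint names, data, and model choice are invented for illustration:

```python
# Fit a depth-1 decision stump on past runs and use it generatively:
# enumerate candidate constraint settings and keep those predicted to
# hit the target coverage point.

def fit_stump(rows):
    """rows: list of ((f0, f1, ...), hit) with binary features and labels.
    Returns (feature_index, value_predictions) minimising training errors."""
    n_feat = len(rows[0][0])
    best = None
    for i in range(n_feat):
        pred = {}
        for v in (0, 1):
            labels = [hit for feats, hit in rows if feats[i] == v]
            pred[v] = int(sum(labels) * 2 >= len(labels)) if labels else 0
        errors = sum(pred[feats[i]] != hit for feats, hit in rows)
        if best is None or errors < best[0]:
            best = (errors, i, pred)
    return best[1], best[2]

# Past runs: (enable_burst, enable_irq) -> did the "fifo_full" point get hit?
history = [((1, 0), 1), ((1, 1), 1), ((0, 0), 0), ((0, 1), 0), ((1, 0), 1)]
feat, pred = fit_stump(history)

candidates = [(0, 0), (0, 1), (1, 0), (1, 1)]
chosen = [c for c in candidates if pred[c[feat]] == 1]
print(feat, chosen)  # 0 [(1, 0), (1, 1)] -- enable_burst drives the point
```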
However, all the supervised techniques found required parameterisation (as with GAs and Bayesian networks), so despite the recent interest, the issues of generalisation and the expertise needed to set up the learning remain.</p> </div> <div class="ltx_para" id="S7.SS5.SSS1.p9"> <p class="ltx_p" id="S7.SS5.SSS1.p9.1">Each test simulated on the DUV creates new labelled data relating the input parameter space to the coverage points hit. As discussed in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib39" title="">39</a>]</cite>, supervised methods make trade-offs based on how the generated data is used. The first is the quantity of training data to acquire before using the ML model. A model trained on a small training set is likely to produce poor predictions at first but improves coverage faster by reducing the probability of covering the same points. The trade-off is more time spent retraining the model as new data is generated. The second trade-off is the order in which coverage points are targeted. Targeting easier-to-hit coverage points at the start can achieve faster progress during early verification. Hard-to-hit points are then targeted later, when more labelled data is available and the ML model is more mature. Alternatively, targeting hard-to-hit points during early verification (assuming they are known) may fail but still advance coverage by hitting easier-to-hit points.</p> </div> <div class="ltx_para" id="S7.SS5.SSS1.p10"> <p class="ltx_p" id="S7.SS5.SSS1.p10.1"><span class="ltx_text ltx_font_bold" id="S7.SS5.SSS1.p10.1.1">Reinforcement learning</span> has had success in learning sequences of actions for complex tasks where its actions are high level compared to the process they interact with; well-known examples include AlphaGo and agents that play Atari games. It is perhaps surprising that we found few examples of its use in Test Direction. One reason for this is the complexity of setting up the learner.
Notably, each example of using RL for Test Direction used a different algorithm and framing of the problem. The problem of choosing constraints is framed as a Gaussian process multi-armed bandit problem in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib52" title="">52</a>]</cite>, and an upper-confidence bound approach is used to balance exploration against exploitation when selecting which constraints to pick next. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib78" title="">78</a>]</cite>, the problem is framed as a hidden Markov model and a Rainbow RL agent is used. Finally, in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib98" title="">98</a>]</cite>, the actions are constraints, cover points are states, and an actor-critic approach is used to train the RL agent.</p> </div> <div class="ltx_para" id="S7.SS5.SSS1.p11"> <p class="ltx_p" id="S7.SS5.SSS1.p11.1">Reinforcement Learning (RL) has the potential to outperform other methods. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib78" title="">78</a>]</cite>, an RL algorithm achieved slightly higher coverage in less time than an existing Genetic Algorithm (GA) method. However, this is the only example found in the sampled literature that compares RL to other machine learning methods.
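At a high level, the bandit framing of [52] can be illustrated with a plain UCB rule over a handful of candidate constraint sets. This sketch is not the paper's Gaussian-process formulation: the three arms, the Bernoulli "did this run add coverage?" reward, and all numbers are invented:

```python
import math, random

def ucb_select(counts, rewards, t, c=0.5):
    """Pick the arm maximising empirical mean reward + exploration bonus."""
    for arm, n in enumerate(counts):
        if n == 0:                      # play every arm once first
            return arm
    return max(range(len(counts)),
               key=lambda a: rewards[a] / counts[a]
               + math.sqrt(c * math.log(t) / counts[a]))

random.seed(1)
hit_prob = [0.2, 0.7, 0.4]              # unknown productivity of each constraint set
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 1001):                # each step: pick a constraint set, simulate
    arm = ucb_select(counts, rewards, t)
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < hit_prob[arm] else 0.0

# The agent should spend most of its simulation budget on the best arm.
print(counts)
```

The exploration bonus is what keeps the agent occasionally re-trying apparently unproductive constraint sets, which matters when a coverage model changes mid-project.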
It remains an open question whether the additional cost and complexity of setting up an RL agent are justified by its potentially better performance for test direction.</p> </div> </section> <section class="ltx_subsubsection" id="S7.SS5.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">7.5.2 </span>Benefits of using ML to Direct Testing</h4> <div class="ltx_para" id="S7.SS5.SSS2.p1"> <p class="ltx_p" id="S7.SS5.SSS2.p1.1">In test direction, the ML does not generate instructions. This can circumvent many of the difficulties associated with generating legal instructions. It also enables domain knowledge to be embedded in the test generator, thereby reducing the size of the learning task. For instance, embedding knowledge about which sequences of instructions and addresses create edge cases makes the generated tests more likely to uncover errors in a design. The reliance on a separate generator also makes it easier to interface the machine learning with existing test benches, with communication between the two occurring at the level of constraints that would otherwise have been written by an expert.</p> </div> </section> <section class="ltx_subsubsection" id="S7.SS5.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">7.5.3 </span>Challenges for using ML to Direct Testing</h4> <div class="ltx_para" id="S7.SS5.SSS3.p1"> <p class="ltx_p" id="S7.SS5.SSS3.p1.1">Machine learning faces a number of challenges when used to direct a test generator. Firstly, feedback on the coverage achieved by a set of test directives occurs after the generated test sequence has been simulated on the DUV.
Compared to Test Generation, feedback is slower, and the learner must wait until the end of the complete test sequence to see the results.</p> </div> <div class="ltx_para" id="S7.SS5.SSS3.p2"> <p class="ltx_p" id="S7.SS5.SSS3.p2.1">Secondly, industrial generators used for constrained random testing may contain thousands of parameters. A learner must identify those needed to cover a particular model.</p> </div> <div class="ltx_para" id="S7.SS5.SSS3.p3"> <p class="ltx_p" id="S7.SS5.SSS3.p3.1">Thirdly, a general challenge for using ML for coverage closure is to infer the inputs needed to cover holes, which is especially problematic for supervised techniques. A hole, by definition, does not appear in a training set. Unlike GAs and RL agents, the supervised techniques seen here are not “active learners” in the sense that they cannot explore a space, instead relying on the training examples presented to them. Therefore, supervised techniques place greater reliance on the inference power of the model. There is limited research which compares different model types. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib13" title="">13</a>]</cite>, the performance of a Bayesian network is compared to a tree classification technique. However, no research was found that directly investigated how the choice of model affected inference power for unseen examples.
Some authors attempt to combat this deficiency by shaping the training set.</p> </div> <div class="ltx_para" id="S7.SS5.SSS3.p5"> <p class="ltx_p" id="S7.SS5.SSS3.p5.1">Lastly, in the case of constrained random test generators, the output produced for a set of parameters is random. This stochasticity creates a probabilistic relationship between the input and coverage spaces. Therefore, the machine learning technique is required to learn from probabilistic relationships. These relationships are often more challenging to learn and require more training examples.</p> </div> </section> </section> <section class="ltx_subsection" id="S7.SS6"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">7.6 </span>Test Selection</h3> <div class="ltx_para" id="S7.SS6.p1"> <p class="ltx_p" id="S7.SS6.p1.1">In constrained random approaches, some tests do not add to coverage and can be considered redundant. Test selection is a technique which aims to reduce simulation time by filtering out redundant tests before they are run on the DUV. The research in this section does this <em class="ltx_emph ltx_font_italic" id="S7.SS6.p1.1.1">during</em> verification testing, which makes it distinct from techniques which run offline and aim to create an optimal test set for regular regression testing. In principle, test selection can reduce verification time when it is cheap to generate but expensive to simulate sequences of instructions on a device.</p> </div> <section class="ltx_subsubsection" id="S7.SS6.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">7.6.1 </span>Machine Learning Types</h4> <div class="ltx_para" id="S7.SS6.SSS1.p1"> <p class="ltx_p" id="S7.SS6.SSS1.p1.1">Research in test selection techniques can be split into two types based on whether knowledge of coverage is required. In the first type, tests are selected based on their similarity to previously simulated tests. 
This requires a measure of similarity but does not require knowledge of coverage. The assumption is that sufficiently dissimilar input sequences will hit different coverage points. Since coverage data is not required, research has focused on unsupervised learning, using a one-class SVM.</p> </div> <div class="ltx_para" id="S7.SS6.SSS1.p2"> <p class="ltx_p" id="S7.SS6.SSS1.p2.1">The second type of test selection technique learns a relationship between a test input and coverage. It uses this information to predict the likelihood that a new test input will add to coverage. For instance, <cite class="ltx_cite ltx_citemacro_citet">Guo et al. [<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib45" title="">45</a>]</cite> uses a two-class SVM to select tests for full functional verification of a RISC processor (Godson-2). The disadvantage of this approach is that it requires simulating some redundant tests to initialise the machine learning model. However, it makes no assumption about the relationship between input similarity and coverage.</p> </div> <div class="ltx_para" id="S7.SS6.SSS1.p3"> <p class="ltx_p" id="S7.SS6.SSS1.p3.1">A test selected without knowledge of coverage will subsequently generate coverage data relating the input and output spaces of the DUV. This has led researchers to combine both test selection techniques in the same verification workflow. <cite class="ltx_cite ltx_citemacro_citet">Masamba et al. [<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib69" title="">69</a>]</cite> describes an approach that combines coverage with novelty-directed test selection to contribute to the verification of a commercial radar signal processing unit.</p> </div> <div class="ltx_para" id="S7.SS6.SSS1.p4"> <p class="ltx_p" id="S7.SS6.SSS1.p4.1">Interest in novelty detection extends outside of functional verification.
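The first, similarity-based type of selection can be sketched with a plain distance threshold standing in for the one-class SVM used in the literature; the feature vectors, threshold, and data below are invented:

```python
# Similarity-based test selection: a candidate test is kept for simulation
# only if its feature vector is sufficiently far from every test already
# simulated (novel tests then join the reference set).

def novelty_filter(candidates, simulated, min_dist=2.0):
    """Return the candidates that are novel with respect to `simulated`."""
    kept = []
    for cand in candidates:
        dists = [sum((a - b) ** 2 for a, b in zip(cand, s)) ** 0.5
                 for s in simulated]
        if not dists or min(dists) >= min_dist:
            kept.append(cand)
            simulated = simulated + [cand]   # novel tests become references
    return kept

# Feature vectors for tests (e.g. instruction-mix statistics -- invented).
already_run = [(0.0, 0.0), (1.0, 0.0)]
queue = [(0.5, 0.5), (4.0, 0.0), (4.1, 0.2), (0.0, 5.0)]
print(novelty_filter(queue, already_run))  # [(4.0, 0.0), (0.0, 5.0)]
```

Note that (4.1, 0.2) is dropped even though it was never simulated, because it is nearly identical to the kept candidate (4.0, 0.0); this is the redundancy-filtering effect the section describes.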
This interest has created different approaches and approximately 40% of the work reviewed in this section compares two or more techniques. <cite class="ltx_cite ltx_citemacro_citet">Zheng et al. [<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib115" title="">115</a>]</cite> compares the use of an Autoencoder, counting unactivated neurons, and a technique which automatically generates labels to score tests based on coverage. <cite class="ltx_cite ltx_citemacro_citet">Ghany and Ismail [<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib41" title="">41</a>]</cite> investigates neural-network-based techniques and compares them to using an SVM and decision trees.</p> </div> <figure class="ltx_table" id="S7.T7"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S7.T7.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S7.T7.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S7.T7.2.1.1.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.1.1.1.1"> <span class="ltx_p" id="S7.T7.2.1.1.1.1.1"><span class="ltx_text ltx_font_bold" id="S7.T7.2.1.1.1.1.1.1">Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T7.2.1.1.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.1.1.2.1"> <span class="ltx_p" id="S7.T7.2.1.1.2.1.1"><span class="ltx_text ltx_font_bold" id="S7.T7.2.1.1.2.1.1.1">Sub-Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T7.2.1.1.3" style="width:160.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.1.1.3.1"> <span class="ltx_p" id="S7.T7.2.1.1.3.1.1"><span class="ltx_text ltx_font_bold" id="S7.T7.2.1.1.3.1.1.1">References</span></span> </span> </th> </tr> </thead> <tbody 
class="ltx_tbody"> <tr class="ltx_tr" id="S7.T7.2.2.1"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S7.T7.2.2.1.1" rowspan="2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.2.1.1.1"> <span class="ltx_p" id="S7.T7.2.2.1.1.1.1">Supervised</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T7.2.2.1.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.2.1.2.1"> <span class="ltx_p" id="S7.T7.2.2.1.2.1.1">SVM*</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T7.2.2.1.3" style="width:160.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.2.1.3.1"> <span class="ltx_p" id="S7.T7.2.2.1.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib83" title="">83</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib45" title="">45</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib17" title="">17</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib18" title="">18</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T7.2.3.2"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T7.2.3.2.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.3.2.1.1"> <span class="ltx_p" id="S7.T7.2.3.2.1.1.1">NN (deep)</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T7.2.3.2.2" style="width:160.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.3.2.2.1"> <span class="ltx_p" id="S7.T7.2.3.2.2.1.1"><cite class="ltx_cite 
ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib104" title="">104</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T7.2.4.3"> <td class="ltx_td ltx_align_middle ltx_border_l ltx_border_r" id="S7.T7.2.4.3.1" style="width:100.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T7.2.4.3.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.4.3.2.1"> <span class="ltx_p" id="S7.T7.2.4.3.2.1.1">Comparison</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T7.2.4.3.3" style="width:160.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.4.3.3.1"> <span class="ltx_p" id="S7.T7.2.4.3.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib115" title="">115</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib37" title="">37</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib70" title="">70</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib116" title="">116</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib41" title="">41</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib43" title="">43</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib66" title="">66</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib55" title="">55</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T7.2.5.4"> <td class="ltx_td 
ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S7.T7.2.5.4.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.5.4.1.1"> <span class="ltx_p" id="S7.T7.2.5.4.1.1.1">Unsupervised</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T7.2.5.4.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.5.4.2.1"> <span class="ltx_p" id="S7.T7.2.5.4.2.1.1">SVM*</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T7.2.5.4.3" style="width:160.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.5.4.3.1"> <span class="ltx_p" id="S7.T7.2.5.4.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib47" title="">47</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T7.2.6.5"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S7.T7.2.6.5.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.6.5.1.1"> <span class="ltx_p" id="S7.T7.2.6.5.1.1.1">Mixed</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S7.T7.2.6.5.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.6.5.2.1"> <span class="ltx_p" id="S7.T7.2.6.5.2.1.1">-</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S7.T7.2.6.5.3" style="width:160.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T7.2.6.5.3.1"> <span class="ltx_p" id="S7.T7.2.6.5.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib69" title="">69</a>]</cite></span> </span> </td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span 
class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S7.T7.3.1.1" style="font-size:90%;">Table 7</span>: </span><span class="ltx_text" id="S7.T7.4.2" style="font-size:90%;">Use of machine learning in test selection. *Support Vector Machine.</span></figcaption> </figure> </section> <section class="ltx_subsubsection" id="S7.SS6.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">7.6.2 </span>Benefits of Using ML to Select Tests</h4> <div class="ltx_para" id="S7.SS6.SSS2.p1"> <p class="ltx_p" id="S7.SS6.SSS2.p1.1">Compared to test direction and generation techniques, test selection can be the easiest to integrate with existing verification environments. While there is evidence to suggest that using coverage data can further reduce the number of simulated tests required to achieve coverage, a test selector which filters tests based only on the similarity of the input space has been shown to be effective and does not require online learning or changes during a project. Given the wider interest in novelty detection within machine learning, and the EDA industry’s familiarity with test selection for regression optimisation, there is space for more research in test selection.</p> </div> </section> </section> <section class="ltx_subsection" id="S7.SS7"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">7.7 </span>Level of Control</h3> <div class="ltx_para" id="S7.SS7.p1"> <p class="ltx_p" id="S7.SS7.p1.1">In general, the challenge of learning a relationship between input and output spaces depends on how abstract these spaces are compared to the underlying process that connects them. Abstraction is a part of the conventional EDA design process. Electronic hardware design creates models at different levels of abstraction, from behavioural to gate level. There is also research to reduce the cost of test generation by reusing tests at different levels of abstraction.
For example, automatically translating a test created at the behavioural level to the gate level <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib111" title="">111</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib48" title="">48</a>]</cite>. In this section, we discuss the implications of the level of abstraction for the application of ML to coverage closure. The aim is to provide a practitioner with a granular means to discriminate between research on this topic.</p> </div> <div class="ltx_para" id="S7.SS7.p2"> <p class="ltx_p" id="S7.SS7.p2.1">In the ML-based verification environment (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.F9" title="Figure 9 ‣ 7.2 The ML-Enhanced Verification Environment ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">9</span></a>), three spaces are identified: parameter, instruction, and test, and each space can be represented differently (Table <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S7.T8" title="Table 8 ‣ 7.7 Level of Control ‣ 7 The Use of Machine Learning for Coverage Closure ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">8</span></a>).
Inputs to a DUV are also described at different levels of abstraction, for instance, bit pattern (machine code), opcode and operand (assembly language), constraint, and signal value in a behavioural model (e.g., a traffic light controller), creating a wide range of options.</p> </div> <figure class="ltx_table" id="S7.T8"> <div class="ltx_flex_figure ltx_flex_table"> <div class="ltx_flex_cell ltx_flex_size_1"> <table class="ltx_tabular ltx_centering ltx_figure_panel ltx_guessed_headers ltx_align_middle" id="S7.T8.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S7.T8.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S7.T8.2.1.1.1" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T8.2.1.1.1.1"> <span class="ltx_p" id="S7.T8.2.1.1.1.1.1"><span class="ltx_text ltx_font_bold" id="S7.T8.2.1.1.1.1.1.1">Space where ML is applied</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T8.2.1.1.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T8.2.1.1.2.1"> <span class="ltx_p" id="S7.T8.2.1.1.2.1.1"><span class="ltx_text ltx_font_bold" id="S7.T8.2.1.1.2.1.1.1">Representation of the data</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S7.T8.2.2.1"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S7.T8.2.2.1.1" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T8.2.2.1.1.1"> <span class="ltx_p" id="S7.T8.2.2.1.1.1.1">Input parameter</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T8.2.2.1.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T8.2.2.1.2.1"> <span class="ltx_p" id="S7.T8.2.2.1.2.1.1">Constraints, random seed or hyper-parameters</span> </span> </td> 
</tr> <tr class="ltx_tr" id="S7.T8.2.3.2"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S7.T8.2.3.2.1" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T8.2.3.2.1.1"> <span class="ltx_p" id="S7.T8.2.3.2.1.1.1">Instruction</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T8.2.3.2.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T8.2.3.2.2.1"> <span class="ltx_p" id="S7.T8.2.3.2.2.1.1">Opcode, signal value, or bit pattern</span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T8.2.4.3"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S7.T8.2.4.3.1" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T8.2.4.3.1.1"> <span class="ltx_p" id="S7.T8.2.4.3.1.1.1">Test</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S7.T8.2.4.3.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T8.2.4.3.2.1"> <span class="ltx_p" id="S7.T8.2.4.3.2.1.1">A test identifier, or a graphical representation of a test sequence</span> </span> </td> </tr> </tbody> </table> </div> </div> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S7.T8.3.1.1" style="font-size:90%;">Table 8</span>: </span><span class="ltx_text" id="S7.T8.4.2" style="font-size:90%;">Examples of the abstractions used in machine learning for the verification of electronic hardware.</span></figcaption> </figure> <div class="ltx_para" id="S7.SS7.p3"> <p class="ltx_p" id="S7.SS7.p3.1">Research was found to apply machine learning to control one of these spaces at a specific level of
abstraction. For instance, a model may learn to control the instruction space at either the opcode level or the bit level. Since these spaces and levels of abstraction all relate to the same low-level design, there is a choice of how to apply machine learning to achieve coverage closure.</p> </div> <div class="ltx_para" id="S7.SS7.p4"> <p class="ltx_p" id="S7.SS7.p4.1">A key question to consider is how the choice of space and level of abstraction affects the complexity of learning and the effectiveness of the machine learning model at speeding up verification. However, we found very little material that sought to answer this question. <cite class="ltx_cite ltx_citemacro_citet">Gogri et al. [<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib42" title="">42</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib43" title="">43</a>]</cite> investigated the difference between filtering test stimuli at the instruction and constraint levels, finding that machine learning applied at the constraint level was effective when the input space (constraints) and output space (coverage) were closely related. However, machine learning applied at the instruction level was more effective when this relationship was more complex.</p> </div> <div class="ltx_para" id="S7.SS7.p5"> <p class="ltx_p" id="S7.SS7.p5.1">From a learning perspective, the state space is smaller at higher levels of abstraction. A smaller state space may make learning easier, but the relationship between high-level instructions and low-level features may be less direct. Other authors highlighted that writing tests at high levels of abstraction and translating them to the hardware level via a compiler may not be as successful as writing tests directly at the hardware level. Compiler optimisations and strategies prioritise efficient input sequences.
Therefore, these may not use the full range of possible instructions and addressing modes <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib93" title="">93</a>]</cite>.</p> </div> <div class="ltx_para" id="S7.SS7.p6"> <p class="ltx_p" id="S7.SS7.p6.1">The choice of space and abstraction level is equivalent to feature selection, a crucial part of the success or otherwise of machine learning applications. Some research in coverage closure has attempted to automate feature selection, but the topic is under-represented in the EDA literature.</p> </div> </section> <section class="ltx_subsection" id="S7.SS8"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">7.8 </span>The Use of Machine Learning for Coverage Collection and Analysis</h3> <figure class="ltx_table" id="S7.T9"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S7.T9.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S7.T9.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S7.T9.2.1.1.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T9.2.1.1.1.1"> <span class="ltx_p" id="S7.T9.2.1.1.1.1.1"><span class="ltx_text ltx_font_bold" id="S7.T9.2.1.1.1.1.1.1">Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T9.2.1.1.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T9.2.1.1.2.1"> <span class="ltx_p" id="S7.T9.2.1.1.2.1.1"><span class="ltx_text ltx_font_bold" id="S7.T9.2.1.1.2.1.1.1">Sub-Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S7.T9.2.1.1.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T9.2.1.1.3.1"> <span class="ltx_p"
id="S7.T9.2.1.1.3.1.1"><span class="ltx_text ltx_font_bold" id="S7.T9.2.1.1.3.1.1.1">References</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S7.T9.2.2.1"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S7.T9.2.2.1.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T9.2.2.1.1.1"> <span class="ltx_p" id="S7.T9.2.2.1.1.1.1">Coverage Analysis</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T9.2.2.1.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T9.2.2.1.2.1"> <span class="ltx_p" id="S7.T9.2.2.1.2.1.1">Supervised</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S7.T9.2.2.1.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T9.2.2.1.3.1"> <span class="ltx_p" id="S7.T9.2.2.1.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib68" title="">68</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib40" title="">40</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S7.T9.2.3.2"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S7.T9.2.3.2.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T9.2.3.2.1.1"> <span class="ltx_p" id="S7.T9.2.3.2.1.1.1">Coverage Collection</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S7.T9.2.3.2.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T9.2.3.2.2.1"> <span class="ltx_p" id="S7.T9.2.3.2.2.1.1">Combination</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" 
id="S7.T9.2.3.2.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S7.T9.2.3.2.3.1"> <span class="ltx_p" id="S7.T9.2.3.2.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib84" title="">84</a>]</cite></span> </span> </td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S7.T9.3.1.1" style="font-size:90%;">Table 9</span>: </span><span class="ltx_text" id="S7.T9.4.2" style="font-size:90%;">Use of machine learning for coverage analysis and coverage collection.</span></figcaption> </figure> <div class="ltx_para" id="S7.SS8.p1"> <p class="ltx_p" id="S7.SS8.p1.1">Dynamic-based test methods typically generate large amounts of coverage-related information. While the majority of techniques used coverage data to directly or indirectly choose stimuli for the DUV, a small number of techniques took a different approach.</p> </div> <div class="ltx_para" id="S7.SS8.p2"> <p class="ltx_p" id="S7.SS8.p2.1">Collecting coverage data adds a computational overhead when simulating a design. The test-bench must monitor the relevant elements of a design via a scoreboard to record how often a coverage point is hit. Large coverage models increase this overhead, causing simulations to take longer. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib84" title="">84</a>]</cite>, k-means is used to select a small subset of the design from which to collect coverage, and DNNs predict the coverage of the rest of the design from this small subset.
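The idea can be sketched as follows. This is a simplified, self-contained illustration, not the implementation in [84]: the trained DNN predictor is replaced here by a stand-in that simply copies each cluster representative's observed hit, and the toy hit profiles are invented for the example.

```python
# Sketch: cluster coverage points by their hit profiles over past tests,
# monitor one representative point per cluster, and predict the rest.
def kmeans(rows, k, iters=25):
    # Deterministic initialisation: spread initial centroids across the rows.
    centroids = [list(rows[i * len(rows) // k]) for i in range(k)]
    assign = [0] * len(rows)
    for _ in range(iters):
        # Assign each coverage point to its nearest centroid.
        for i, r in enumerate(rows):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(r, centroids[c])))
        # Recompute centroids as the mean of their members.
        for c in range(k):
            members = [rows[i] for i in range(len(rows)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Toy history: 6 coverage points x 4 past tests (1 = point hit by that test).
profiles = [
    [1, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0],
    [0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1],
]
assign = kmeans(profiles, k=2)

# Monitor only the first point of each cluster during simulation.
monitored = {c: assign.index(c) for c in sorted(set(assign))}
# Suppose a new run observes hits only on the monitored points...
observed = {monitored[0]: 1, monitored[1]: 0}
# ...and every unmonitored point inherits its representative's hit
# (in [84] a trained DNN makes this prediction instead).
predicted = [observed[monitored[assign[i]]] for i in range(len(profiles))]
```

With these toy profiles the two groups of co-occurring coverage points separate cleanly, so only two of the six points need to be monitored.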
The authors claim this approach complements existing practice, where regressions with full coverage collection are still run, but the technique enables a prediction of coverage in between those full runs at a lower computational overhead.</p> </div> <div class="ltx_para" id="S7.SS8.p3"> <p class="ltx_p" id="S7.SS8.p3.1">Two examples were found that use machine learning to exploit the relatedness of coverage points to reduce simulation time. Both apply the principle that, when a test hits a coverage point, it has a high probability of also hitting nearby coverage points. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib40" title="">40</a>]</cite>, clustering techniques using k-means and heuristics are used to identify coverage holes by grouping similar holes together and finding a coverage point to target each group. The approach assumes that related coverage points have similar textual names. A similar approach is used in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib68" title="">68</a>]</cite>, except that similarity between coverage points is based on Jaccard similarity and Euclidean distance.</p> </div> </section> </section> <section class="ltx_section" id="S8" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">8 </span>The Use of Machine Learning For Bug Hunting</h2> <figure class="ltx_table" id="S8.T10"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S8.T10.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S8.T10.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S8.T10.2.1.1.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.1.1.1.1"> <span class="ltx_p" id="S8.T10.2.1.1.1.1.1"><span class="ltx_text ltx_font_bold" id="S8.T10.2.1.1.1.1.1.1">Type</span></span>
</span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S8.T10.2.1.1.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.1.1.2.1"> <span class="ltx_p" id="S8.T10.2.1.1.2.1.1"><span class="ltx_text ltx_font_bold" id="S8.T10.2.1.1.2.1.1.1">Sub-Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S8.T10.2.1.1.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.1.1.3.1"> <span class="ltx_p" id="S8.T10.2.1.1.3.1.1"><span class="ltx_text ltx_font_bold" id="S8.T10.2.1.1.3.1.1.1">References</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S8.T10.2.2.1"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S8.T10.2.2.1.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.2.1.1.1"> <span class="ltx_p" id="S8.T10.2.2.1.1.1.1">Supervised</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S8.T10.2.2.1.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.2.1.2.1"> <span class="ltx_p" id="S8.T10.2.2.1.2.1.1">-</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S8.T10.2.2.1.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.2.1.3.1"> <span class="ltx_p" id="S8.T10.2.2.1.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib73" title="">73</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib89" title="">89</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S8.T10.2.3.2"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l 
ltx_border_r ltx_border_t" id="S8.T10.2.3.2.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.3.2.1.1"> <span class="ltx_p" id="S8.T10.2.3.2.1.1.1">Evolutionary Algorithm</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S8.T10.2.3.2.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.3.2.2.1"> <span class="ltx_p" id="S8.T10.2.3.2.2.1.1">Genetic Algorithm</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S8.T10.2.3.2.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.3.2.3.1"> <span class="ltx_p" id="S8.T10.2.3.2.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib11" title="">11</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S8.T10.2.4.3"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S8.T10.2.4.3.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.4.3.1.1"> <span class="ltx_p" id="S8.T10.2.4.3.1.1.1">Reinforcement Learning</span> </span> </td> <td class="ltx_td ltx_align_middle ltx_border_r ltx_border_t" id="S8.T10.2.4.3.2" style="width:80.0pt;"></td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S8.T10.2.4.3.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.4.3.3.1"> <span class="ltx_p" id="S8.T10.2.4.3.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib101" title="">101</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S8.T10.2.5.4"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S8.T10.2.5.4.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" 
id="S8.T10.2.5.4.1.1"> <span class="ltx_p" id="S8.T10.2.5.4.1.1.1">Combination</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S8.T10.2.5.4.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.5.4.2.1"> <span class="ltx_p" id="S8.T10.2.5.4.2.1.1">-</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S8.T10.2.5.4.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S8.T10.2.5.4.3.1"> <span class="ltx_p" id="S8.T10.2.5.4.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib46" title="">46</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib94" title="">94</a>]</cite></span> </span> </td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S8.T10.3.1.1" style="font-size:90%;">Table 10</span>: </span><span class="ltx_text" id="S8.T10.4.2" style="font-size:90%;">Use of machine learning for bug hunting.</span></figcaption> </figure> <div class="ltx_para" id="S8.p1"> <p class="ltx_p" id="S8.p1.1">In the literature, a small number of authors made a distinction between coverage closure, which aims to measure verification progress against the DUV specification, and bug hunting, which attempts to replicate conditions expected to find bugs. A comparable existing practice is an expert writing a test program to target a small number of challenging DUV states.
In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib101" title="">101</a>]</cite>, this is described as “stress-testing”, where a Markov model represents machine instructions and feedback from signal monitors in the design is used to update transition probabilities. Over time, instruction sequences that excite the signals of interest are generated more often. In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib73" title="">73</a>]</cite>, an approach using linear regression is described that replicates the conditions for a deadlock to occur, and in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib89" title="">89</a>]</cite>, a neural network is trained to select constraints for a test generator to hit pre-defined bugs. The constraints were written by an expert.</p> </div> <div class="ltx_para" id="S8.p2"> <p class="ltx_p" id="S8.p2.1">The approaches described above assume knowledge of where bugs are most likely to occur in a design. An alternative approach is described in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib46" title="">46</a>]</cite>. Machine learning is used to predict bugs in designs based on historical data from design revisions. A genetic algorithm is used to select revision and design features that lead to bugs, and five supervised techniques are compared to predict how bugs are distributed in the different modules of the (untested) design.
This information is used to allocate testing resources and to direct constrained random testing towards the expected bugs.</p> </div> </section> <section class="ltx_section" id="S9" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">9 </span>The Use of Machine Learning for Fault Detection</h2> <figure class="ltx_table" id="S9.T11"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S9.T11.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S9.T11.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t" id="S9.T11.2.1.1.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S9.T11.2.1.1.1.1"> <span class="ltx_p" id="S9.T11.2.1.1.1.1.1"><span class="ltx_text ltx_font_bold" id="S9.T11.2.1.1.1.1.1.1">Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S9.T11.2.1.1.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S9.T11.2.1.1.2.1"> <span class="ltx_p" id="S9.T11.2.1.1.2.1.1"><span class="ltx_text ltx_font_bold" id="S9.T11.2.1.1.2.1.1.1">Sub-Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S9.T11.2.1.1.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S9.T11.2.1.1.3.1"> <span class="ltx_p" id="S9.T11.2.1.1.3.1.1"><span class="ltx_text ltx_font_bold" id="S9.T11.2.1.1.3.1.1.1">References</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S9.T11.2.2.1"> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S9.T11.2.2.1.1" rowspan="2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S9.T11.2.2.1.1.1"> <span class="ltx_p" id="S9.T11.2.2.1.1.1.1">Evolutionary
Algorithm</span> </span> </th> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S9.T11.2.2.1.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S9.T11.2.2.1.2.1"> <span class="ltx_p" id="S9.T11.2.2.1.2.1.1">Genetic Algorithm</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S9.T11.2.2.1.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S9.T11.2.2.1.3.1"> <span class="ltx_p" id="S9.T11.2.2.1.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib97" title="">97</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib9" title="">9</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S9.T11.2.3.2"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S9.T11.2.3.2.1" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S9.T11.2.3.2.1.1"> <span class="ltx_p" id="S9.T11.2.3.2.1.1.1">Genetic Program</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S9.T11.2.3.2.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S9.T11.2.3.2.2.1"> <span class="ltx_p" id="S9.T11.2.3.2.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib82" title="">82</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib10" title="">10</a>]</cite></span> </span> </td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S9.T11.3.1.1" style="font-size:90%;">Table 11</span>: </span><span class="ltx_text" id="S9.T11.4.2" style="font-size:90%;">Use of machine learning 
for fault detection.</span></figcaption> </figure> <div class="ltx_para" id="S9.p1"> <p class="ltx_p" id="S9.p1.1">Research was classified as fault detection when machine learning was used to find input sequences that cause pre-defined design errors to be detected at a DUV’s output. The primary use of fault detection is to find tests for in-service and post-manufacture testing using pre-silicon simulations. For example, in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib9" title="">9</a>]</cite> a genetic algorithm is used to find DUV input patterns to detect FPGA-configuration errors caused by single-event upsets. All work in this section used genetic algorithms, and the techniques in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib9" title="">9</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib97" title="">97</a>]</cite> operated on bit-level sequences. Three of the four papers in this section were not found by the structured search. We chose to include them because their approach was similar to other work in the sampled literature and demonstrated the use of machine learning at a different level of abstraction. For example, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib82" title="">82</a>]</cite> explores the use of machine learning using multiple coverage metrics at different levels of abstraction to produce better coverage overall.
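As a concrete illustration, a minimal genetic algorithm over bit-level patterns might look as follows. This is a generic sketch rather than the algorithm of any cited paper; the fitness function here is a hypothetical stand-in for a fault simulator that would score how many injected faults a pattern detects.

```python
import random

def evolve_patterns(fitness, n_bits=16, pop_size=30, generations=60, seed=1):
    """Evolve bit-level stimulus patterns towards high fitness using
    elitist truncation selection, single-point crossover, and point mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)          # single-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_bits)] ^= 1       # flip one bit (mutation)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in fitness: agreement with a hypothetical fault-revealing pattern
# (a real flow would score patterns by running a fault simulation).
target = [1, 0] * 8
best = evolve_patterns(lambda p: sum(x == t for x, t in zip(p, target)))
```

Swapping the stand-in fitness for a vector of per-fault detections would turn this into the multi-objective search discussed next.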
The work in this section also shows the use of genetic algorithms to evolve tests that hit multiple objectives <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib97" title="">97</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib9" title="">9</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib10" title="">10</a>]</cite>, which has applications in coverage closure. There are established tools to exhaustively generate bit-level tests through formal or analytical techniques. These tools are conventionally referred to as Automatic Test Pattern Generators. The material in this section suggests the same ML techniques used for coverage closure also have applications at other stages in the verification process.</p> </div> </section> <section class="ltx_section" id="S10" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">10 </span>The Use of Machine Learning For Test Set Optimisation</h2> <figure class="ltx_table" id="S10.T12"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S10.T12.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S10.T12.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S10.T12.2.1.1.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.1.1.1.1"> <span class="ltx_p" id="S10.T12.2.1.1.1.1.1"><span class="ltx_text ltx_font_bold" id="S10.T12.2.1.1.1.1.1.1">Type</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S10.T12.2.1.1.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.1.1.2.1"> <span class="ltx_p" id="S10.T12.2.1.1.2.1.1"><span class="ltx_text ltx_font_bold" id="S10.T12.2.1.1.2.1.1.1">Sub-Type</span></span> </span> </th> <th class="ltx_td 
ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S10.T12.2.1.1.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.1.1.3.1"> <span class="ltx_p" id="S10.T12.2.1.1.3.1.1"><span class="ltx_text ltx_font_bold" id="S10.T12.2.1.1.3.1.1.1">References</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S10.T12.2.2.1"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S10.T12.2.2.1.1" rowspan="2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.2.1.1.1"> <span class="ltx_p" id="S10.T12.2.2.1.1.1.1">Supervised</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S10.T12.2.2.1.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.2.1.2.1"> <span class="ltx_p" id="S10.T12.2.2.1.2.1.1">Decision Trees</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S10.T12.2.2.1.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.2.1.3.1"> <span class="ltx_p" id="S10.T12.2.2.1.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib79" title="">79</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S10.T12.2.3.2"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S10.T12.2.3.2.1" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.3.2.1.1"> <span class="ltx_p" id="S10.T12.2.3.2.1.1.1">Ensemble</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S10.T12.2.3.2.2" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.3.2.2.1"> <span class="ltx_p" id="S10.T12.2.3.2.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a 
class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib76" title="">76</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S10.T12.2.4.3"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S10.T12.2.4.3.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.4.3.1.1"> <span class="ltx_p" id="S10.T12.2.4.3.1.1.1">Evolutionary Algorithm</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S10.T12.2.4.3.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.4.3.2.1"> <span class="ltx_p" id="S10.T12.2.4.3.2.1.1">Genetic Algorithm</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S10.T12.2.4.3.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.4.3.3.1"> <span class="ltx_p" id="S10.T12.2.4.3.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib44" title="">44</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib113" title="">113</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S10.T12.2.5.4"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S10.T12.2.5.4.1" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.5.4.1.1"> <span class="ltx_p" id="S10.T12.2.5.4.1.1.1">Unsupervised</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S10.T12.2.5.4.2" style="width:80.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.5.4.2.1"> <span class="ltx_p" id="S10.T12.2.5.4.2.1.1">-</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" 
id="S10.T12.2.5.4.3" style="width:120.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S10.T12.2.5.4.3.1"> <span class="ltx_p" id="S10.T12.2.5.4.3.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib53" title="">53</a>]</cite>, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib57" title="">57</a>]</cite></span> </span> </td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S10.T12.3.1.1" style="font-size:90%;">Table 12</span>: </span><span class="ltx_text" id="S10.T12.4.2" style="font-size:90%;">Use of machine learning for test set optimisation.</span></figcaption> </figure> <div class="ltx_para" id="S10.p1"> <p class="ltx_p" id="S10.p1.1">Test set optimisation is similar to the test selection activity seen in coverage closure, except that the machine learning operates on sets of tests with their coverage data instead of on individual tests. The objectives for the machine learning can be more diverse than those seen in coverage closure, for instance, finding the set of tests that hits all coverage points in the minimum number of CPU cycles, where, unlike in coverage closure, hitting a coverage point once can be regarded as sufficient <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib113" title="">113</a>]</cite>. The machine learning in this section can also learn from a wider range of information, including design change history and previous test results <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib76" title="">76</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib57" title="">57</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib53" title="">53</a>]</cite>.
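The minimum-cycles objective of [113] is an instance of weighted set cover, which a simple greedy heuristic can illustrate (a sketch with hypothetical test data, not the genetic algorithm of [113]):

```python
def minimise_test_set(tests):
    """Greedy weighted set cover: repeatedly pick the test with the best
    ratio of newly covered points to CPU cycles until all points are hit.
    `tests` maps a test name to (cpu_cycles, set of coverage points hit)."""
    uncovered = set().union(*(hits for _, hits in tests.values()))
    chosen = []
    while uncovered:
        # Among unchosen tests that still add coverage, pick the best ratio.
        name, (cycles, hits) = max(
            ((n, t) for n, t in tests.items()
             if n not in chosen and t[1] & uncovered),
            key=lambda item: len(item[1][1] & uncovered) / item[1][0],
        )
        chosen.append(name)
        uncovered -= hits
    return chosen

# Hypothetical regression suite: CPU cycles and coverage points per test.
tests = {
    "t1": (100, {"a", "b", "c"}),
    "t2": (10, {"a"}),
    "t3": (50, {"b", "c", "d"}),
    "t4": (400, {"a", "b", "c", "d"}),
}
selected = minimise_test_set(tests)  # covers all four points in 60 cycles
```

Greedy selection is only an approximation of the optimum, which is one motivation for the evolutionary and learned approaches surveyed here.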
In particular, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib76" title="">76</a>]</cite> uses an ML pipeline to predict the failure probability of an existing test and create a test set based on changes in RTL code. The technique is notable for its use of an ensemble approach that combines the predictions of multiple (supervised) machine learning models. Unsupervised learning techniques are used to cluster tests in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib53" title="">53</a>]</cite>, and this can be combined with Principal Component Analysis to reduce the dimensions of the learning problem <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib57" title="">57</a>]</cite>.</p> </div> </section> <section class="ltx_section" id="S11" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">11 </span>Evaluation of Machine Learning in Dynamic Verification</h2> <div class="ltx_para" id="S11.p1"> <p class="ltx_p" id="S11.p1.1">Evaluating the performance of a proposed application of machine learning forms a crucial part of the reviewed research material. This section summarises the designs (DUVs) and metrics authors use to evaluate their proposed techniques. </p> </div> <section class="ltx_subsection" id="S11.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">11.1 </span>Designs, Test Suites and Benchmarks</h3> <div class="ltx_para" id="S11.SS1.p1"> <p class="ltx_p" id="S11.SS1.p1.1">A variety of designs have been used to evaluate machine learning techniques for electronic hardware verification.
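The unsupervised test-clustering approach of [53], combined with dimensionality reduction as in [57], can be illustrated with a short sketch. This is not code from either paper; the coverage matrix, component count, cluster count, and use of scikit-learn (a library the review notes appears in the surveyed research) are illustrative assumptions. Each test is represented by the coverage points it hits, and one representative test is kept per cluster:

```python
# Illustrative sketch, not code from the cited papers: cluster tests by
# their coverage signatures, then keep one representative per cluster.
# The coverage matrix and all hyperparameters are synthetic assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Rows: 200 tests; columns: 50 coverage points (1.0 = point hit). Synthetic.
coverage = (rng.random((200, 50)) > 0.7).astype(float)

# Reduce dimensionality before clustering, mirroring the PCA step.
reduced = PCA(n_components=10, random_state=0).fit_transform(coverage)
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(reduced)

# The representative of each cluster is the test closest to its centre.
representatives = []
for c in range(km.n_clusters):
    members = np.flatnonzero(km.labels_ == c)
    dists = np.linalg.norm(reduced[members] - km.cluster_centers_[c], axis=1)
    representatives.append(int(members[np.argmin(dists)]))

print(sorted(representatives))  # indices of the reduced test set
```

Running only the representatives then approximates the behaviour of the full set; an objective in the style of [113] would instead minimise CPU cycles while preserving coverage.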
These designs range in functional complexity from simple blocks, such as ALUs and comparators, to highly complex processors and system-on-chip devices (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11.F10" title="Figure 10 ‣ 11.1 Designs, Test Suites and Benchmarks ‣ 11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">10</span></a>).</p> </div> <figure class="ltx_figure" id="S11.F10"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="494" id="S11.F10.g1" src="x11.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S11.F10.2.1.1" style="font-size:90%;">Figure 10</span>: </span><span class="ltx_text" id="S11.F10.3.2" style="font-size:90%;">Designs used to test ML applications for verification. The size of a box reflects the number of papers that use the design.
ALU (arithmetic logic unit), CAAM (Cache Access Arbitration Mechanism), CFX (Complex Fixed Point), CORDIC (coordinate rotation digital computer), CPTR (Comparator), DeMux (Demultiplexer), Ethmac (EthernetMAC), FIFO (First In First Out), FIR (Finite Impulse Response filter), GPU (Graphical Processing Unit), IFU (instruction fetch unit), ISU (Instruction Sequencing Unit), ITC99 (a design from the ITC99 benchmarks), LAI (Look Aside Interface), LC (Lissajous Corrector), LSU (Load Store Unit), LZW (LZW Compression Encoder), MMU (Memory Management Unit), NoC (Network-on-Chip), PCI (Peripheral Component Interconnect, including the Express variant), QMSU (Queue Management and Submission Unit), SCU (Storage Controller Unit), Simple Arithmetic (examples include atan2, squarer and multiplier), SLI (Serial Line Interface), SPI (Serial Peripheral Interface), SPU (Signal Processing Unit), SRI (Shared Resource Interconnection), STREAMPROC (sub-block of Bluetooth protocol adapter), TAP (JTAG Test Access Port), TPU (Tensor Processing Unit), Trust-Hub (a design from the trust-hub benchmarks), VGA (Video Graphics Array).</span></figcaption> </figure> <div class="ltx_para" id="S11.SS1.p2"> <p class="ltx_p" id="S11.SS1.p2.1">The range of applications shows the capability of ML to enhance the verification of different designs and at different levels of design complexity. However, this variety makes comparing research results problematic. It cannot be assumed that an ML technique that performs well on one architecture would perform well on another at the same level of complexity or scale to different complexities.
For example, it is uncertain whether the use of a genetic algorithm to verify a RISC-V Ibex core <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib107" title="">107</a>]</cite> would perform equally well verifying a PowerPC core or give similar results verifying a Load Store Unit.</p> </div> <div class="ltx_para" id="S11.SS1.p3"> <p class="ltx_p" id="S11.SS1.p3.1">The challenge of comparing ML techniques is experienced across the machine learning field, leading to the creation of standard benchmarks, environments, and algorithms. Some of these were seen in the surveyed research, including supervised techniques from Python’s SciKit-learn<span class="ltx_note ltx_role_footnote" id="footnote2"><sup class="ltx_note_mark">2</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">2</sup><span class="ltx_tag ltx_tag_note">2</span>SciKit-Learn, <a class="ltx_ref ltx_href" href="https://scikit-learn.org/" title="">https://scikit-learn.org/</a></span></span></span>. Open-source device designs and benchmarks have also been used to evaluate the performance of EDA techniques, but their use is not universal (Table <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11.T13" title="Table 13 ‣ 11.1 Designs, Test Suites and Benchmarks ‣ 11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">13</span></a>).</p> </div> <div class="ltx_para" id="S11.SS1.p4"> <p class="ltx_p" id="S11.SS1.p4.1">Approximately a quarter of designs were freely accessible or described in sufficient detail to replicate easily. The remaining three quarters included designs that an expert may be able to approximate but not reproduce exactly, such as designs to carry out simple arithmetic or implement known standards such as Serial Peripheral Interface.
Only approximately 4% of designs were obfuscated such that the complexity of the device and its operation could not be determined. A small number of papers use example devices from tutorials, but these are not at the complexity level of industrial designs. Additionally, even when open-source designs are used, including RISC-V, there remains a risk that design revisions result in the version used in a piece of research being unavailable or unknown.</p> </div> <div class="ltx_para" id="S11.SS1.p5"> <p class="ltx_p" id="S11.SS1.p5.1">This lack of standardisation may delay the progress and adoption of machine learning for coverage closure relative to other areas. Research on coverage closure is frequently conducted in collaboration with private companies, where the pursuit of commercial advantage often restricts the availability of designs alongside the research findings. One approach, taken by <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib76" title="">76</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib52" title="">52</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib74" title="">74</a>]</cite>, which balances the need for IP protection with open research, is to include results from an open-source design alongside those from proprietary devices.</p> </div> <figure class="ltx_table" id="S11.T13"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S11.T13.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S11.T13.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S11.T13.2.1.1.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.1.1.1.1"> <span class="ltx_p" id="S11.T13.2.1.1.1.1.1"><span class="ltx_text ltx_font_bold" id="S11.T13.2.1.1.1.1.1.1">Design Repositories</span></span> </span> </th> <th class="ltx_td
ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S11.T13.2.1.1.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.1.1.2.1"> <span class="ltx_p" id="S11.T13.2.1.1.2.1.1"><span class="ltx_text ltx_font_bold" id="S11.T13.2.1.1.2.1.1.1">Used in</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S11.T13.2.2.1"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S11.T13.2.2.1.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.2.1.1.1"> <span class="ltx_p" id="S11.T13.2.2.1.1.1.1">ITC’99</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S11.T13.2.2.1.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.2.1.2.1"> <span class="ltx_p" id="S11.T13.2.2.1.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib9" title="">9</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib111" title="">111</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S11.T13.2.3.2"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S11.T13.2.3.2.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.3.2.1.1"> <span class="ltx_p" id="S11.T13.2.3.2.1.1.1">Trusthub</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S11.T13.2.3.2.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.3.2.2.1"> <span class="ltx_p" id="S11.T13.2.3.2.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib72" title="">72</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S11.T13.2.4.3"> <td class="ltx_td ltx_align_justify 
ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S11.T13.2.4.3.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.4.3.1.1"> <span class="ltx_p" id="S11.T13.2.4.3.1.1.1">Opencores</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S11.T13.2.4.3.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.4.3.2.1"> <span class="ltx_p" id="S11.T13.2.4.3.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib89" title="">89</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib17" title="">17</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib46" title="">46</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib38" title="">38</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib11" title="">11</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib41" title="">41</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S11.T13.2.5.4"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_tt" id="S11.T13.2.5.4.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.5.4.1.1"> <span class="ltx_p" id="S11.T13.2.5.4.1.1.1"><span class="ltx_text ltx_font_bold" id="S11.T13.2.5.4.1.1.1.1">Processors</span></span> </span> </td> <td class="ltx_td ltx_align_middle ltx_border_r ltx_border_tt" id="S11.T13.2.5.4.2" style="width:100.0pt;"></td> </tr> <tr class="ltx_tr" id="S11.T13.2.6.5"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S11.T13.2.6.5.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.6.5.1.1"> <span class="ltx_p" id="S11.T13.2.6.5.1.1.1">RISC-V Ibex</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle 
ltx_border_r ltx_border_t" id="S11.T13.2.6.5.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.6.5.2.1"> <span class="ltx_p" id="S11.T13.2.6.5.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib76" title="">76</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib52" title="">52</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib107" title="">107</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S11.T13.2.7.6"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S11.T13.2.7.6.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.7.6.1.1"> <span class="ltx_p" id="S11.T13.2.7.6.1.1.1">OpenSPARC</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S11.T13.2.7.6.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.7.6.2.1"> <span class="ltx_p" id="S11.T13.2.7.6.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib47" title="">47</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S11.T13.2.8.7"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S11.T13.2.8.7.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.8.7.1.1"> <span class="ltx_p" id="S11.T13.2.8.7.1.1.1">DRIM-S</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S11.T13.2.8.7.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.8.7.2.1"> <span class="ltx_p" id="S11.T13.2.8.7.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib47" title="">47</a>]</cite></span> </span> </td> </tr> <tr 
class="ltx_tr" id="S11.T13.2.9.8"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S11.T13.2.9.8.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.9.8.1.1"> <span class="ltx_p" id="S11.T13.2.9.8.1.1.1">Leon2</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S11.T13.2.9.8.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.9.8.2.1"> <span class="ltx_p" id="S11.T13.2.9.8.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib19" title="">19</a>]</cite></span> </span> </td> </tr> <tr class="ltx_tr" id="S11.T13.2.10.9"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_tt" id="S11.T13.2.10.9.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.10.9.1.1"> <span class="ltx_p" id="S11.T13.2.10.9.1.1.1"><span class="ltx_text ltx_font_bold" id="S11.T13.2.10.9.1.1.1.1">Tools</span></span> </span> </td> <td class="ltx_td ltx_align_middle ltx_border_r ltx_border_tt" id="S11.T13.2.10.9.2" style="width:100.0pt;"></td> </tr> <tr class="ltx_tr" id="S11.T13.2.11.10"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t" id="S11.T13.2.11.10.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.11.10.1.1"> <span class="ltx_p" id="S11.T13.2.11.10.1.1.1">CoCoTb Python package</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t" id="S11.T13.2.11.10.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.11.10.2.1"> <span class="ltx_p" id="S11.T13.2.11.10.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib98" title="">98</a>]</cite></span> </span> </td> </tr> <tr 
class="ltx_tr" id="S11.T13.2.12.11"> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S11.T13.2.12.11.1" style="width:150.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.12.11.1.1"> <span class="ltx_p" id="S11.T13.2.12.11.1.1.1">RISC-DV</span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t" id="S11.T13.2.12.11.2" style="width:100.0pt;"> <span class="ltx_inline-block ltx_align_top" id="S11.T13.2.12.11.2.1"> <span class="ltx_p" id="S11.T13.2.12.11.2.1.1"><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib52" title="">52</a>]</cite></span> </span> </td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S11.T13.3.1.1" style="font-size:90%;">Table 13</span>: </span><span class="ltx_text" id="S11.T13.4.2" style="font-size:90%;">Open source platforms used for evaluating machine learning for dynamic verification.</span></figcaption> </figure> </section> <section class="ltx_subsection" id="S11.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">11.2 </span>Measuring Performance</h3> <section class="ltx_subsubsection" id="S11.SS2.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">11.2.1 </span>Metrics</h4> <div class="ltx_para" id="S11.SS2.SSS1.p1"> <p class="ltx_p" id="S11.SS2.SSS1.p1.1">Metrics are used by authors to measure the performance of a machine-learning application. In the sampled literature, six categories of metrics were identified. 
A description of each is given in Table <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11.T14" title="Table 14 ‣ 11.2.1 Metrics ‣ 11.2 Measuring Performance ‣ 11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">14</span></a>.</p> </div> <figure class="ltx_table" id="S11.T14"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="S11.T14.2"> <thead class="ltx_thead"> <tr class="ltx_tr" id="S11.T14.2.1.1"> <th class="ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t" id="S11.T14.2.1.1.1"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.1.1.1.1"> <span class="ltx_p" id="S11.T14.2.1.1.1.1.1" style="width:73.7pt;"><span class="ltx_text ltx_font_bold" id="S11.T14.2.1.1.1.1.1.1">Group Name</span></span> </span> </th> <th class="ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_border_r ltx_border_t" id="S11.T14.2.1.1.2"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.1.1.2.1"> <span class="ltx_p" id="S11.T14.2.1.1.2.1.1" style="width:260.2pt;"><span class="ltx_text ltx_font_bold" id="S11.T14.2.1.1.2.1.1.1">Description</span></span> </span> </th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="S11.T14.2.2.1"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S11.T14.2.2.1.1"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.2.1.1.1"> <span class="ltx_p" id="S11.T14.2.2.1.1.1.1" style="width:73.7pt;"><span class="ltx_text ltx_font_bold" id="S11.T14.2.2.1.1.1.1.1">Learning <span class="ltx_text ltx_phantom" id="S11.T14.2.2.1.1.1.1.1.1"><span style="visibility:hidden">xxx</span></span> Performance</span></span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S11.T14.2.2.1.2"> <span class="ltx_inline-block ltx_align_top" 
id="S11.T14.2.2.1.2.1"> <span class="ltx_p" id="S11.T14.2.2.1.2.1.1" style="width:260.2pt;">Classical ML and statistical metrics that measure how well the ML fits the application. Metrics include: Measure Square Error <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib55" title="">55</a>]</cite>, F and F2 score, recall, accuracy, precision, loss learning rate, number of correct predictions, and false positives.</span> </span> </td> </tr> <tr class="ltx_tr" id="S11.T14.2.3.2"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S11.T14.2.3.2.1"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.3.2.1.1"> <span class="ltx_p" id="S11.T14.2.3.2.1.1.1" style="width:73.7pt;"><span class="ltx_text ltx_font_bold" id="S11.T14.2.3.2.1.1.1.1">Application <span class="ltx_text ltx_phantom" id="S11.T14.2.3.2.1.1.1.1.1"><span style="visibility:hidden">xxx</span></span> Performance</span></span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S11.T14.2.3.2.2"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.3.2.2.1"> <span class="ltx_p" id="S11.T14.2.3.2.2.1.1" style="width:260.2pt;">Metrics common in applications related to coverage closure. The most common measure is coverage as a percentage. 
Other values include hit rate <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib105" title="">105</a>]</cite>, the number of coverage points hit <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib66" title="">66</a>]</cite> and test diversity <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib83" title="">83</a>]</cite>.</span> </span> </td> </tr> <tr class="ltx_tr" id="S11.T14.2.4.3"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S11.T14.2.4.3.1"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.4.3.1.1"> <span class="ltx_p" id="S11.T14.2.4.3.1.1.1" style="width:73.7pt;"><span class="ltx_text ltx_font_bold" id="S11.T14.2.4.3.1.1.1.1">Stimulus Count</span></span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S11.T14.2.4.3.2"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.4.3.2.1"> <span class="ltx_p" id="S11.T14.2.4.3.2.1.1" style="width:260.2pt;">Used to measure the test resource required. 
Examples include the number of times the ML updates constraints, the number of instructions or transactions simulated, the number of simulations, and the number of tests.</span> </span> </td> </tr> <tr class="ltx_tr" id="S11.T14.2.5.4"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S11.T14.2.5.4.1"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.5.4.1.1"> <span class="ltx_p" id="S11.T14.2.5.4.1.1.1" style="width:73.7pt;"><span class="ltx_text ltx_font_bold" id="S11.T14.2.5.4.1.1.1.1">Execution Time</span></span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S11.T14.2.5.4.2"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.5.4.2.1"> <span class="ltx_p" id="S11.T14.2.5.4.2.1.1" style="width:260.2pt;">An alternative to counts for measuring test resources. Authors use terms including simulation time, execution time, and wall time.</span> </span> </td> </tr> <tr class="ltx_tr" id="S11.T14.2.6.5"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_l ltx_border_r ltx_border_t" id="S11.T14.2.6.5.1"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.6.5.1.1"> <span class="ltx_p" id="S11.T14.2.6.5.1.1.1" style="width:73.7pt;"><span class="ltx_text ltx_font_bold" id="S11.T14.2.6.5.1.1.1.1">ML Overhead</span></span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_r ltx_border_t" id="S11.T14.2.6.5.2"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.6.5.2.1"> <span class="ltx_p" id="S11.T14.2.6.5.2.1.1" style="width:260.2pt;">Measures the additional resources a machine learning method adds to verification.
Some research measures this extra cost as total overhead time; others use more granular measures, including the time to train a model, the prediction time, and the time spent generating test patterns that are discarded.</span> </span> </td> </tr> <tr class="ltx_tr" id="S11.T14.2.7.6"> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_l ltx_border_r ltx_border_t" id="S11.T14.2.7.6.1"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.7.6.1.1"> <span class="ltx_p" id="S11.T14.2.7.6.1.1.1" style="width:73.7pt;"><span class="ltx_text ltx_font_bold" id="S11.T14.2.7.6.1.1.1.1">Other</span></span> </span> </td> <td class="ltx_td ltx_align_justify ltx_align_top ltx_border_b ltx_border_r ltx_border_t" id="S11.T14.2.7.6.2"> <span class="ltx_inline-block ltx_align_top" id="S11.T14.2.7.6.2.1"> <span class="ltx_p" id="S11.T14.2.7.6.2.1.1" style="width:260.2pt;">Used for specialist applications, including the number of sampled modules <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib84" title="">84</a>]</cite> and metrics used by a commercial tool <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib53" title="">53</a>]</cite>.</span> </span> </td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S11.T14.3.1.1" style="font-size:90%;">Table 14</span>: </span><span class="ltx_text" id="S11.T14.4.2" style="font-size:90%;">Metrics used to assess the performance of machine learning methods in dynamic microelectronic verification.</span></figcaption> </figure> <figure class="ltx_figure" id="S11.F11"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="348" id="S11.F11.g1" src="x12.png" width="581"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text"
id="S11.F11.2.1.1" style="font-size:90%;">Figure 11</span>: </span><span class="ltx_text" id="S11.F11.3.2" style="font-size:90%;">A count of the type of metrics used to assess machine learning for microelectronic design verification. Metrics of the same type are not double-counted within the same piece of material. If a single piece of research material employs more than one metric of the same type, it only increases the count of that metric type by one. Measures relating to task performance were used most frequently.</span></figcaption> </figure> <div class="ltx_para" id="S11.SS2.SSS1.p2"> <p class="ltx_p" id="S11.SS2.SSS1.p2.1">Application performance emerged as the most widely used metric for assessing techniques. In contrast, learning performance and ML overhead were less commonly reported than one might expect in applied machine learning research (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11.F11" title="Figure 11 ‣ 11.2.1 Metrics ‣ 11.2 Measuring Performance ‣ 11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">11</span></a>). An argument is that application performance reflects the real-world benefits of using a technique. However, classic metrics for learning performance provide insights into an algorithm’s ’fit’ to the data and environment. Every learning technique incurs an associated resource cost, making it crucial to understand the cost-to-performance benefit when comparing techniques. 
For industry practitioners looking to adopt a technique, the tendency of research to report only the benefits hinders meaningful comparison.</p> </div> </section> <section class="ltx_subsubsection" id="S11.SS2.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">11.2.2 </span>Baselines</h4> <figure class="ltx_figure" id="S11.F12"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="349" id="S11.F12.g1" src="x13.png" width="581"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S11.F12.2.1.1" style="font-size:90%;">Figure 12</span>: </span><span class="ltx_text" id="S11.F12.3.2" style="font-size:90%;">A count of the baselines seen in the literature for assessing the performance of a machine learning application for microprocessor verification.</span></figcaption> </figure> <div class="ltx_para" id="S11.SS2.SSS2.p1"> <p class="ltx_p" id="S11.SS2.SSS2.p1.1">Measures of performance, particularly those relating to resources used, are often compared to a baseline. The most commonly used baselines are random-based methods (Figure <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11.F12" title="Figure 12 ‣ 11.2.2 Baselines ‣ 11.2 Measuring Performance ‣ 11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">12</span></a>). These methods include randomising instructions, constraints, or pre-generated tests, depending on the specific use and application of machine learning. Research that proposes more than one method or evaluates a family of ML methods makes comparisons between the techniques <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib72" title="">72</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib26" title="">26</a>]</cite>.
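Such a comparison can be made concrete with a short sketch contrasting a simple coverage-guided selection policy against a random baseline of the kind the surveyed papers report. The data and the greedy policy are illustrative assumptions, not a method from any cited study:

```python
# Illustrative comparison (synthetic data): a coverage-guided test selector
# versus a uniformly random baseline. Each test maps to the set of coverage
# points it hits; both selectors receive the same test budget.
import random

def greedy_select(tests: dict, budget: int) -> set:
    """Repeatedly pick the test that adds the most new coverage."""
    covered = set()
    for _ in range(budget):
        best = max(tests, key=lambda name: len(tests[name] - covered))
        covered |= tests[best]
    return covered

def random_select(tests: dict, budget: int, seed: int = 0) -> set:
    """The common baseline: a uniformly random subset of tests."""
    names = random.Random(seed).sample(sorted(tests), budget)
    return set().union(*(tests[n] for n in names))

rng = random.Random(1)
# 50 synthetic tests, each hitting up to 5 of 100 coverage points.
tests = {f"t{i}": {rng.randrange(100) for _ in range(5)} for i in range(50)}

guided = greedy_select(tests, budget=10)
baseline = random_select(tests, budget=10)
print(len(guided), len(baseline))  # coverage points hit by each policy
```

Reporting only the guided result, as much of the surveyed work does, hides exactly the cost-to-performance trade-off this comparison exposes.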
A small number of applications used either expert-derived parameters, optimum results, or the ground-truth design as a baseline representing what an ideal machine learning application could achieve.</p> </div> <div class="ltx_para" id="S11.SS2.SSS2.p2"> <p class="ltx_p" id="S11.SS2.SSS2.p2.1">Using random-based methods as a baseline is advantageous because these methods are the most commonly used in industry and supported by existing simulation-based workflows. Random also acts as a “lowest common denominator” to circumvent the time and complexity of replicating ML methods proposed by other authors. Other sections of this review highlight the lack of openly available information, data sets and designs. In the absence of being able to replicate work, random-based methods are a means to compare performance between different applications of ML. However, caution is needed because performance versus random does not measure how well a technique generalises. The comparative studies demonstrate that different ML methods perform differently for the same application. Therefore, a method that performs well against random in one application may not perform well in another. This makes the insight gained from research that compares ML methods valuable.</p> </div> </section> </section> </section> <section class="ltx_section" id="S12" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">12 </span>Challenges and Opportunities</h2> <div class="ltx_para" id="S12.p1"> <p class="ltx_p" id="S12.p1.1">The surveyed material presents a rich and varied set of machine learning techniques and applications for verifying electronic designs. The number of publications on this topic has increased and showcases successes for EDA practitioners to use or build upon.
However, trends were seen that hinder progress: </p> <ul class="ltx_itemize" id="S12.I1"> <li class="ltx_item" id="S12.I1.ix1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">–</span> <div class="ltx_para" id="S12.I1.ix1.p1"> <p class="ltx_p" id="S12.I1.ix1.p1.1">A lack of standard benchmarks, withholding code and data, and obfuscating work undertaken with private companies make it difficult to replicate results and measure progress.</p> </div> </li> <li class="ltx_item" id="S12.I1.ix2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">–</span> <div class="ltx_para" id="S12.I1.ix2.p1"> <p class="ltx_p" id="S12.I1.ix2.p1.1">Techniques are evaluated on simple designs without comparisons to well-established and effective methods other than random.</p> </div> </li> <li class="ltx_item" id="S12.I1.ix3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">–</span> <div class="ltx_para" id="S12.I1.ix3.p1"> <p class="ltx_p" id="S12.I1.ix3.p1.1">Research rarely explores whether a technique will generalise beyond the application tested or scale to real-world systems.</p> </div> </li> <li class="ltx_item" id="S12.I1.ix4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">–</span> <div class="ltx_para" id="S12.I1.ix4.p1"> <p class="ltx_p" id="S12.I1.ix4.p1.1">It is rare to see work justify the choice of machine learning technique and how it is applied.</p> </div> </li> <li class="ltx_item" id="S12.I1.ix5" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">–</span> <div class="ltx_para" id="S12.I1.ix5.p1"> <p class="ltx_p" id="S12.I1.ix5.p1.1">Research is confined to a tool or ML type, and it is rare to see an exploration of alternative methods.
If comparisons between techniques are made, these tend to be within the same family of techniques.</p> </div> </li> <li class="ltx_item" id="S12.I1.ix6" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">–</span> <div class="ltx_para" id="S12.I1.ix6.p1"> <p class="ltx_p" id="S12.I1.ix6.p1.1">The criteria for assessing the success of a technique are confined to a single metric and do not capture the requirements for real-world adoption.</p> </div> </li> <li class="ltx_item" id="S12.I1.ix7" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">–</span> <div class="ltx_para" id="S12.I1.ix7.p1"> <p class="ltx_p" id="S12.I1.ix7.p1.1">Research treats verification as a one-shot problem, whereas in industry it is a rolling process throughout development.</p> </div> </li> </ul> </div> <div class="ltx_para" id="S12.p2"> <p class="ltx_p" id="S12.p2.1">These trends create problems of generalisation, replication and assessment. This section discusses the challenges these trends create and the opportunities for progress.</p> </div> <section class="ltx_subsection" id="S12.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">12.1 </span>Existing Industry Practice</h3> <div class="ltx_para" id="S12.SS1.p1"> <p class="ltx_p" id="S12.SS1.p1.1">A tendency was seen for research to treat EDA verification as an academic problem in which the performance of a particular technique is the only measure of success. In real-world use, EDA verification is a tried and tested industrial process. The challenge for research is to account for this incumbent process and the ease by which a technique can be implemented. 
The qualities in Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S5.SS3" title="5.3 Qualities of a Test Bench ‣ 5 Use Cases, Benefits and Desirable Qualities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">5.3</span></a> highlight a range of criteria, which is one step towards appraising techniques in the context of real-world use. Research that provides interfaces between learning methods and existing test bench designs and generalises between verification environments is also valuable for real-world adoption.</p> </div> <div class="ltx_para" id="S12.SS1.p2"> <p class="ltx_p" id="S12.SS1.p2.1">Dynamic-based verification of electronic hardware creates a large amount of labelled data. This data is generated over time on a design experiencing frequent incremental changes. Changing ground-truth relationships caused by these design revisions and the availability of new data create opportunities for research on machine-learning techniques designed for dynamic environments. Research was seen that used classical analysis and statistics in this design environment: measuring the difference between two versions of a design to inform testing, and applying classical statistics to exploit the volume of data generated by typical verification progress. 
However, there are research opportunities that use machine learning with the design changes and large volumes of data seen in industrial development, particularly techniques that are usable at the start of a project and improve over time, such as hybrid techniques.</p> </div> </section> <section class="ltx_subsection" id="S12.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">12.2 </span>Similarities with Test-Based Software Verification</h3> <div class="ltx_para" id="S12.SS2.p1"> <p class="ltx_p" id="S12.SS2.p1.1">Testing software and hardware designs are fundamentally similar tasks; both disciplines aim to establish the correct operation of a function relative to a specification by applying inputs and monitoring the output. However, it is rare to find research that translates between the software and hardware testing domains. Despite the two domains appearing to operate in isolation, many of the trends identified here were also reported in a recent survey of machine learning in software testing <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib34" title="">34</a>]</cite>. Specifically, overuse of simple examples, lack of standardised evaluation criteria, unavailable code and data, and research that does not investigate whether techniques will scale to real-world systems, justify the choice of technique or compare alternatives. Given the similarities in the domains, there is an opportunity to coordinate research efforts.</p> </div> <div class="ltx_para" id="S12.SS2.p2"> <p class="ltx_p" id="S12.SS2.p2.1">An example of a technique translated from software to hardware testing is fuzzing <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib16" title="">16</a>]</cite>. 
Fuzzing was first proposed for software testing and has seen real-world adoption by leading companies including Microsoft<span class="ltx_note ltx_role_footnote" id="footnote3"><sup class="ltx_note_mark">3</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">3</sup><span class="ltx_tag ltx_tag_note">3</span>Microsoft, “microsoft/onefuzz”, <a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/microsoft/onefuzz" title="">https://github.com/microsoft/onefuzz</a></span></span></span> and Google<span class="ltx_note ltx_role_footnote" id="footnote4"><sup class="ltx_note_mark">4</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">4</sup><span class="ltx_tag ltx_tag_note">4</span>Google, “google/clusterfuzz”, <a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/google/clusterfuzz" title="">https://github.com/google/clusterfuzz</a></span></span></span>. It has been researched for verifying RTL designs on FPGAs <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib63" title="">63</a>]</cite> and implemented using existing tools from software testing <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib85" title="">85</a>]</cite>. The method has similarities to the constrained random and GA approaches that were a subject of research and use in hardware verification before fuzzing was proposed. Current research does not directly compare fuzzing with constrained random and ML techniques, so it is unknown whether it is more efficient at hitting hard-to-hit points. 
However, the advantages of fuzzing are its low setup cost, simple operation and performance that improves over time.</p> </div> <div class="ltx_para" id="S12.SS2.p3"> <p class="ltx_p" id="S12.SS2.p3.1">Research in software testing not only introduces new techniques but also offers EDA practitioners valuable insights into methods less prevalent in the hardware domain. For example, while reinforcement learning has been extensively explored for testing sequentially driven software, particularly GUIs <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib34" title="">34</a>]</cite>, its application in micro-electronic verification remains limited to basic problems. The software domain could also inspire innovative uses of machine learning in hardware verification. This review highlights that machine learning applications in hardware verification are predominantly focused on coverage-related use cases (Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S5" title="5 Use Cases, Benefits and Desirable Qualities ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">5</span></a>). In contrast, a recent review of ML in software testing revealed a similar focus on coverage but also identified more material on enhancing the effectiveness and efficiency of existing methods than is currently seen in the hardware domain <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib34" title="">34</a>, Section 4.3]</cite>.</p> </div> <div class="ltx_para" id="S12.SS2.p4"> <p class="ltx_p" id="S12.SS2.p4.1">Overall, greater coordination between research in software and hardware testing presents opportunities for knowledge transfer and synthesis. 
This can increase the number of applications and advance the use of machine learning for dynamic-based verification.</p> </div> </section> <section class="ltx_subsection" id="S12.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">12.3 </span>Evaluating the Strengths and Weaknesses of ML Techniques</h3> <div class="ltx_para" id="S12.SS3.p1"> <p class="ltx_p" id="S12.SS3.p1.1">The only example seen of research that compared two different types of ML techniques was in <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib78" title="">78</a>]</cite>, where a reinforcement learning (RL) technique was compared to an existing genetic algorithm. No research was found comparing supervised techniques with RL (or Evolutionary Algorithm (EA)) methods. This gap presents an opportunity for future research to examine the relative strengths of different types of ML techniques, particularly for coverage closure in relation to their use of training data.</p> </div> <div class="ltx_para" id="S12.SS3.p2"> <p class="ltx_p" id="S12.SS3.p2.1">Supervised methods trained offline often used data acquired through other means, such as random stimulus <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib104" title="">104</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib60" title="">60</a>, <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib41" title="">41</a>]</cite>. Additionally, it has been shown that random stimulus outperformed RL for low coverage percentages, negating its benefit over supervised methods at the start of learning. The open question is whether RL or supervised techniques are more efficient overall at reaching the hard-to-hit coverage points. 
Specifically, does the greater control an RL or EA method has to explore the space at the start of learning enable it to reach coverage closure with fewer simulations, or is the often randomly created dataset used by supervised methods, which learn offline and cannot influence their own training data, just as good? To address these questions, it is recommended that future research:</p> <ul class="ltx_itemize" id="S12.I2"> <li class="ltx_item" id="S12.I2.ix1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S12.I2.ix1.p1"> <p class="ltx_p" id="S12.I2.ix1.p1.1"><span class="ltx_text ltx_font_bold" id="S12.I2.ix1.p1.1.1">Conduct comparative studies:</span> Perform direct comparisons between supervised, RL, and EA methods across various benchmarks to identify their strengths and weaknesses in different scenarios.</p> </div> </li> <li class="ltx_item" id="S12.I2.ix2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S12.I2.ix2.p1"> <p class="ltx_p" id="S12.I2.ix2.p1.1"><span class="ltx_text ltx_font_bold" id="S12.I2.ix2.p1.1.1">Analyse training data utilisation:</span> Investigate how the source and quality of training data impact the performance of each ML technique, particularly in achieving coverage closure.</p> </div> </li> <li class="ltx_item" id="S12.I2.ix3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S12.I2.ix3.p1"> <p class="ltx_p" id="S12.I2.ix3.p1.1"><span class="ltx_text ltx_font_bold" id="S12.I2.ix3.p1.1.1">Evaluate efficiency:</span> Measure the efficiency of each technique in terms of the number of simulations required to reach high coverage, considering both initial learning phases and long-term performance.</p> </div> </li> <li class="ltx_item" id="S12.I2.ix4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S12.I2.ix4.p1"> <p class="ltx_p"
id="S12.I2.ix4.p1.1"><span class="ltx_text ltx_font_bold" id="S12.I2.ix4.p1.1.1">Explore hybrid approaches:</span> Examine the potential benefits of combining supervised and RL/EA methods to leverage the strengths of both approaches.</p> </div> </li> </ul> </div> </section> <section class="ltx_subsection" id="S12.SS4"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">12.4 </span>Use of Open Source Designs and Datasets</h3> <div class="ltx_para" id="S12.SS4.p1"> <p class="ltx_p" id="S12.SS4.p1.1">The range of applications, benchmarks, and metrics used to assess ML techniques can make it challenging to compare techniques (Section <a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#S11" title="11 Evaluation of Machine Learning in Dynamic Verification ‣ Review of Machine Learning for Micro-Electronic Design Verification"><span class="ltx_text ltx_ref_tag">11</span></a>). Also, those wishing to apply a technique in a different application cannot easily establish the differences between the tested environment and their own. Greater use of open source designs and production of common data sets are potential solutions.</p> </div> <div class="ltx_para" id="S12.SS4.p2"> <p class="ltx_p" id="S12.SS4.p2.1">Benchmarking machine learning verification techniques on open source designs enables others to replicate the work and compare the performance of techniques. Some of the surveyed works already use open source designs. To enable meaningful benchmarks, open source coverage models, verification environments and standardised test procedures are also needed. 
In the wider field of machine learning, a similar need for standardised testing environments led to the development of OpenAI Gym <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib15" title="">15</a>]</cite> in the reinforcement learning community.</p> </div> <div class="ltx_para" id="S12.SS4.p3"> <p class="ltx_p" id="S12.SS4.p3.1">Data is central to most machine learning techniques. One of the present difficulties in hardware verification is that acquiring data requires expertise in running test benches. This is a specialist skill that includes knowledge of SystemVerilog, scoreboards, monitors, and coverage definition; these are skills not necessarily possessed by machine learning experts. Again, taking inspiration from the wider machine learning community, datasets such as ImageNet <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib24" title="">24</a>]</cite> provided the platform for significant breakthroughs in the use of machine learning for image classification<span class="ltx_note ltx_role_footnote" id="footnote5"><sup class="ltx_note_mark">5</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">5</sup><span class="ltx_tag ltx_tag_note">5</span>Ksenia Se, “The Recipe for an AI Revolution: How ImageNet, AlexNet and GPUs Changed AI Forever”, <a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.turingpost.com/p/cvhistory6" title="">https://www.turingpost.com/p/cvhistory6</a></span></span></span>. 
The need for large-scale, open datasets was also one of the recommendations of a recent survey into the use of machine learning from a verification industry perspective <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2503.11687v1#bib.bib110" title="">110</a>]</cite>.</p> </div> <div class="ltx_para" id="S12.SS4.p4"> <p class="ltx_p" id="S12.SS4.p4.1">Open source designs, including RISC-V<span class="ltx_note ltx_role_footnote" id="footnote6"><sup class="ltx_note_mark">6</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">6</sup><span class="ltx_tag ltx_tag_note">6</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://riscv.org/" title="">https://riscv.org/</a></span></span></span>, have matured to the point where they are used in commercial products and openly supported by companies including Thales<span class="ltx_note ltx_role_footnote" id="footnote7"><sup class="ltx_note_mark">7</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">7</sup><span class="ltx_tag ltx_tag_note">7</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.thalesgroup.com/en/group/journalist/press-release/thales-joins-risc-v-foundation-help-secure-open-source" title="">https://www.thalesgroup.com/en/group/journalist/press-release/thales-joins-risc-v-foundation-help-secure-open-source</a></span></span></span> and Western Digital<span class="ltx_note ltx_role_footnote" id="footnote8"><sup class="ltx_note_mark">8</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">8</sup><span class="ltx_tag ltx_tag_note">8</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://blog.westerndigital.com/risc-v-swerv-core-open-source/" title="">https://blog.westerndigital.com/risc-v-swerv-core-open-source/</a></span></span></span>. 
There is an opportunity for commercial companies to produce datasets, benchmark environments and metrics for these open source designs and challenge the machine learning community to find high-performing, commercially viable machine learning techniques to verify them. This would enable industry to drive research in a direction that is relevant and commercially beneficial.</p> </div> </section> <section class="ltx_subsection" id="S12.SS5"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">12.5 </span>The Prevalence of Open Source Designs in Commercial Products</h3> <div class="ltx_para" id="S12.SS5.p1"> <p class="ltx_p" id="S12.SS5.p1.1">The increasing maturity of open-source designs of processor cores raises the possibility of their use by electronic design companies unaccustomed to the verification needs of core design. For reference, ARM cores are subject to many hours of simulation-based testing running on high-performance clusters. A typical company using an open-source design does not possess the computational resources, expertise, or access to the EDA tools required to achieve similar levels of verification. Therefore, a need and opportunity exist for open research that can be used by small electronic design houses to verify their applications based on open-source core designs.</p> </div> </section> </section> <section class="ltx_section" id="S13" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">13 </span>Challenges for Future Research</h2> <div class="ltx_para" id="S13.p1"> <p class="ltx_p" id="S13.p1.1">The results of this review highlight the difficulties of applying machine learning to the verification of microelectronic devices in a real-world project. There are many examples of successful applications of machine learning, but also many configurations of elements that affect the learning. 
These elements include the abstraction level of both the input and output spaces of the ML model, what the machine learning controls, whether the ML is used to target a single coverage hole or many holes, the hyper-parameters of the ML models, and more. This review concludes that while there are many successful applications of ML for verification, there is very little understanding of why each application was successful. This information is crucial to generalise a technique to different applications. </p> </div> <div class="ltx_para" id="S13.p2"> <p class="ltx_p" id="S13.p2.1">To gain widespread adoption, the use of machine learning techniques for verification could look to the adoption of formal techniques as a case study. Once seen as requiring complex setup and specialist skills, formal techniques are now more accessible to verification engineers. This has been achieved by offering guided workflows to configure and run the tool as a “push button” operation in industrial EDA software suites.</p> </div> <div class="ltx_para" id="S13.p3"> <p class="ltx_p" id="S13.p3.1">In summary, the questions for future research into the use of ML for verification are as follows.</p> <ul class="ltx_itemize" id="S13.I1"> <li class="ltx_item" id="S13.I1.ix1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S13.I1.ix1.p1"> <p class="ltx_p" id="S13.I1.ix1.p1.1">Why does a machine learning technique work for a specific application?</p> </div> </li> <li class="ltx_item" id="S13.I1.ix2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S13.I1.ix2.p1"> <p class="ltx_p" id="S13.I1.ix2.p1.1">How would the technique transfer between different applications?</p> </div> </li> <li class="ltx_item" id="S13.I1.ix3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S13.I1.ix3.p1"> <p class="ltx_p" id="S13.I1.ix3.p1.1">What are the limitations 
of the technique?</p> </div> </li> <li class="ltx_item" id="S13.I1.ix4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">-</span> <div class="ltx_para" id="S13.I1.ix4.p1"> <p class="ltx_p" id="S13.I1.ix4.p1.1">What domain knowledge, assumptions, and constraints are needed to apply the technique?</p> </div> </li> </ul> </div> </section> <section class="ltx_section" id="S14" lang="en"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">14 </span>Acknowledgments</h2> <div class="ltx_para" id="S14.p1"> <p class="ltx_p" id="S14.p1.1">The authors acknowledge the assistance of Maryam Ghaffari Saadat in the preparation of this review. </p> </div> </section> <section class="ltx_bibliography" id="bib" lang="en"> <h2 class="ltx_title ltx_title_bibliography">References</h2> <ul class="ltx_biblist"> <li class="ltx_bibitem" id="bib.bib1"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Iee [2020]</span> <span class="ltx_bibblock"> “IEEE standard for universal verification methodology language reference manual,” <em class="ltx_emph ltx_font_italic" id="bib.bib1.1.1">IEEE Std 1800.2-2020 (Revision of IEEE Std 1800.2-2017)</em>, pp. 1–458, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib2"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Iee [2024]</span> <span class="ltx_bibblock"> “IEEE standard for SystemVerilog–unified hardware design, specification, and verification language,” <em class="ltx_emph ltx_font_italic" id="bib.bib2.1.1">IEEE Std 1800-2023 (Revision of IEEE Std 1800-2017)</em>, pp. 1–1354, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib3"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">iee [2013]</span> <span class="ltx_bibblock"> “ISO/IEC/IEEE international standard - software and systems engineering —software testing —part 1: concepts and definitions,” <em class="ltx_emph ltx_font_italic" id="bib.bib3.1.1">ISO/IEC/IEEE 29119-1:2013(E)</em>, pp. 1–64, 2013. 
</span> </li> <li class="ltx_bibitem" id="bib.bib4"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">AboelMaged et al. [2021]</span> <span class="ltx_bibblock"> M. AboelMaged, M. Mashaly, and M. A. A. E. Ghany, “Online constraints update using machine learning for accelerating hardware verification,” in <em class="ltx_emph ltx_font_italic" id="bib.bib4.1.1">2021 3rd Novel Intelligent and Leading Emerging Sciences Conference (NILES)</em>, 2021, pp. 113–116. </span> </li> <li class="ltx_bibitem" id="bib.bib5"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Alhaddad et al. [2021]</span> <span class="ltx_bibblock"> M. A. Alhaddad, S. E. M. Hussein, A. G. Helmy, N. R. Nagy, M. Z. M. Ghazy, and A. H. Yousef, “Utilization of machine learning in rtl-gl signals correlation,” in <em class="ltx_emph ltx_font_italic" id="bib.bib5.1.1">2021 8th International Conference on Signal Processing and Integrated Networks (SPIN)</em>, 2021, pp. 732–737. </span> </li> <li class="ltx_bibitem" id="bib.bib6"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Ambalakkat and Nelson [2019]</span> <span class="ltx_bibblock"> S. M. Ambalakkat and E. Nelson, “Simulation runtime optimization of constrained random verification using machine learning algorithms,” in <em class="ltx_emph ltx_font_italic" id="bib.bib6.1.1">DVCon USA</em>, 2019. </span> </li> <li class="ltx_bibitem" id="bib.bib7"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Baras et al. [2011]</span> <span class="ltx_bibblock"> D. Baras, S. Fine, L. Fournier, D. Geiger, and A. Ziv, “Automatic boosting of cross-product coverage using bayesian networks,” <em class="ltx_emph ltx_font_italic" id="bib.bib7.1.1">International Journal on Software Tools for Technology Transfer</em>, vol. 13, pp. 247–261, 2011. </span> </li> <li class="ltx_bibitem" id="bib.bib8"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Bergeron [2003]</span> <span class="ltx_bibblock"> J. 
Bergeron, <em class="ltx_emph ltx_font_italic" id="bib.bib8.1.1">Writing Testbenches: Functional Verification of HDL Models, Second Edition</em>. Kluwer Academic Publishers, 2003. </span> </li> <li class="ltx_bibitem" id="bib.bib9"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Bernardeschi et al. [2013]</span> <span class="ltx_bibblock"> C. Bernardeschi, L. Cassano, M. G. C. A. Cimino, and A. Domenici, “Gabes: A genetic algorithm based environment for SEU testing in SRAM-FPGAs,” <em class="ltx_emph ltx_font_italic" id="bib.bib9.1.1">Journal of Systems Architecture</em>, vol. 59, pp. 1243–1254, 2013. </span> </li> <li class="ltx_bibitem" id="bib.bib10"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Bernardi et al. [2008]</span> <span class="ltx_bibblock"> P. Bernardi, K. Christou, M. Grosso, M. K. Michael, E. Sánchez, and M. S. Reorda, “Exploiting MOEA to automatically generate test programs for path-delay faults in microprocessors,” in <em class="ltx_emph ltx_font_italic" id="bib.bib10.1.1">Applications of Evolutionary Computing</em>, M. Giacobini, A. Brabazon, S. Cagnoni, G. A. D. Caro, R. Drechsler, A. Ekárt, A. I. Esparcia-Alcázar, M. Farooq, A. Fink, J. McCormack, M. O’Neill, J. Romero, F. Rothlauf, G. Squillero, A. Şima Uyar, and S. Yang, Eds. Springer Berlin Heidelberg, 2008, pp. 224–234. </span> </li> <li class="ltx_bibitem" id="bib.bib11"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Bhargav et al. [2021]</span> <span class="ltx_bibblock"> H. Bhargav, V. Vs, B. Kumar, and V. Singh, “Enhancing testbench quality via genetic algorithm,” in <em class="ltx_emph ltx_font_italic" id="bib.bib11.1.1">2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS)</em>, 2021, pp. 652–656. </span> </li> <li class="ltx_bibitem" id="bib.bib12"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Bose et al. [2001]</span> <span class="ltx_bibblock"> M. Bose, J. Shin, E. M. Rudnick, T. Dukes, and M. 
Abadir, “A genetic approach to automatic bias generation for biased random instruction generation,” in <em class="ltx_emph ltx_font_italic" id="bib.bib12.1.1">Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546)</em>, vol. 1, 2001, pp. 442–448 vol. 1. </span> </li> <li class="ltx_bibitem" id="bib.bib13"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Braun et al. [2003]</span> <span class="ltx_bibblock"> M. Braun, W. Rosenstiel, and K.-D. Schubert, “Comparison of bayesian networks and data mining for coverage directed verification category simulation-based verification,” in <em class="ltx_emph ltx_font_italic" id="bib.bib13.1.1">Eighth IEEE International High-Level Design Validation and Test Workshop</em>, 2003, pp. 91–95. </span> </li> <li class="ltx_bibitem" id="bib.bib14"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Braun et al. [2004]</span> <span class="ltx_bibblock"> M. Braun, S. Fine, and A. Ziv, “Enhancing the efficiency of bayesian network based coverage directed test generation,” in <em class="ltx_emph ltx_font_italic" id="bib.bib14.1.1">Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)</em>, 2004, pp. 75–80. </span> </li> <li class="ltx_bibitem" id="bib.bib15"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Brockman et al. [2016]</span> <span class="ltx_bibblock"> G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, “Openai gym,” 2016. </span> </li> <li class="ltx_bibitem" id="bib.bib16"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Canakci et al. [2021]</span> <span class="ltx_bibblock"> S. Canakci, L. Delshadtehrani, F. Eris, M. B. Taylor, M. Egele, and A. Joshi, “Directfuzz: Automated test generation for rtl designs using directed graybox fuzzing,” in <em class="ltx_emph ltx_font_italic" id="bib.bib16.1.1">2021 58th ACM/IEEE Design Automation Conference (DAC)</em>, 2021, pp. 529–534. 
</span> </li> <li class="ltx_bibitem" id="bib.bib17"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Chang et al. [2010]</span> <span class="ltx_bibblock"> P.-H. Chang, D. Drmanac, and L.-C. Wang, “Online selection of effective functional test programs based on novelty detection,” in <em class="ltx_emph ltx_font_italic" id="bib.bib17.1.1">2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)</em>, 2010, pp. 762–769. </span> </li> <li class="ltx_bibitem" id="bib.bib18"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Chen et al. [2012]</span> <span class="ltx_bibblock"> W. Chen, N. Sumikawa, L.-C. Wang, J. Bhadra, X. Feng, and M. S. Abadir, “Novel test detection to improve simulation efficiency: a commercial experiment,” in <em class="ltx_emph ltx_font_italic" id="bib.bib18.1.1">Proceedings of the International Conference on Computer-Aided Design</em>. Association for Computing Machinery, 2012, pp. 101–108. </span> </li> <li class="ltx_bibitem" id="bib.bib19"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Corno et al. [2004a]</span> <span class="ltx_bibblock"> F. Corno, E. Sanchez, M. S. Reorda, and G. Squillero, “Automatic test program generation: a case study,” <em class="ltx_emph ltx_font_italic" id="bib.bib19.1.1">IEEE Design & Test of Computers</em>, vol. 21, pp. 102–109, 2004. </span> </li> <li class="ltx_bibitem" id="bib.bib20"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Corno et al. [2004b]</span> <span class="ltx_bibblock"> F. Corno, E. Sanchez, M. S. Reorda, and G. Squillero, “Code generation for functional validation of pipelined microprocessors,” <em class="ltx_emph ltx_font_italic" id="bib.bib20.1.1">Journal of Electronic Testing</em>, vol. 20, pp. 269–278, 2004. </span> </li> <li class="ltx_bibitem" id="bib.bib21"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Cristescu and Bob [2021]</span> <span class="ltx_bibblock"> M.-C. Cristescu and C. 
Bob, “Flexible framework for stimuli redundancy reduction in functional verification using artificial neural networks,” in <em class="ltx_emph ltx_font_italic" id="bib.bib21.1.1">2021 International Symposium on Signals, Circuits and Systems (ISSCS)</em>. IEEE, 7 2021, pp. 1–4. </span> </li> <li class="ltx_bibitem" id="bib.bib22"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Danciu and Dinu [2022]</span> <span class="ltx_bibblock"> G. M. Danciu and A. Dinu, “Coverage fulfillment automation in hardware functional verification using genetic algorithms,” <em class="ltx_emph ltx_font_italic" id="bib.bib22.1.1">Applied Sciences</em>, vol. 12, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib23"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Das et al. [2024]</span> <span class="ltx_bibblock"> S. Das, H. Patel, C. Karfa, K. Bellamkonda, R. Reddy, D. Puri, A. Jain, A. Sur, and P. Prajapati, “Rtl simulation acceleration with machine learning models,” in <em class="ltx_emph ltx_font_italic" id="bib.bib23.1.1">2024 25th International Symposium on Quality Electronic Design (ISQED)</em>, 2024, pp. 1–7. </span> </li> <li class="ltx_bibitem" id="bib.bib24"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Deng et al. [2009]</span> <span class="ltx_bibblock"> J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in <em class="ltx_emph ltx_font_italic" id="bib.bib24.1.1">2009 IEEE Conference on Computer Vision and Pattern Recognition</em>, 2009, pp. 248–255. </span> </li> <li class="ltx_bibitem" id="bib.bib25"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Dimitrakopoulos et al. [2023]</span> <span class="ltx_bibblock"> G. Dimitrakopoulos, E. Kallitsounakis, Z. Takakis, A. Stefanidis, and C. 
Nicopoulos, “Multi-armed bandits for autonomous test application in risc-v processor verification,” in <em class="ltx_emph ltx_font_italic" id="bib.bib25.1.1">2023 12th International Conference on Modern Circuits and Systems Technologies (MOCAST)</em>, 2023, pp. 1–5. </span> </li> <li class="ltx_bibitem" id="bib.bib26"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Dinu et al. [2021]</span> <span class="ltx_bibblock"> A. Dinu, G. M. Danciu, and Ș. Gheorghe, “Level up in verification: learning from functional snapshots,” in <em class="ltx_emph ltx_font_italic" id="bib.bib26.1.1">2021 16th International Conference on Engineering of Modern Electric Systems (EMES)</em>, 2021, pp. 1–4. </span> </li> <li class="ltx_bibitem" id="bib.bib27"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Elver and Nagarajan [2016]</span> <span class="ltx_bibblock"> M. Elver and V. Nagarajan, “Mcversi: A test generation framework for fast memory consistency verification in simulation,” in <em class="ltx_emph ltx_font_italic" id="bib.bib27.1.1">2016 IEEE International Symposium on High Performance Computer Architecture (HPCA)</em>, 2016, pp. 618–630. </span> </li> <li class="ltx_bibitem" id="bib.bib28"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Fajcik et al. [2017]</span> <span class="ltx_bibblock"> M. Fajcik, P. Smrz, and M. Zachariasova, “Automation of processor verification using recurrent neural networks,” in <em class="ltx_emph ltx_font_italic" id="bib.bib28.1.1">2017 18th International Workshop on Microprocessor and SOC Test and Verification (MTV)</em>, 2017, pp. 15–20. </span> </li> <li class="ltx_bibitem" id="bib.bib29"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Farkash et al. [2014]</span> <span class="ltx_bibblock"> M. Farkash, B. Hickerson, and M.
Behm, “Coverage learned targeted validation for incremental hw changes,” in <em class="ltx_emph ltx_font_italic" id="bib.bib29.1.1">2014 51st ACM/EDAC/IEEE Design Automation Conference (DAC)</em>, 2014, pp. 1–6. </span> </li> <li class="ltx_bibitem" id="bib.bib30"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Farkash et al. [2015]</span> <span class="ltx_bibblock"> M. Farkash, B. Hickerson, and B. Samynathan, “Mining coverage data for test set coverage efficiency,” in <em class="ltx_emph ltx_font_italic" id="bib.bib30.1.1">Design and Verification Conference, DVCON 2015</em>, 2015. </span> </li> <li class="ltx_bibitem" id="bib.bib31"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Fine and Ziv [2003a]</span> <span class="ltx_bibblock"> S. Fine and A. Ziv, “Enhancing the control and efficiency of the covering process,” in <em class="ltx_emph ltx_font_italic" id="bib.bib31.1.1">Eighth IEEE International High-Level Design Validation and Test Workshop</em>, 2003, pp. 96–101. </span> </li> <li class="ltx_bibitem" id="bib.bib32"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Fine et al. [2006]</span> <span class="ltx_bibblock"> S. Fine, A. Freund, I. Jaeger, Y. Mansour, Y. Naveh, and A. Ziv, “Harnessing machine learning to improve the success rate of stimuli generation,” <em class="ltx_emph ltx_font_italic" id="bib.bib32.1.1">IEEE Transactions on Computers</em>, vol. 55, pp. 1344–1355, 2006. </span> </li> <li class="ltx_bibitem" id="bib.bib33"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Fine and Ziv [2003b]</span> <span class="ltx_bibblock"> S. Fine and A. Ziv, “Coverage directed test generation for functional verification using bayesian networks,” in <em class="ltx_emph ltx_font_italic" id="bib.bib33.1.1">Proceedings of the 40th Annual Design Automation Conference</em>. Association for Computing Machinery, 2003, pp. 286–291. 
</span> </li> <li class="ltx_bibitem" id="bib.bib34"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Fontes and Gay [2023]</span> <span class="ltx_bibblock"> A. Fontes and G. Gay, “The integration of machine learning into automated test generation: A systematic mapping study,” <em class="ltx_emph ltx_font_italic" id="bib.bib34.1.1">Software Testing, Verification and Reliability</em>, vol. 33, p. e1845, 6 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib35"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Foster [2022]</span> <span class="ltx_bibblock"> H. Foster, “2022 wilson research group ic/asic functional verification trends,” Siemens Digital Industries Software, Tech. Rep., 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib36"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Francisco et al. [2020]</span> <span class="ltx_bibblock"> L. Francisco, T. Lagare, A. Jain, S. Chaudhary, M. Kulkarni, D. Sardana, W. R. Davis, and P. Franzon, “Design rule checking with a cnn based feature extractor,” in <em class="ltx_emph ltx_font_italic" id="bib.bib36.1.1">2020 ACM/IEEE 2nd Workshop on Machine Learning for CAD (MLCAD)</em>, 2020, pp. 9–14. </span> </li> <li class="ltx_bibitem" id="bib.bib37"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Gad et al. [2021]</span> <span class="ltx_bibblock"> M. Gad, M. Aboelmaged, M. Mashaly, and M. A. A. el Ghany, “Efficient sequence generation for hardware verification using machine learning,” in <em class="ltx_emph ltx_font_italic" id="bib.bib37.1.1">2021 28th IEEE International Conference on Electronics, Circuits, and Systems (ICECS)</em>, 2021, pp. 1–5. </span> </li> <li class="ltx_bibitem" id="bib.bib38"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Gadde et al. [2024]</span> <span class="ltx_bibblock"> D. N. Gadde, T. Nalapat, A. Kumar, D. Lettnin, W. Kunz, and S. Simon, “Efficient stimuli generation using reinforcement learning in design verification,” 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib39"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Gal et al. [2020a]</span> <span class="ltx_bibblock"> R. Gal, E. Haber, and A. Ziv, “Using dnns and smart sampling for coverage closure acceleration,” in <em class="ltx_emph ltx_font_italic" id="bib.bib39.1.1">2020 ACM/IEEE 2nd Workshop on Machine Learning for CAD (MLCAD)</em>, 2020, pp. 15–20. </span> </li> <li class="ltx_bibitem" id="bib.bib40"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Gal et al. [2020b]</span> <span class="ltx_bibblock"> R. Gal, G. Simchoni, and A. Ziv, “Using machine learning clustering to find large coverage holes,” in <em class="ltx_emph ltx_font_italic" id="bib.bib40.1.1">2020 ACM/IEEE 2nd Workshop on Machine Learning for CAD (MLCAD)</em>, 2020, pp. 139–144. </span> </li> <li class="ltx_bibitem" id="bib.bib41"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Ghany and Ismail [2021]</span> <span class="ltx_bibblock"> M. A. A. E. Ghany and K. A. Ismail, “Speed up functional coverage closure of cordic designs using machine learning models,” in <em class="ltx_emph ltx_font_italic" id="bib.bib41.1.1">2021 International Conference on Microelectronics (ICM)</em>, 2021, pp. 91–95. </span> </li> <li class="ltx_bibitem" id="bib.bib42"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Gogri et al. [2020]</span> <span class="ltx_bibblock"> S. Gogri, J. Hu, A. Tyagi, M. Quinn, S. Ramachandran, F. Batool, and A. Jagadeesh, “Machine learning-guided stimulus generation for functional verification,” in <em class="ltx_emph ltx_font_italic" id="bib.bib42.1.1">Proceedings of the Design and Verification Conference (DVCON-USA), Virtual Conference</em>, 2020, pp. 2–5. </span> </li> <li class="ltx_bibitem" id="bib.bib43"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Gogri et al. [2022]</span> <span class="ltx_bibblock"> S. Gogri, A. Tyagi, M. Quinn, and J. 
Hu, “Transaction level stimulus optimization in functional verification using machine learning predictors,” in <em class="ltx_emph ltx_font_italic" id="bib.bib43.1.1">2022 23rd International Symposium on Quality Electronic Design (ISQED)</em>, 2022, pp. 71–76. </span> </li> <li class="ltx_bibitem" id="bib.bib44"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Guo et al. [2011]</span> <span class="ltx_bibblock"> L. Guo, J. Yi, L. Zhang, X. Wang, and D. Tong, “Cga: Combining cluster analysis with genetic algorithm for regression suite reduction of microprocessors,” in <em class="ltx_emph ltx_font_italic" id="bib.bib44.1.1">2011 IEEE International SOC Conference</em>, 2011, pp. 207–212. </span> </li> <li class="ltx_bibitem" id="bib.bib45"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Guo et al. [2010]</span> <span class="ltx_bibblock"> Q. Guo, T. Chen, H. Shen, Y. Chen, and W. Hu, “On-the-fly reduction of stimuli for functional verification,” in <em class="ltx_emph ltx_font_italic" id="bib.bib45.1.1">2010 19th IEEE Asian Test Symposium</em>, 2010, pp. 448–454. </span> </li> <li class="ltx_bibitem" id="bib.bib46"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Guo et al. [2014]</span> <span class="ltx_bibblock"> Q. Guo, T. Chen, Y. Chen, R. Wang, H. Chen, W. Hu, and G. Chen, “Pre-silicon bug forecast,” <em class="ltx_emph ltx_font_italic" id="bib.bib46.1.1">IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems</em>, vol. 33, pp. 451–463, 2014. </span> </li> <li class="ltx_bibitem" id="bib.bib47"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Guzey et al. [2008]</span> <span class="ltx_bibblock"> O. Guzey, L.-C. Wang, J. Levitt, and H. Foster, “Functional test selection based on unsupervised support vector analysis,” in <em class="ltx_emph ltx_font_italic" id="bib.bib47.1.1">Proceedings of the 45th Annual Design Automation Conference</em>. Association for Computing Machinery, 2008, pp. 262–267. 
</span> </li> <li class="ltx_bibitem" id="bib.bib48"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Habibi et al. [2006]</span> <span class="ltx_bibblock"> A. Habibi, S. Tahar, A. Samarah, D. Li, and O. A. Mohamed, “Efficient assertion based verification using tlm,” in <em class="ltx_emph ltx_font_italic" id="bib.bib48.1.1">Proceedings of the Design Automation & Test in Europe Conference</em>, vol. 1, 2006, pp. 1–6. </span> </li> <li class="ltx_bibitem" id="bib.bib49"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Halim et al. [2022]</span> <span class="ltx_bibblock"> Y. M. Halim, K. A. Ismail, M. A. A. E. Ghany, and S. A. Ibrahim, “Reinforcement-learning based method for accelerating functional coverage closure of traffic light controller dynamic digital design,” in <em class="ltx_emph ltx_font_italic" id="bib.bib49.1.1">2022 32nd International Conference on Computer Theory and Applications (ICCTA)</em>, 2022, pp. 44–50. </span> </li> <li class="ltx_bibitem" id="bib.bib50"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Hu et al. [2016]</span> <span class="ltx_bibblock"> J. Hu, T. Li, and S. Li, “Equivalence checking between slm and rtl using machine learning techniques,” in <em class="ltx_emph ltx_font_italic" id="bib.bib50.1.1">2016 17th International Symposium on Quality Electronic Design (ISQED)</em>, 2016, pp. 129–134. </span> </li> <li class="ltx_bibitem" id="bib.bib51"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Huang et al. [2021]</span> <span class="ltx_bibblock"> G. Huang, J. Hu, Y. He, J. Liu, M. Ma, Z. Shen, J. Wu, Y. Xu, H. Zhang, K. Zhong, X. Ning, Y. Ma, H. Yang, B. Yu, H. Yang, and Y. Wang, “Machine learning for electronic design automation: A survey,” <em class="ltx_emph ltx_font_italic" id="bib.bib51.1.1">ACM Trans. Des. Autom. Electron. Syst.</em>, vol. 26, 6 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib52"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Huang et al.
[2022]</span> <span class="ltx_bibblock"> Q. Huang, H. Shojaei, F. Zyda, A. Nazi, S. Vasudevan, S. Chatterjee, and R. Ho, “Test parameter tuning with blackbox optimization: A simple yet effective way to improve coverage,” in <em class="ltx_emph ltx_font_italic" id="bib.bib52.1.1">Proceedings of the design and verification conference and exhibition US (DVCon)</em>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib53"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Ikram and Ellis [2017]</span> <span class="ltx_bibblock"> S. Ikram and J. Ellis, “Dynamic regression suite generation using coverage-based clustering,” in <em class="ltx_emph ltx_font_italic" id="bib.bib53.1.1">Proceedings of the design and verification conference and exhibition US (DVCon)</em>, 2017. </span> </li> <li class="ltx_bibitem" id="bib.bib54"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Ioannides and Eder [2012]</span> <span class="ltx_bibblock"> C. Ioannides and K. I. Eder, “Coverage-directed test generation automated by machine learning – a review,” <em class="ltx_emph ltx_font_italic" id="bib.bib54.1.1">ACM Trans. Des. Autom. Electron. Syst.</em>, vol. 17, 1 2012. </span> </li> <li class="ltx_bibitem" id="bib.bib55"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Ismail and Ghany [2021b]</span> <span class="ltx_bibblock"> K. A. Ismail and M. A. A. E. Ghany, “High performance machine learning models for functional verification of hardware designs,” in <em class="ltx_emph ltx_font_italic" id="bib.bib55.1.1">2021 3rd Novel Intelligent and Leading Emerging Sciences Conference (NILES)</em>, 2021, pp. 15–18. </span> </li> <li class="ltx_bibitem" id="bib.bib56"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Ismail and Ghany [2021a]</span> <span class="ltx_bibblock"> K. A. Ismail and M. A. A. E. 
Ghany, “Survey on machine learning algorithms enhancing the functional verification process,” <em class="ltx_emph ltx_font_italic" id="bib.bib56.1.1">Electronics</em>, vol. 10, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib57"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Jang et al. [2022]</span> <span class="ltx_bibblock"> H. Jang, S. Yim, S. Choi, S. B. Choi, and A. Cheng, “Machine learning based verification planning methodology using design and verification data,” in <em class="ltx_emph ltx_font_italic" id="bib.bib57.1.1">Design and Verification Conf.(DVCON)</em>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib58"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Jayasena and Mishra [2024]</span> <span class="ltx_bibblock"> A. Jayasena and P. Mishra, “Directed test generation for hardware validation: A survey,” <em class="ltx_emph ltx_font_italic" id="bib.bib58.1.1">ACM Comput. Surv.</em>, vol. 56, 1 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib59"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Kamath et al. [2012]</span> <span class="ltx_bibblock"> V. Kamath, W. Chen, N. Sumikawa, and L.-C. Wang, “Functional test content optimization for peak-power validation — an experimental study,” in <em class="ltx_emph ltx_font_italic" id="bib.bib59.1.1">2012 IEEE International Test Conference</em>, 2012, pp. 1–10. </span> </li> <li class="ltx_bibitem" id="bib.bib60"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Katz et al. [2011]</span> <span class="ltx_bibblock"> Y. Katz, M. Rimon, A. Ziv, and G. Shaked, “Learning microarchitectural behaviors to improve stimuli generation quality,” in <em class="ltx_emph ltx_font_italic" id="bib.bib60.1.1">Proceedings of the 48th Design Automation Conference</em>. Association for Computing Machinery, 2011, pp. 848–853. </span> </li> <li class="ltx_bibitem" id="bib.bib61"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Krishna et al. 
[2023]</span> <span class="ltx_bibblock"> N. Krishna, J. P. Shah, and S. J., “Improving the functional coverage closure of network-on-chip using genetic algorithm,” in <em class="ltx_emph ltx_font_italic" id="bib.bib61.1.1">2023 IEEE International Symposium on Circuits and Systems (ISCAS)</em>, 2023, pp. 1–5. </span> </li> <li class="ltx_bibitem" id="bib.bib62"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Kumar et al. [2023]</span> <span class="ltx_bibblock"> B. Kumar, G. Parthasarathy, S. Nanda, and S. Rajakumar, “Optimizing constrained random verification with ml and bayesian estimation,” in <em class="ltx_emph ltx_font_italic" id="bib.bib62.1.1">2023 ACM/IEEE 5th Workshop on Machine Learning for CAD (MLCAD)</em>, 2023, pp. 1–6. </span> </li> <li class="ltx_bibitem" id="bib.bib63"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Laeufer et al. [2018]</span> <span class="ltx_bibblock"> K. Laeufer, J. Koenig, D. Kim, J. Bachrach, and K. Sen, “Rfuzz: Coverage-directed fuzz testing of rtl on fpgas,” in <em class="ltx_emph ltx_font_italic" id="bib.bib63.1.1">2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)</em>, 2018, pp. 1–8. </span> </li> <li class="ltx_bibitem" id="bib.bib64"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Li et al. [2023]</span> <span class="ltx_bibblock"> T. Li, M. Shi, H. Zou, and W. Qu, “Towards accelerating assertion coverage using surrogate logic models,” in <em class="ltx_emph ltx_font_italic" id="bib.bib64.1.1">2023 IEEE International Symposium on Circuits and Systems (ISCAS)</em>, 2023, pp. 1–5. </span> </li> <li class="ltx_bibitem" id="bib.bib65"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Li et al. [2024]</span> <span class="ltx_bibblock"> Z. Li, T. Li, C. Liu, L. Wang, C. Liu, Y. Guo, and W. 
Qu, “Towards evaluating seu type soft error effects with graph attention network,” in <em class="ltx_emph ltx_font_italic" id="bib.bib65.1.1">2024 2nd International Symposium of Electronics Design Automation (ISEDA)</em>, 2024, pp. 241–246. </span> </li> <li class="ltx_bibitem" id="bib.bib66"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Liang et al. [2023]</span> <span class="ltx_bibblock"> R. Liang, N. Pinckney, Y. Chai, H. Ren, and B. Khailany, “Late breaking results: Test selection for rtl coverage by unsupervised learning from fast functional simulation,” in <em class="ltx_emph ltx_font_italic" id="bib.bib66.1.1">2023 60th ACM/IEEE Design Automation Conference (DAC)</em>, 2023, pp. 1–2. </span> </li> <li class="ltx_bibitem" id="bib.bib67"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Liu et al. [2012]</span> <span class="ltx_bibblock"> L. Liu, D. Sheridan, W. Tuohy, and S. Vasudevan, “A technique for test coverage closure using goldmine,” <em class="ltx_emph ltx_font_italic" id="bib.bib67.1.1">IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems</em>, vol. 31, pp. 790–803, 2012. </span> </li> <li class="ltx_bibitem" id="bib.bib68"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Mandouh et al. [2018]</span> <span class="ltx_bibblock"> E. E. Mandouh, A. Salem, M. Amer, and A. G. Wassal, “Cross-product functional coverage analysis using machine learning clustering techniques,” in <em class="ltx_emph ltx_font_italic" id="bib.bib68.1.1">2018 13th International Conference on Design & Technology of Integrated Systems In Nanoscale Era (DTIS)</em>, 2018, pp. 1–2. </span> </li> <li class="ltx_bibitem" id="bib.bib69"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Masamba et al. [2022a]</span> <span class="ltx_bibblock"> N. Masamba, K. Eder, and T. 
Blackmore, “Hybrid intelligent testing in simulation-based verification,” in <em class="ltx_emph ltx_font_italic" id="bib.bib69.1.1">2022 IEEE International Conference On Artificial Intelligence Testing (AITest)</em>. IEEE, 8 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib70"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Masamba et al. [2022b]</span> <span class="ltx_bibblock"> N. Masamba, K. Eder, and T. Blackmore, “Supervised learning for coverage-directed test selection in simulation-based verification,” in <em class="ltx_emph ltx_font_italic" id="bib.bib70.1.1">2022 IEEE International Conference On Artificial Intelligence Testing (AITest)</em>. IEEE, 8 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib71"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Molina and Cadenas [2007]</span> <span class="ltx_bibblock"> A. Molina and O. Cadenas, “Functional verification: Approaches and challenges,” <em class="ltx_emph ltx_font_italic" id="bib.bib71.1.1">Latin American applied research</em>, vol. 37, pp. 65–69, 2007. </span> </li> <li class="ltx_bibitem" id="bib.bib72"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Mondol et al. [2024]</span> <span class="ltx_bibblock"> N. N. Mondol, A. Vafei, K. Z. Azar, F. Farahmandi, and M. Tehranipoor, “Rl-tpg: Automated pre-silicon security verification through reinforcement learning-based test pattern generation,” in <em class="ltx_emph ltx_font_italic" id="bib.bib72.1.1">2024 Design, Automation & Test in Europe Conference & Exhibition (DATE)</em>, 2024, pp. 1–6. </span> </li> <li class="ltx_bibitem" id="bib.bib73"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Myeongwhan [2022]</span> <span class="ltx_bibblock"> H. H. A. J. K. Y. K. D. K. M. J. Myeongwhan, “Pss action sequence modeling using machine learning,” in <em class="ltx_emph ltx_font_italic" id="bib.bib73.1.1">Proceedings of the design and verification conference and exhibition US (DVCon)</em>, 2022. 
</span> </li> <li class="ltx_bibitem" id="bib.bib74"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Nazi et al. [2022]</span> <span class="ltx_bibblock"> A. Nazi, Q. Huang, H. Shojaei, H. A. Esfeden, A. Mirhosseini, and R. Ho, “Adaptive test generation for fast functional coverage closure,” <em class="ltx_emph ltx_font_italic" id="bib.bib74.1.1">DVCON USA</em>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib75"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Ohana [2023]</span> <span class="ltx_bibblock"> E. Ohana, “Closing functional coverage with deep reinforcement learning: A compression encoder example,” in <em class="ltx_emph ltx_font_italic" id="bib.bib75.1.1">Proceedings of the DVCon US 2023 Conference, San Jose, California. https://dvcon-proceedings.org</em>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib76"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Parthasarathy et al. [2022]</span> <span class="ltx_bibblock"> G. Parthasarathy, A. Rushdi, P. Choudhary, S. Nanda, M. Evans, H. Gunasekara, and S. Rajakumar, “Rtl regression test selection using machine learning,” in <em class="ltx_emph ltx_font_italic" id="bib.bib76.1.1">2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)</em>, 2022, pp. 281–287. </span> </li> <li class="ltx_bibitem" id="bib.bib77"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Peter et al. [2007]</span> <span class="ltx_bibblock"> K. Eder, P. Flach, and H.-W. Hsueh, “Towards automating simulation-based design verification using ilp,” in <em class="ltx_emph ltx_font_italic" id="bib.bib77.1.1">Inductive Logic Programming</em>, S. Muggleton, R. Otero, and A. Tamaddoni-Nezhad, Eds. Springer Berlin Heidelberg, 2007, pp. 154–168. </span> </li> <li class="ltx_bibitem" id="bib.bib78"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Pfeifer et al. [2020]</span> <span class="ltx_bibblock"> N. Pfeifer, B. V. Zimpel, G. A. G. Andrade, and L. C. V.
dos Santos, “A reinforcement learning approach to directed test generation for shared memory verification,” in <em class="ltx_emph ltx_font_italic" id="bib.bib78.1.1">2020 Design, Automation & Test in Europe Conference & Exhibition (DATE)</em>, 2020, pp. 538–543. </span> </li> <li class="ltx_bibitem" id="bib.bib79"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Phogtat and Hamilton [2024]</span> <span class="ltx_bibblock"> Y. Phogtat and P. Hamilton, “ml: Shrinking the verification volume using machine learning,” in <em class="ltx_emph ltx_font_italic" id="bib.bib79.1.1">DVCon</em>, vol. 126. Springer Science and Business Media Deutschland GmbH, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib80"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Piziali [2007]</span> <span class="ltx_bibblock"> A. Piziali, <em class="ltx_emph ltx_font_italic" id="bib.bib80.1.1">Functional verification coverage measurement and analysis</em>. Springer Science & Business Media, 2007. </span> </li> <li class="ltx_bibitem" id="bib.bib81"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Qiu et al. [2024]</span> <span class="ltx_bibblock"> R. Qiu, G. L. Zhang, R. Drechsler, U. Schlichtmann, and B. Li, “Autobench: Automatic testbench generation and evaluation using llms for hdl design,” in <em class="ltx_emph ltx_font_italic" id="bib.bib81.1.1">Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD</em>. Association for Computing Machinery, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib82"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Ravotto et al. [2008]</span> <span class="ltx_bibblock"> D. Ravotto, E. Sanchez, M. Schillaci, and G. Squillero, “An evolutionary methodology for test generation for peripheral cores via dynamic fsm extraction,” in <em class="ltx_emph ltx_font_italic" id="bib.bib82.1.1">Applications of Evolutionary Computing</em>, M. Giacobini, A. Brabazon, S. Cagnoni, G. A. D. Caro, R. Drechsler, A. 
Ekárt, A. I. Esparcia-Alcázar, M. Farooq, A. Fink, J. McCormack, M. O’Neill, J. Romero, F. Rothlauf, G. Squillero, A. Şima Uyar, and S. Yang, Eds. Springer Berlin Heidelberg, 2008, pp. 214–223. </span> </li> <li class="ltx_bibitem" id="bib.bib83"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Romero et al. [2009]</span> <span class="ltx_bibblock"> E. Romero, R. Acosta, M. Strum, and W. J. Chau, “Support vector machine coverage driven verification for communication cores,” in <em class="ltx_emph ltx_font_italic" id="bib.bib83.1.1">2009 17th IFIP International Conference on Very Large Scale Integration (VLSI-SoC)</em>, 2009, pp. 147–152. </span> </li> <li class="ltx_bibitem" id="bib.bib84"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Roy et al. [2018]</span> <span class="ltx_bibblock"> R. Roy, C. Duvedi, S. Godil, and M. Williams, “Deep predictive coverage collection,” in <em class="ltx_emph ltx_font_italic" id="bib.bib84.1.1">Proceedings of the design and verification conference and exhibition US (DVCon)</em>, 2018. </span> </li> <li class="ltx_bibitem" id="bib.bib85"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Ruep and Große [2022]</span> <span class="ltx_bibblock"> K. Ruep and D. Große, “Spinalfuzz: Coverage-guided fuzzing for spinalhdl designs,” in <em class="ltx_emph ltx_font_italic" id="bib.bib85.1.1">2022 IEEE European Test Symposium (ETS)</em>, 2022, pp. 1–4. </span> </li> <li class="ltx_bibitem" id="bib.bib86"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Samarah et al. [2006]</span> <span class="ltx_bibblock"> A. Samarah, A. Habibi, S. Tahar, and N. Kharma, “Automated coverage directed test generation using a cell-based genetic algorithm,” in <em class="ltx_emph ltx_font_italic" id="bib.bib86.1.1">2006 IEEE International High Level Design Validation and Test Workshop</em>, 2006, pp. 19–26. </span> </li> <li class="ltx_bibitem" id="bib.bib87"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Shen et al. 
[2019]</span> <span class="ltx_bibblock"> C.-H. Shen, A. C.-W. Liang, C. C.-H. Hsu, and C. H.-P. Wen, “Fae: Autoencoder-based failure binning of rtl designs for verification and debugging,” in <em class="ltx_emph ltx_font_italic" id="bib.bib87.1.1">2019 IEEE International Test Conference (ITC)</em>, 2019, pp. 1–10. </span> </li> <li class="ltx_bibitem" id="bib.bib88"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Shen et al. [2008]</span> <span class="ltx_bibblock"> H. Shen, W. Wei, Y. Chen, B. Chen, and Q. Guo, “Coverage directed test generation: Godson experience,” in <em class="ltx_emph ltx_font_italic" id="bib.bib88.1.1">2008 17th Asian Test Symposium</em>, 2008, pp. 321–326. </span> </li> <li class="ltx_bibitem" id="bib.bib89"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Shen and Fu [2005]</span> <span class="ltx_bibblock"> H. Shen and Y. Fu, “Priority directed test generation for functional verification using neural networks,” in <em class="ltx_emph ltx_font_italic" id="bib.bib89.1.1">Proceedings of the ASP-DAC 2005. Asia and South Pacific Design Automation Conference, 2005.</em>, vol. 2, 2005, pp. 1052–1055 Vol. 2. </span> </li> <li class="ltx_bibitem" id="bib.bib90"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Shibu et al. [2021]</span> <span class="ltx_bibblock"> A. J. Shibu, S. S, S. N, and P. Kumar, “Verlpy: Python library for verification of digital designs with reinforcement learning,” 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib91"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Silver et al. [2017]</span> <span class="ltx_bibblock"> D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of go without human knowledge,” <em class="ltx_emph ltx_font_italic" id="bib.bib91.1.1">Nature</em>, vol. 550, pp. 354–359, 2017. 
</span> </li> <li class="ltx_bibitem" id="bib.bib92"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Simková and Kotásek [2015]</span> <span class="ltx_bibblock"> M. Simková and Z. Kotásek, “Automation and optimization of coverage-driven verification,” in <em class="ltx_emph ltx_font_italic" id="bib.bib92.1.1">2015 Euromicro Conference on Digital System Design</em>, 2015, pp. 87–94. </span> </li> <li class="ltx_bibitem" id="bib.bib93"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Smith et al. [1997]</span> <span class="ltx_bibblock"> J. E. Smith, M. Bartley, and T. C. Fogarty, “Microprocessor design verification by two-phase evolution of variable length tests,” in <em class="ltx_emph ltx_font_italic" id="bib.bib93.1.1">Proceedings of 1997 IEEE International Conference on Evolutionary Computation (ICEC ’97)</em>, 1997, pp. 453–458. </span> </li> <li class="ltx_bibitem" id="bib.bib94"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Sokorac [2017]</span> <span class="ltx_bibblock"> S. Sokorac, “Optimizing random test constraints using machine learning algorithms,” in <em class="ltx_emph ltx_font_italic" id="bib.bib94.1.1">Proceedings of the design and verification conference and exhibition US (DVCon)</em>, 2017. </span> </li> <li class="ltx_bibitem" id="bib.bib95"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Stefan and Alexandru [2021]</span> <span class="ltx_bibblock"> G. Stefan and D. Alexandru, “Controlling hardware design behavior using python based machine learning algorithms,” in <em class="ltx_emph ltx_font_italic" id="bib.bib95.1.1">2021 16th International Conference on Engineering of Modern Electric Systems (EMES)</em>, 2021, pp. 1–4. </span> </li> <li class="ltx_bibitem" id="bib.bib96"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Stracquadanio et al. [2024]</span> <span class="ltx_bibblock"> G. Stracquadanio, S. Medya, S. Quer, and D. 
Pal, “Veribug: An attention-based framework for bug localization in hardware designs,” in <em class="ltx_emph ltx_font_italic" id="bib.bib96.1.1">2024 Design, Automation & Test in Europe Conference & Exhibition (DATE)</em>, 2024, pp. 1–2. </span> </li> <li class="ltx_bibitem" id="bib.bib97"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Thamarai et al. [2010]</span> <span class="ltx_bibblock"> S. M. Thamarai, K. Kuppusamy, and T. Meyyappan, “Fault based test minimization using genetic algorithm for two stage combinational circuits,” in <em class="ltx_emph ltx_font_italic" id="bib.bib97.1.1">2010 International Conference on Communication Control and Computing Technologies</em>, 2010, pp. 461–464. </span> </li> <li class="ltx_bibitem" id="bib.bib98"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Tweehuysen et al. [2023]</span> <span class="ltx_bibblock"> S. L. Tweehuysen, G. L. A. Adriaans, and M. Gomony, “Stimuli generation for ic design verification using reinforcement learning with an actor-critic model,” in <em class="ltx_emph ltx_font_italic" id="bib.bib98.1.1">2023 IEEE European Test Symposium (ETS)</em>, 2023, pp. 1–4. </span> </li> <li class="ltx_bibitem" id="bib.bib99"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Vangara et al. [2021]</span> <span class="ltx_bibblock"> R. K. M. Vangara, B. Kakani, and S. Vuddanti, “An analytical study on machine learning approaches for simulation-based verification,” in <em class="ltx_emph ltx_font_italic" id="bib.bib99.1.1">2021 IEEE International Conference on Intelligent Systems, Smart and Green Technologies (ICISSGT)</em>, 2021, pp. 197–201. </span> </li> <li class="ltx_bibitem" id="bib.bib100"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Wagner et al. [2005]</span> <span class="ltx_bibblock"> I. Wagner, V. Bertacco, and T.
Austin, “StressTest: an automatic approach to test generation via activity monitors,” in <em class="ltx_emph ltx_font_italic" id="bib.bib100.1.1">Proceedings of the 42nd Annual Design Automation Conference</em>. Association for Computing Machinery, 2005, pp. 783–788. </span> </li> <li class="ltx_bibitem" id="bib.bib101"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Wagner et al. [2007]</span> <span class="ltx_bibblock"> I. Wagner, V. Bertacco, and T. Austin, “Microprocessor verification via feedback-adjusted Markov models,” <em class="ltx_emph ltx_font_italic" id="bib.bib101.1.1">IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems</em>, vol. 26, pp. 1126–1138, 2007. </span> </li> <li class="ltx_bibitem" id="bib.bib102"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Wahba et al. [2019]</span> <span class="ltx_bibblock"> A. Wahba, J. Hohnerlein, and F. Rahman, “Expediting design bug discovery in regressions of x86 processors using machine learning,” in <em class="ltx_emph ltx_font_italic" id="bib.bib102.1.1">2019 20th International Workshop on Microprocessor/SoC Test, Security and Verification (MTV)</em>, 2019, pp. 1–6. </span> </li> <li class="ltx_bibitem" id="bib.bib103"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Wang et al. [2022]</span> <span class="ltx_bibblock"> C.-A. Wang, C.-H. Tseng, C.-C. Tsai, T.-Y. Lee, Y.-H. Chen, C.-H. Yeh, C.-S. Yeh, and C.-T. Lai, “Two-stage framework for corner case stimuli generation using transformer and reinforcement learning,” in <em class="ltx_emph ltx_font_italic" id="bib.bib103.1.1">Proceedings of the Design and Verification Conference and Exhibition, US (DVCon)</em>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib104"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Wang et al. [2018]</span> <span class="ltx_bibblock"> F. Wang, H. Zhu, P. Popli, Y. Xiao, P. Bogdan, and S.
Nazarian, “Accelerating coverage directed test generation for functional verification: A neural network-based framework,” in <em class="ltx_emph ltx_font_italic" id="bib.bib104.1.1">Proceedings of the 2018 Great Lakes Symposium on VLSI</em>. Association for Computing Machinery, 2018, pp. 207–212. </span> </li> <li class="ltx_bibitem" id="bib.bib105"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Hsueh and Eder [2006]</span> <span class="ltx_bibblock"> H.-W. Hsueh and K. Eder, “Test directive generation for functional coverage closure using inductive logic programming,” in <em class="ltx_emph ltx_font_italic" id="bib.bib105.1.1">2006 IEEE International High Level Design Validation and Test Workshop</em>, 2006, pp. 11–18. </span> </li> <li class="ltx_bibitem" id="bib.bib106"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Wu et al. [2024]</span> <span class="ltx_bibblock"> N. Wu, Y. Li, H. Yang, H. Chen, S. Dai, C. Hao, C. Yu, and Y. Xie, “Survey of machine learning for software-assisted hardware design verification: Past, present, and prospect,” <em class="ltx_emph ltx_font_italic" id="bib.bib106.1.1">ACM Trans. Des. Autom. Electron. Syst.</em>, vol. 29, 6 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib107"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Xia et al. [2024]</span> <span class="ltx_bibblock"> S. Xia, Y. Zhang, Z. Wang, R. Ding, H. Cui, and X. Chen, “An approach to enhance the efficiency of RISC-V verification using intelligent algorithms,” in <em class="ltx_emph ltx_font_italic" id="bib.bib107.1.1">2024 IEEE 7th International Conference on Electronic Information and Communication Technology (ICEICT)</em>, 2024, pp. 419–423. </span> </li> <li class="ltx_bibitem" id="bib.bib108"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Yang et al. [2013]</span> <span class="ltx_bibblock"> Y.-C. Yang, C.-Y. Wang, C.-Y. Huang, and Y.-C.
Chen, “Pattern generation for mutation analysis using genetic algorithms,” in <em class="ltx_emph ltx_font_italic" id="bib.bib108.1.1">2013 IEEE International Symposium on Circuits and Systems (ISCAS)</em>, 2013, pp. 2545–2548. </span> </li> <li class="ltx_bibitem" id="bib.bib109"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Yasaei et al. [2021]</span> <span class="ltx_bibblock"> R. Yasaei, S.-Y. Yu, and M. A. A. Faruque, “GNN4TJ: Graph neural networks for hardware Trojan detection at register transfer level,” in <em class="ltx_emph ltx_font_italic" id="bib.bib109.1.1">2021 Design, Automation & Test in Europe Conference & Exhibition (DATE)</em>, 2021, pp. 1504–1509. </span> </li> <li class="ltx_bibitem" id="bib.bib110"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Yu et al. [2023]</span> <span class="ltx_bibblock"> D. Yu, H. Foster, and T. Fitzpatrick, “A survey of machine learning applications in functional verification,” <em class="ltx_emph ltx_font_italic" id="bib.bib110.1.1">DVCon US</em>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib111"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Yu et al. [2002]</span> <span class="ltx_bibblock"> X. Yu, A. Fin, F. Fummi, and E. M. Rudnick, “A genetic testing framework for digital integrated circuits,” in <em class="ltx_emph ltx_font_italic" id="bib.bib111.1.1">14th IEEE International Conference on Tools with Artificial Intelligence, 2002. (ICTAI 2002). Proceedings.</em>, 2002, pp. 521–526. </span> </li> <li class="ltx_bibitem" id="bib.bib112"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Yuan et al. [2006]</span> <span class="ltx_bibblock"> J. Yuan, C. Pixley, and A. Aziz, <em class="ltx_emph ltx_font_italic" id="bib.bib112.1.1">Constraint-Based Verification</em>. Springer-Verlag, 2006. </span> </li> <li class="ltx_bibitem" id="bib.bib113"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Zachariášová et al. [2016]</span> <span class="ltx_bibblock"> M. Zachariášová, Z.
Kotásek, and M. Kekelyová-Beleová, “Regression test suites optimization for application-specific instruction-set processors and their use for dependability analysis,” in <em class="ltx_emph ltx_font_italic" id="bib.bib113.1.1">2016 Euromicro Conference on Digital System Design (DSD)</em>, 2016, pp. 380–387. </span> </li> <li class="ltx_bibitem" id="bib.bib114"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Zennaro et al. [2018]</span> <span class="ltx_bibblock"> E. Zennaro, L. Servadei, K. Devarajegowda, and W. Ecker, “A machine learning approach for area prediction of hardware designs from abstract specifications,” in <em class="ltx_emph ltx_font_italic" id="bib.bib114.1.1">2018 21st Euromicro Conference on Digital System Design (DSD)</em>, 2018, pp. 413–420. </span> </li> <li class="ltx_bibitem" id="bib.bib115"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Zheng et al. [2023]</span> <span class="ltx_bibblock"> X. Zheng, K. Eder, and T. Blackmore, “Using neural networks for novelty-based test selection to accelerate functional coverage closure,” 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib116"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Zheng et al. [2024]</span> <span class="ltx_bibblock"> X. Zheng, T. Blackmore, J. Buckingham, and K. Eder, “Detecting stimuli with novel temporal patterns to accelerate functional coverage closure,” 2024. 
</span> </li> </ul> </section> </article> </div> <footer class="ltx_page_footer"> <div class="ltx_page_logo">Generated on Wed Mar 5 15:05:08 2025 by <a class="ltx_LaTeXML_logo" href="http://dlmf.nist.gov/LaTeXML/"><span style="letter-spacing:-0.2em; margin-right:0.1em;">L<span class="ltx_font_smallcaps" style="position:relative; bottom:2.2pt;">a</span>T<span class="ltx_font_smallcaps" style="font-size:120%;position:relative; bottom:-0.2ex;">e</span></span><span style="font-size:90%; position:relative; bottom:-0.2ex;">XML</span><img alt="Mascot Sammy" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAsAAAAOCAYAAAD5YeaVAAAAAXNSR0IArs4c6QAAAAZiS0dEAP8A/wD/oL2nkwAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9wKExQZLWTEaOUAAAAddEVYdENvbW1lbnQAQ3JlYXRlZCB3aXRoIFRoZSBHSU1Q72QlbgAAAdpJREFUKM9tkL+L2nAARz9fPZNCKFapUn8kyI0e4iRHSR1Kb8ng0lJw6FYHFwv2LwhOpcWxTjeUunYqOmqd6hEoRDhtDWdA8ApRYsSUCDHNt5ul13vz4w0vWCgUnnEc975arX6ORqN3VqtVZbfbTQC4uEHANM3jSqXymFI6yWazP2KxWAXAL9zCUa1Wy2tXVxheKA9YNoR8Pt+aTqe4FVVVvz05O6MBhqUIBGk8Hn8HAOVy+T+XLJfLS4ZhTiRJgqIoVBRFIoric47jPnmeB1mW/9rr9ZpSSn3Lsmir1fJZlqWlUonKsvwWwD8ymc/nXwVBeLjf7xEKhdBut9Hr9WgmkyGEkJwsy5eHG5vN5g0AKIoCAEgkEkin0wQAfN9/cXPdheu6P33fBwB4ngcAcByHJpPJl+fn54mD3Gg0NrquXxeLRQAAwzAYj8cwTZPwPH9/sVg8PXweDAauqqr2cDjEer1GJBLBZDJBs9mE4zjwfZ85lAGg2+06hmGgXq+j3+/DsixYlgVN03a9Xu8jgCNCyIegIAgx13Vfd7vdu+FweG8YRkjXdWy329+dTgeSJD3ieZ7RNO0VAXAPwDEAO5VKndi2fWrb9jWl9Esul6PZbDY9Go1OZ7PZ9z/lyuD3OozU2wAAAABJRU5ErkJggg=="/></a> </div></footer> </div> </body> </html>