Session Chair: Bei Yu, The Chinese University of Hong Kong
The synergy between AI/ML and EDA
Invited Speaker: Peter Chun, University of Alberta
Abstract: Machine Learning (ML) has proven phenomenal for "prompting" (ChatGPT), but it has also shown promise for designing computer systems. This interdisciplinary trend requires collaboration and, furthermore, fosters unified methodologies bridging ML and industry applications. My talk presents various examples of these attempts and, drawing on my experience running many collaboration projects, discusses how new features of ML might shed light on what we should focus on. The talk will also premiere Dr. Matthew Taylor's vision (a short video clip) of how AI/ML (RL) can enhance EDA solutions.
LLM for Better Solver and Solver for Better LLM
Invited Speaker: Hui-Ling Zhen, Huawei, Hong Kong
Abstract: Recently, Large Language Models (LLMs) have demonstrated their capability in context understanding, logic reasoning, and answer generation. In this talk, we discuss whether and how LLMs and solvers can be combined for better reasoning and stronger performance. On one hand, to exploit LLMs' capability for code understanding and implementation in software, we propose BetterV, which fine-tunes LLMs on processed domain-specific Verilog datasets and incorporates generative discriminators that guide generation toward particular design demands. To the best of our knowledge, this is the first work to deliver downstream-task-friendly Verilog generation; extensive experiments demonstrate that BetterV outperforms GPT-4 on the VerilogEval-machine benchmark and, compared with other LLMs, reduces netlist node counts in synthesis and verification runtime in SAT solving. We also propose using the LLM as a helper in debugging. On the other hand, to improve the LLM's logic reasoning ability, we introduce a solver as a new layer of the LLM that differentiably guides solutions toward satisfiability. Leveraging MaxSAT as a bridge, we define forward and backward transfer gradients, enabling the final model to converge to a satisfying solution or to prove unsatisfiability. Experiments show that SoLA outperforms existing symbolic solvers (including Z3 and Kissat) and tool-learning methods.
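The abstract's guided-generation idea can be illustrated with a small sketch: a discriminator's per-token scores re-weight the LLM's next-token logits so decoding drifts toward a design demand. This is a hypothetical, simplified illustration with toy inputs, not BetterV's actual implementation; the function names and shapes are assumptions.

```python
# Hypothetical sketch of discriminator-guided decoding, loosely in the spirit of
# generative discriminators steering a Verilog LLM; shapes and names are illustrative.
import numpy as np

def guided_next_token(lm_logits, disc_scores, alpha=1.0):
    """Combine LLM logits with a discriminator's per-token scores.

    lm_logits   : (vocab,) raw next-token logits from the fine-tuned Verilog LLM
    disc_scores : (vocab,) log-scores that appending each token keeps the partial
                  design on track for the downstream demand (e.g. a smaller netlist)
    alpha       : guidance strength
    """
    combined = lm_logits + alpha * disc_scores
    probs = np.exp(combined - combined.max())
    probs /= probs.sum()
    return int(np.argmax(probs)), probs

# Toy usage with random stand-ins for the two models' outputs.
rng = np.random.default_rng(0)
token, probs = guided_next_token(rng.normal(size=32), rng.normal(size=32), alpha=0.5)
print(token, round(float(probs.sum()), 6))
```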
A survey of Reinforcement Learning for Electronic Design Automation
Presenter: Xiangru Chen, IMT Mines Ales
Abstract: With the technological evolution of fabrication processes, the complexity of integrated circuit design has increased accordingly. Engineers are considering integrating machine learning techniques into the Electronic Design Automation (EDA) domain to assist in enhancing product performance and minimizing power consumption in the face of Moore's Law limitations. Reinforcement Learning (RL), as a branch of machine learning, has garnered attention in recent years. This paper provides a brief overview of RL in conjunction with EDA technology, outlining in detail the application scenarios of RL within EDA.
Knowledge Transfer for GaN HEMTs Parameter Extraction Based on Hybrid Model
Presenter: Yangbo Wei, Southeast University
Abstract: Gallium Nitride (GaN) High Electron Mobility Transistors (HEMTs) offer advantages such as a wide bandgap and high electron mobility. Designing a circuit with GaN requires a highly accurate equivalent circuit model, which in turn requires a computationally expensive parameter extraction process. Despite the success of optimization-based parameter extraction methods, they are generally designed for individual cases, leading to inferior performance. To resolve this challenge, we harness modern AI techniques to significantly accelerate this cumbersome process. More specifically, we enhance the conventional optimization-based method with three novel modifications: (1) data-driven calibration of classic initialization methods based on empirical equations; (2) an adaptive search space that refines the search space for faster searching and more accurate solutions; (3) extracted-parameter embedding using deep kernel learning for higher accuracy. The experimental results show that our proposed method reduces the optimization time by 5× and achieves a 1.18× accuracy improvement compared to competing methods.
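The "adaptive search space" step can be pictured with a minimal sketch: sample candidate parameters, then shrink the bounds around the best candidate so later iterations search faster and more precisely. This is an illustrative toy under assumed names, not the authors' extraction code; the objective here is a stand-in for the fitting error against measured device data.

```python
# Illustrative sketch of an adaptive-search-space extraction loop (not the paper's code).
import numpy as np

def extract_parameters(objective, lower, upper, iters=10, samples=200, shrink=0.5, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x, best_f = None, np.inf
    for _ in range(iters):
        cand = rng.uniform(lower, upper, size=(samples, lower.size))
        errs = np.array([objective(x) for x in cand])
        i = int(errs.argmin())
        if errs[i] < best_f:
            best_x, best_f = cand[i], errs[i]
        # Shrink the search box around the current best estimate.
        half = shrink * (upper - lower) / 2
        lower, upper = best_x - half, best_x + half
    return best_x, best_f

# Toy objective standing in for the equivalent-circuit fitting error.
target = np.array([1.5, 0.2, 3.0])
x, f = extract_parameters(lambda p: float(np.sum((p - target) ** 2)),
                          lower=[0, 0, 0], upper=[5, 5, 5])
print(x.round(3), round(f, 6))
```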
Uncertainty quantitative analysis of MEMS sensors based on physical guided deep learning
Presenter: Yifan Zhang, Southeast University
Abstract: A Physical Guided Deep Learning method has been developed for MEMS uncertainty analysis. By constructing a novel loss function for neural network training, physical constraints are added to the deep learning model. Thus, an accurate surrogate model of MEMS sensors can be established with fewer samples, and the hidden features of complex models are captured by multiple hidden layers. Using a vacuum sensor as the validation case, the proposed algorithm produces highly accurate analysis results while requiring only 50% to 70% of the training data needed by traditional deep learning. This technique provides an effective tool for yield analysis and optimal design of MEMS devices.
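The physics-guided loss in this abstract typically takes the form data loss plus a penalty on the violation of a known physical relation. Below is a minimal sketch assuming a generic surrogate f(x; w) and a residual r(x, y) = 0 encoding the constraint; both are toy stand-ins, not the paper's model or its actual loss.

```python
# Minimal sketch of a physics-guided loss: fit the few measured samples while
# penalizing violations of a known physical relation (all functions are toy stand-ins).
import numpy as np

def physics_guided_loss(w, x, y_meas, lam=0.1):
    y_pred = f(x, w)                                    # surrogate prediction
    data_loss = np.mean((y_pred - y_meas) ** 2)         # fit to the samples
    physics_loss = np.mean(residual(x, y_pred) ** 2)    # constraint violation
    return data_loss + lam * physics_loss

# Toy stand-ins: a linear surrogate and a constraint "output scales as 2*x".
def f(x, w):
    return w[0] * x + w[1]

def residual(x, y):
    return y - 2.0 * x

x = np.linspace(0, 1, 20)
y_meas = 2.0 * x + 0.05 * np.random.default_rng(0).normal(size=x.size)
print(physics_guided_loss(np.array([1.8, 0.1]), x, y_meas))
```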
GOMARL: Global Optimization of Multiplier using Multi-Agent Reinforcement Learning
Presenter: Yi Feng, Southeast University
Abstract: In modern computing, multipliers play a crucial role in enhancing the performance and efficiency of various computational tasks. Due to the extensive design space and high circuit complexity of multipliers, optimizing the circuit structure is challenging. Previous research tends to optimize only part of the multiplier circuit, ignoring further global optimization. Recently, reinforcement learning (RL) has shown promise in various fields of electronic design automation (EDA), including digital circuit design. In this paper, inspired by multi-agent reinforcement learning theory, a multi-agent RL (MARL) based framework is proposed. Our main work involves the following aspects: (i) for the compression tree in the multiplier, we design a fine-grained matrix-based representation and a corresponding Q-learning based RL environment; (ii) by combining an existing RL model for the adder with our model for the compression tree, we propose a MARL-based framework in which two agents cooperate to achieve overall optimization of the multiplier in terms of area and delay. Experimental results show that the multipliers optimized by GOMARL improve delay by more than 7% and area by more than 5% compared with baseline designs.
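To make the Q-learning ingredient concrete, here is a minimal tabular sketch of one agent acting on a hashable encoding of the compression-tree state. The state encoding, action set, and reward are illustrative assumptions, not GOMARL's actual environment, where the reward would come from area/delay estimates after synthesis.

```python
# Hypothetical sketch of a tabular Q-learning agent for a compression-tree environment
# (state encoding, actions, and reward are illustrative, not GOMARL's implementation).
import numpy as np
from collections import defaultdict

class CompressionTreeAgent:
    def __init__(self, n_actions, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
        self.q = defaultdict(lambda: np.zeros(n_actions))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = np.random.default_rng(seed)
        self.n_actions = n_actions

    def act(self, state):
        # state: a hashable encoding of the partial-product matrix
        # (e.g. a tuple of per-column compressor counts); epsilon-greedy policy.
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.n_actions))
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        target = reward + self.gamma * np.max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])

agent = CompressionTreeAgent(n_actions=4)
s, s2 = (3, 2, 1), (2, 2, 1)
a = agent.act(s)
agent.update(s, a, reward=-1.0, next_state=s2)
print(agent.q[s])
```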