Logic Synthesis Meets Artificial Intelligence
Invited Speaker: Lei Chen, Huawei, Hong Kong
Abstract: Electronic Design Automation (EDA) plays a fundamental role in modern circuit design, and logic synthesis is a crucial step in the EDA flow. With the advent of artificial intelligence (AI), various AI techniques have been applied to logic synthesis to enhance the efficiency and quality of circuit designs. This talk provides an overview of state-of-the-art AI applications in EDA logic synthesis, covering machine learning, deep learning, reinforcement learning, and large language models (LLMs). We discuss the principles, methodologies, and use cases of each AI approach, along with their strengths, limitations, and recent advancements. Moreover, we analyze the impact of AI and LLMs on logic synthesis, exploring their potential to revolutionize traditional methods and enable the development of advanced, highly optimized circuits. We also highlight challenges and suggest future research directions, emphasizing the promising prospects of AI and LLMs for EDA logic synthesis.
A Novel Structural Choices Generation Method for Logic Restructuring
Presenter: Zhang Hu, Ningbo University
Abstract: Logic restructuring is an efficient method for converting between different logic representations and optimizing logic networks. However, as a mapping-based method, it often suffers from structural bias, which degrades the quality of the restructured logic. We therefore propose a novel structural choices generation method to address this structural bias. The method matches substructures of the network against a database of pre-computed optimal structures to obtain equivalent nodes, efficiently producing many high-quality candidate structures and generating choice networks for different logic representations. Experimental results demonstrate that our method effectively improves the quality of results during logic transformation. In logic optimization, our approach achieves a 67.3\% reduction in logic depth on the test cases, together with a 1.95\% decrease in network size.
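The core matching step described in this abstract can be illustrated with a minimal sketch: small cut functions are canonicalized (here, as truth-table bitmasks) and looked up in a pre-computed database of equivalent implementations. The function names and the tiny database below are illustrative stand-ins, not the authors' actual implementation.

```python
# Hypothetical sketch: match cut functions against a pre-computed
# database of optimal structures to collect structural "choices".

def truth_table(fn, n):
    """Enumerate fn over all n-bit inputs into a canonical bitmask key."""
    key = 0
    for i in range(2 ** n):
        bits = [(i >> b) & 1 for b in range(n)]
        if fn(*bits):
            key |= 1 << i
    return key

# Toy database: truth-table key -> list of equivalent implementations
# as (structure description, gate count).
DATABASE = {
    0b1000: [("AND(a,b)", 1)],
    0b0110: [("XOR(a,b)", 1), ("OR(AND(a,!b),AND(!a,b))", 3)],
    0b1110: [("OR(a,b)", 1)],
}

def choices_for(fn, n=2):
    """Return all equivalent candidate structures for a cut function."""
    return DATABASE.get(truth_table(fn, n), [])
```

In a real flow, the matched candidates would be added as equivalence-class members of the cut's root node, yielding a choice network that downstream mappers can exploit.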
On Accelerating Domain-Specific MC-TS with Knowledge Retention and Efficient Parallelization for Logic Optimization
Presenter: Cunqing Lan, Fudan University
Abstract: Recently, demand for higher quality of results (QoR) in logic optimization has spurred numerous studies on generating logic transformation sequences for specific objectives. Nevertheless, previous works often rely heavily on computing resources or become stuck at local optima. In this work, we propose a logic transformation sequence generator based on a domain-specific Monte Carlo tree search (MC-TS). Our framework utilizes information obtained from synthesis tools as context to guide the tree search. Additionally, we implement the algorithm with knowledge retention and adaptive parallelization for further acceleration. Experiments on open-source benchmarks of various scales show that our framework outperforms the state-of-the-art work, AlphaSyn, with an average 20.11\% efficiency improvement while maintaining comparable effectiveness.
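The tree-search skeleton this abstract refers to (select, expand, simulate, backpropagate over command sequences) can be sketched as follows. The command set and the QoR model are toy stand-ins for a real synthesis tool; only the UCT search structure is the point of the example.

```python
import math
import random

COMMANDS = ["rewrite", "refactor", "balance", "resub"]
DEPTH = 3  # length of the transformation sequence


def qor(seq):
    """Hypothetical QoR model: each distinct command helps once."""
    return len(set(seq)) / DEPTH


class Node:
    def __init__(self, seq=()):
        self.seq, self.children = seq, {}
        self.visits, self.value = 0, 0.0


def uct_child(node, c=1.4):
    """Pick the child maximizing the UCT score."""
    return max(node.children.values(),
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))


def search(iterations=200, seed=0):
    random.seed(seed)
    root = Node()
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend while the node is fully expanded.
        while len(node.seq) < DEPTH and len(node.children) == len(COMMANDS):
            node = uct_child(node)
            path.append(node)
        # Expansion: add one unexplored command.
        if len(node.seq) < DEPTH:
            cmd = random.choice([c for c in COMMANDS
                                 if c not in node.children])
            child = Node(node.seq + (cmd,))
            node.children[cmd] = child
            node = child
            path.append(node)
        # Simulation: random rollout to full depth.
        seq = list(node.seq)
        while len(seq) < DEPTH:
            seq.append(random.choice(COMMANDS))
        reward = qor(seq)
        # Backpropagation along the visited path.
        for n in path:
            n.visits += 1
            n.value += reward
    # Extract the most-visited command sequence.
    best, node = [], root
    while node.children:
        node = max(node.children.values(), key=lambda ch: ch.visits)
        best.append(node.seq[-1])
    return best
```

The paper's domain-specific additions (synthesis-tool context guiding selection, knowledge retention across searches, adaptive parallelization) would replace the uniform rollout and the single-threaded loop above.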
ERL-LS: Accelerating the Optimization of Logic Synthesis with Evolutionary Reinforcement Learning
Presenter: Chenyang Lv, Shanghai Jiao Tong University
Abstract: In electronic design automation (EDA), logic synthesis (LS) converts a high-level description of a circuit into a gate-level netlist, generally using a unified heuristic algorithm to optimize different combinational circuits. Synthesized circuits often perform better, e.g., with smaller area and lower latency. Logic synthesis relies on a series of optimization commands, but the complexity of the synthesis optimization flow grows exponentially with the number of commands used. Machine-learning-based methods, especially reinforcement learning (RL), are widely utilized in LS to efficiently explore customized circuit design spaces. For rapid LS, we propose an Evolutionarily scheduled Reinforcement Learning (ERL) framework, which is compatible with the various agents adopted in prior RL-based LS works. Owing to parallel execution on a multi-core processor, it significantly improves exploration efficiency without losing solution quality. Our experiments show that, on the EPFL benchmark suite executing with 4 cores, our framework with the RL agent from DRiLLS generally achieves a 3.37 times speed-up in reaching the global optimum compared to the corresponding work.
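The evolutionary scheduling idea, independent of the specific RL agent, can be sketched as a small population of optimization sequences evaluated in parallel, with the fitter half seeding the next generation. The fitness model below is a toy stand-in for invoking a real synthesis tool, and the population mechanics are illustrative, not the paper's exact algorithm.

```python
import random
from concurrent.futures import ThreadPoolExecutor

COMMANDS = ["rewrite", "refactor", "balance", "resub"]


def fitness(seq):
    """Toy QoR stand-in: more distinct commands -> higher score."""
    return len(set(seq))


def mutate(seq, rng):
    """Replace one command at a random position."""
    i = rng.randrange(len(seq))
    return seq[:i] + [rng.choice(COMMANDS)] + seq[i + 1:]


def evolve(generations=10, pop_size=8, seq_len=4, workers=4, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(COMMANDS) for _ in range(seq_len)]
           for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            # Evaluate the whole population in parallel.
            scores = list(pool.map(fitness, pop))
            ranked = [s for _, s in sorted(zip(scores, pop),
                                           key=lambda p: -p[0])]
            elite = ranked[: pop_size // 2]
            # Keep the elite; refill by mutating elite members.
            pop = elite + [mutate(rng.choice(elite), rng) for _ in elite]
    return max(pop, key=fitness)
```

In the paper's setting, each fitness evaluation would be a full RL-agent rollout through the synthesis tool, which is what makes multi-core parallel evaluation pay off.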
Multiplication Complexity Optimization based on Quantified Boolean Formulas
Presenter: Jun Zhu, Ningbo University
Abstract: In cryptography and security, minimizing the number of AND gates within logic networks is crucial, as it directly influences the multiplication complexity of circuits. This paper proposes a Quantified Boolean Formulas (QBF) based resynthesis that dynamically constructs local circuits from sub-circuits, with the goal of minimizing the number of AND gates and, secondarily, the number of XOR gates in well-optimized networks. Experimental results on part of the EPFL benchmark suite indicate that the proposed method reduces the number of AND gates by 15.51\% and the number of XOR gates by 3.34\%.
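QBF-based exact resynthesis needs a dedicated solver, but the objective it optimizes can be illustrated directly: among functionally equivalent candidate sub-circuits, prefer the one with fewer AND gates first, then fewer XOR gates. The candidates below are hand-written stand-ins; a QBF formulation would enumerate them implicitly.

```python
from itertools import product

def gate_counts(expr):
    """Count (AND, XOR) operators in a tiny expression string."""
    return expr.count("&"), expr.count("^")

def equivalent(expr_a, expr_b, names):
    """Exhaustively check Boolean equivalence over the given inputs."""
    for bits in product([0, 1], repeat=len(names)):
        env = dict(zip(names, bits))
        if eval(expr_a, {}, env) != eval(expr_b, {}, env):
            return False
    return True

def best_candidate(reference, candidates, names):
    """Pick the equivalent candidate minimizing (AND count, XOR count)."""
    legal = [c for c in candidates if equivalent(reference, c, names)]
    return min(legal, key=gate_counts)

# Distributivity trades one AND for free: a&b ^ a&c == a & (b ^ c).
ref = "(a & b) ^ (a & c)"
cands = ["(a & b) ^ (a & c)", "a & (b ^ c)"]
```

Here `best_candidate(ref, cands, ["a", "b", "c"])` selects the one-AND form, mirroring the multiplication-complexity objective at toy scale.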
Automatic Multi-Parameter Tuning for Logic Synthesis with Reinforcement Learning
Presenter: Zhenghao Cui, Sun Yat-sen University
Abstract: Logic synthesis serves as the intermediate stage between abstract logic and physical implementation. It involves various logic optimization and technology mapping algorithms, which are iteratively applied to the circuit. Multi-parameter tuning for logic synthesis is the process of generating a sequence of logic optimization and technology mapping operators with multiple parameters. The Quality-of-Result (QoR) is significantly impacted by different arrangements of optimization commands and by the parameter selections of technology mapping operators. Today, the order in which algorithms are called is usually determined by heuristics, which become unacceptable over large exploration spaces and test designs. To address this issue, we utilize reinforcement learning (RL) with the proximal policy optimization (PPO) algorithm to train an agent that effectively generates the optimization and mapping sequence. To acquire adequate features for decision-making, we utilize a Graph Isomorphism Network (GIN) with edge-feature aggregation to learn circuit representations and use circuit scalars as state representations for the RL agent. To allow the agent to learn from historical operations, we utilize a Long Short-Term Memory (LSTM) network to uncover the relationships between different operators within a single sequence. Additionally, we address parameter selection for the technology mapping operator by framing it as a multi-class classification problem and training a classifier to identify the optimal parameter. We evaluated our model on the EPFL arithmetic benchmarks. The results show that our model achieves an average improvement of 9.93\% in area and 13.62\% in depth per test design over the greedy algorithm. Furthermore, we achieve a significant improvement of 65.79\% in area and 37.79\% in delay on the largest test design. These findings highlight the potential of our approach to enhance logic synthesis and technology mapping for large circuits.
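The parameter-selection step framed as classification can be sketched minimally: from simple circuit features, predict the best technology-mapping parameter (e.g. a cut size K). The features, labels, and nearest-centroid classifier below are illustrative stand-ins for the paper's learned GIN/LSTM representations and trained classifier.

```python
def centroid(rows):
    """Mean feature vector of a list of equal-length rows."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fit(features, labels):
    """One centroid of feature vectors per parameter class."""
    classes = {}
    for f, y in zip(features, labels):
        classes.setdefault(y, []).append(f)
    return {y: centroid(rows) for y, rows in classes.items()}

def predict(model, feature):
    """Nearest centroid by squared Euclidean distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feature, c))
    return min(model, key=lambda y: dist(model[y]))

# Toy data: (node count, logic depth) -> best mapper parameter K.
X = [(1000, 20), (1200, 25), (80000, 90), (95000, 110)]
y = [4, 4, 6, 6]
model = fit(X, y)
```

Any multi-class classifier fits this slot; the design point the abstract makes is that parameter choice is decoupled from sequence generation and learned separately.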