Design communities increasingly argue for digital twins of the development process itself, so that AI can optimize resource queues, failure triage, and iteration cadence across teams and tools.
IEEE communities spotlight the hard problems: interpretability; verification, validation, and uncertainty quantification (VVUQ); safety; adversarial robustness; and scalable human-in-the-loop decision-making.
Panelists could explore practical paths to link design-centric twins with yield and metrology data, drawing on International Roadmap for Devices and Systems (IRDS) guidance on lithography, yield enhancement, and sustainability, to reduce respins, tighten process windows, and connect verification results to manufacturing controls.
We’ll map where twins belong in 2.5D/3D and chiplet ecosystems (thermal/mechanical co‑simulation, parametric drift tracking) and what organizational changes (data contracts, governance) are required to deliver measurable ROI in 12–18 months.
Digital twins are increasingly proposed as a way to connect verification evidence, fault behavior, manufacturing data, and process learning into a closed optimization loop. For semiconductor teams, that could mean earlier detection of latent reliability risks, faster root-cause analysis, tighter process control, and better alignment between design assumptions and manufacturing reality, reducing respins and improving yield. This panel examines whether digital twins can deliver measurable ROI and what governance is needed for adoption.
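As a concrete, if simplified, illustration of the closed loop described above, the sketch below tracks an in-line parametric measurement against a design-time process window and flags drift worth feeding back to design and verification teams. All names, units, and thresholds here are hypothetical; this is not any vendor's API.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ProcessWindow:
    """Design-time assumption for one parameter (hypothetical units)."""
    name: str
    lower: float
    upper: float

def drift_alerts(window: ProcessWindow, measurements: list[float],
                 guardband: float = 0.1) -> list[str]:
    """Compare in-line metrology readings against the design-time window.

    Flags both hard violations and a mean that has drifted more than
    `guardband` of the window span from center -- the kind of signal a
    process digital twin would route back to design/verification teams.
    """
    alerts = []
    span = window.upper - window.lower
    for m in measurements:
        if not (window.lower <= m <= window.upper):
            alerts.append(f"{window.name}: reading {m} outside window")
    center = (window.lower + window.upper) / 2
    if abs(mean(measurements) - center) > guardband * span:
        alerts.append(f"{window.name}: mean drifting from window center")
    return alerts

# Toy usage: a threshold-voltage-like parameter with a slow upward drift.
window = ProcessWindow("vth_mV", lower=420.0, upper=480.0)
print(drift_alerts(window, [448.0, 452.0, 457.0, 463.0, 479.0]))
```

In a real deployment the alert would carry provenance (lot, tool, recipe) so the twin can correlate drift with the verification assumptions it invalidates.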
Moderator: Prof. Yeo, Kiat Seng, Singapore University of Technology and Design
Agentic AI and foundation models are rapidly evolving from point assistants into orchestrators of broader design, verification, and implementation workflows. As these systems move beyond prompt-based assistance toward multi-agent task decomposition, optimization, and closure, they are reshaping the compute profile of EDA, driving new demands on cloud scalability, heterogeneous infrastructure, and cost-aware scheduling.
This shift raises new questions about infrastructure scalability, domain specialization, and workflow integration. With perspectives from EDA, cloud infrastructure, academia, and advanced manufacturing, this panel will debate whether the next leap in design productivity will come from better models, tighter coupling to EDA engines, or a new generation of compute-aware design flows that co-optimize AI, EDA algorithms, and compute platforms.
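To make the orchestration idea concrete, here is a minimal sketch of multi-agent task decomposition as a dependency graph. The agent names and stubbed tasks are invented for illustration; a real system would dispatch actual EDA engines rather than these placeholder functions.

```python
# Minimal sketch of agentic task decomposition for a verification flow.
# All task names are hypothetical stubs, not real tool invocations.
from typing import Callable

def lint_rtl(design: str) -> str:
    return f"lint({design}): clean"

def generate_tests(design: str) -> str:
    return f"tests({design}): 128 stimuli generated"

def triage_failures(design: str) -> str:
    return f"triage({design}): 0 open failures"

# The "orchestrator" here is just a task graph: each step runs once its
# dependencies are done, mimicking multi-agent decomposition and closure.
TASKS: dict[str, tuple[list[str], Callable[[str], str]]] = {
    "lint":   ([],        lint_rtl),
    "tests":  (["lint"],  generate_tests),
    "triage": (["tests"], triage_failures),
}

def run(design: str) -> None:
    done: set[str] = set()
    while len(done) < len(TASKS):
        for name, (deps, fn) in TASKS.items():
            if name not in done and all(d in done for d in deps):
                print(fn(design))
                done.add(name)

run("top_soc")
```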
Will cloud-scale compute become essential for next-generation AI-driven EDA workflows, or will hybrid and on-prem strategies remain the more practical path for most design teams?
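One way to frame that question is simple utilization economics. The toy model below uses invented numbers (a hypothetical per-core-hour cloud rate against an amortized on-prem cost per core) to show why the answer flips with sustained utilization.

```python
# Toy break-even arithmetic behind the cloud-vs-on-prem question.
# All prices and utilization figures are invented for illustration.
def monthly_cost_cloud(core_hours: float, rate_per_core_hour: float = 0.05) -> float:
    return core_hours * rate_per_core_hour

def monthly_cost_onprem(cores: int, amortized_per_core_month: float = 25.0) -> float:
    # Fixed cost regardless of utilization (hardware, power, staff).
    return cores * amortized_per_core_month

for utilization in (0.2, 0.5, 0.9):
    core_hours = 1000 * 730 * utilization   # 1000 cores, 730 h/month
    cloud = monthly_cost_cloud(core_hours)
    onprem = monthly_cost_onprem(1000)
    winner = "cloud" if cloud < onprem else "on-prem"
    print(f"utilization {utilization:.0%}: cloud ${cloud:,.0f} "
          f"vs on-prem ${onprem:,.0f} -> {winner}")
```

Under this model, bursty regression workloads favor the cloud while steadily loaded farms favor on-prem, which is one reason hybrid strategies keep coming up.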
Chiplet architectures are moving from concept to deployment, driven by demand for higher performance, faster productization, and heterogeneous integration in AI, HPC, automotive, and advanced edge systems. They promise modularity and faster innovation, but scaling beyond tightly controlled custom designs remains difficult.
While interface standards such as UCIe aim to reduce integration friction, real multi-vendor adoption depends on far more than connectivity: packaging technologies, testability, qualification, manufacturability, and lifecycle management remain critical barriers. With perspectives spanning open platform architecture, microelectronics test, design-technology co-optimization, and certification, this panel will debate whether chiplets are ready to become a reusable ecosystem, or whether the industry still lacks the workflows, assurance layers, and business alignment needed for true interoperability at scale.
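As a small illustration of what such an assurance layer might check, the sketch below compares hypothetical interoperability descriptors for two dies. The fields echo the kinds of parameters a die-to-die standard constrains, but they are invented and are not the actual UCIe data model.

```python
from dataclasses import dataclass

@dataclass
class ChipletDescriptor:
    """Hypothetical interoperability metadata for one die (illustrative only)."""
    name: str
    d2d_standard: str      # free-form label here, e.g. "UCIe-1.1"
    bump_pitch_um: float   # physical bump pitch
    protocol: str          # mapped protocol, e.g. "PCIe", "streaming"

def interoperable(a: ChipletDescriptor, b: ChipletDescriptor) -> list[str]:
    """Return a list of mismatches; an empty list means no blocker found."""
    issues = []
    if a.d2d_standard != b.d2d_standard:
        issues.append(f"standard mismatch: {a.d2d_standard} vs {b.d2d_standard}")
    if abs(a.bump_pitch_um - b.bump_pitch_um) > 1e-6:
        issues.append(f"bump pitch mismatch: {a.bump_pitch_um} vs {b.bump_pitch_um}")
    if a.protocol != b.protocol:
        issues.append(f"protocol mismatch: {a.protocol} vs {b.protocol}")
    return issues

cpu_die = ChipletDescriptor("cpu_die", "UCIe-1.1", 45.0, "PCIe")
io_die  = ChipletDescriptor("io_die",  "UCIe-1.1", 36.0, "PCIe")
print(interoperable(cpu_die, io_die))  # flags the bump pitch mismatch
```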