Panels

Panel 01

Digital Twins for Verification & Manufacturing: Closing the Loop From Model to Fab


Why this topic?

Design communities increasingly argue for digital twins of the development process itself—so AI can optimize resource queues, failure triage, and iteration cadence across teams and tools.

IEEE communities spotlight the hard problems: interpretability; verification, validation, and uncertainty quantification (VVUQ); safety; adversarial robustness; and scalable human‑in‑the‑loop decisions.

Panelists could explore practical paths to link design‑centric twins with yield/metrology data—using International Roadmap for Devices and Systems (IRDS) guidance on lithography, yield enhancement, and sustainability—to reduce respins, tighten process windows, and connect verification results to manufacturing controls.

We’ll map where twins belong in 2.5D/3D and chiplet ecosystems (thermal/mechanical co‑simulation, parametric drift tracking) and what organizational changes (data contracts, governance) are required to deliver measurable ROI in 12–18 months.


Abstract

Digital twins are increasingly proposed as a way to connect verification evidence, fault behavior, manufacturing data, and process learning into a closed optimization loop. For semiconductor teams, that could mean earlier detection of latent reliability risks, faster root-cause analysis, tighter process control, and better alignment between design assumptions and manufacturing reality, which reduces respins and improves yield. This panel examines whether digital twins can deliver measurable ROI and what governance is needed for adoption.


Key Debate Questions (For vs Against)
  1. Are digital twins a practical solution for advanced packaging and chiplet ecosystems, or an overhyped concept with limited scalability?
  2. Does creating effective digital twins require deeper access to process, metrology, and operational data than what the semiconductor ecosystem is realistically willing to share?
  3. Can digital twins truly reduce verification cycles and improve manufacturability, or will they add cost and complexity without clear benefits?
  4. Should the industry focus first on narrow, high-value twins for failure triage, process control, or packaging, or can broader end-to-end twins across verification and manufacturing be made operational in the near term?

Panelists
  • Prof. Hans-Joachim Wunderlich – University of Stuttgart
  • Prof. Yeo Yee Chia – Deputy Chief Executive (Innovation & Enterprise), A*STAR
  • Dr. Prayudi Lianto – Advanced Packaging Development Center, Applied Materials

Panel 02

Agentic AI + Scalable Compute in EDA: From Point Tools to Flow Orchestration


Moderator: Prof. Yeo Kiat Seng, Singapore University of Technology and Design


Why this topic?

Agentic AI and foundation models are rapidly evolving from point assistants into orchestrators of broader design and verification workflows. As these systems move beyond prompt-based assistance toward multi-agent task decomposition, optimization, and closure, they are also reshaping the compute profile of EDA—driving new demands on cloud scalability, heterogeneous infrastructure, and cost-aware scheduling. The next wave of innovation may come not only from better models, but from tighter co-optimization of AI, EDA algorithms, and compute platforms.


Abstract

Agentic AI is evolving from point assistance into a broader orchestration layer across design, verification, and implementation workflows. At the same time, the growing use of foundation models and the development of multi-agent systems are reshaping the compute profile of EDA, raising new questions about infrastructure scalability, domain specialization, and workflow integration. With perspectives from EDA, cloud infrastructure, academia, and advanced manufacturing, this panel will debate whether the next leap in design productivity will come from better models, tighter coupling to EDA engines, or a new generation of compute-aware design flows.


Key Debate Questions (For vs Against)
  1. Will agentic AI deliver scalable end-to-end EDA productivity gains, or will success depend on domain specialization, compute efficiency, and flow integration?
  2. Does agentic AI fundamentally redefine how design teams explore, optimize, and close designs, or is it mainly an incremental acceleration layer on top of existing EDA flows?
  3. Will cloud-scale compute become essential for next-generation AI-driven EDA workflows, or will hybrid and on-prem strategies remain the more practical path for most design teams?
  4. Should the future of AI-driven EDA be built on large general-purpose foundation models, or on smaller domain-specific models tightly coupled to solvers, simulators, and signoff engines?
  5. Should EDA vendors expose orchestration frameworks that optimize both tools and compute resources, or will customers demand open infrastructure and model portability?

Panelists
  • Subramani Kengeri – VP and GM, Systems to Materials, Applied Materials
  • Frank Blaimberger – VP, Global Innovation Lead for Digitized Compliance and Connected Systems
  • Don Chan – VP, Research & Development, Cadence Design Systems, Inc.
  • Akhil Bhaskar – Core Services Leader, Asia Pacific & Japan, AWS

Panel 03

Chiplets at Scale: Can Standards, Packaging, and Qualification Enable a True Multi‑Vendor Ecosystem?


Why this topic?

Chiplet architectures are moving from concept to deployment, driven by demand for higher performance, faster productization, and heterogeneous integration in AI, HPC, automotive, and advanced edge systems. Interface standards such as UCIe aim to reduce integration friction, but real ecosystem adoption depends on far more than connectivity. Packaging technologies, testability, qualification, manufacturability, and lifecycle management remain critical barriers.


Abstract

Chiplet-based architectures promise modularity, faster innovation, and new paths to heterogeneous integration, but scaling beyond tightly controlled custom designs remains difficult. While interface standards such as UCIe aim to reduce integration friction, real multi-vendor deployment also depends on packaging technologies, testability, qualification, manufacturability, and lifecycle management. With perspectives spanning open platform architecture, microelectronics test, design-technology co-optimization, and certification, this panel will debate whether chiplets are ready to become a reusable ecosystem, or whether the industry still lacks the workflows, assurance layers, and business alignment needed for true interoperability at scale.


Key Debate Questions (For vs Against)
  1. Can interface standards such as UCIe realistically enable a plug-and-play multi-vendor chiplet ecosystem, or will interoperability still depend on much deeper qualification, certification, and integration work?
  2. Do chiplets require a fundamentally new co-design workflow across architecture, design, packaging, and qualification, or can today’s semiconductor development processes evolve incrementally to support them?
  3. Will advanced packaging approaches such as hybrid bonding accelerate mainstream chiplet adoption, or will yield, test, and reliability concerns limit them to a narrower set of high-value applications?
  4. Can chiplets become truly reusable open platforms, or do packaging, power delivery, thermal behavior, and process variation make most integrations inherently application-specific?
  5. Will test, validation, and reliability assurance become the real bottleneck for chiplet adoption, even if interface standards continue to mature?

Panelists
  • Prof. Luca Benini – Università di Bologna
  • Prof. Krishnendu Chakrabarty – Arizona State University
  • Prof. Puneet Gupta – UCLA