Designing advanced chips, such as those at 5nm, now costs approximately $540 million and requires roughly 864 engineer-years of effort, with projections suggesting costs could approach $1 billion for future nodes. As chip complexity soars, generative AI—powered by large language models (LLMs) and AI agents—is transforming the design process, automating tasks, and accelerating timelines from concept to silicon.
Unlike traditional rule-based EDA tools, generative AI uses machine learning, particularly LLMs, to create content such as code, layouts, and testbenches from high-level prompts. Engineers can describe design goals in natural language, and the AI generates RTL code, placement constraints, or verification scripts. For instance, NVIDIA’s ChipNeMo LLM, trained on proprietary design data, assists with GPU architecture queries and code generation, boosting productivity by up to 2×, even for junior engineers. This technology promises to streamline manual tasks, enabling faster, more efficient chip development.
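Conceptually, the natural-language-to-RTL flow starts with a structured prompt that pairs a behavioral description with the module’s interface. The Python sketch below shows a hypothetical prompt builder; the format, function name, and port schema are illustrative assumptions, not ChipNeMo’s or any vendor’s actual interface.

```python
def build_rtl_prompt(module_name, description, ports):
    """Assemble a natural-language prompt asking an LLM to emit Verilog RTL.

    Hypothetical sketch: the prompt layout and port schema are invented
    for illustration; commercial copilots use proprietary interfaces.
    Each port is a (name, direction, bit_width) tuple.
    """
    port_lines = "\n".join(
        f"- {direction} [{width - 1}:0] {name}"
        for name, direction, width in ports
    )
    return (
        f"Write synthesizable Verilog for a module named `{module_name}`.\n"
        f"Behavior: {description}\n"
        f"Ports:\n{port_lines}\n"
        "Return only the Verilog source, no commentary."
    )

# Example: request an 8-entry FIFO (the resulting string would be sent
# to a fine-tuned LLM; the model call itself is omitted here).
prompt = build_rtl_prompt(
    "fifo8",
    "an 8-entry synchronous FIFO with full/empty flags",
    [("data_in", "input", 8), ("data_out", "output", 8)],
)
```

The value of such templates is consistency: a fixed structure lets the model, and downstream lint checks, rely on the spec always naming the module, behavior, and port widths.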
Generative AI is reshaping every stage of chip design and verification:
- Architecture and RTL Generation: AI explores architectural options and converts high-level specifications into RTL. Synopsys’ Copilot aims to translate natural-language requirements into RTL, while Cadence’s Cerebrus reduced die area by 5% and power by over 6% in a MediaTek SoC block by optimizing design choices beyond human capability.
- Power, Performance, and Area (PPA) Optimization: AI tools fine-tune layouts for optimal PPA. Siemens’ Aprisa AI achieves 10% better PPA and 3× faster tapeout, enhancing productivity by 10×. Cadence’s Virtuoso Studio and Cerebrus optimize multi-block and chip-package layouts, improving performance and yield across design corners.
- Placement and Routing: AI automates complex layout tasks, enhancing manufacturability and reducing errors. Cadence’s Allegro X AI optimizes PCB and system-level component placement and critical net routing, allowing designers to explore vast design spaces quickly and identify patterns in chiplet or 3D-IC designs.
- Verification and Debug: AI streamlines verification by generating testbenches, assertions, and debugging failures. ChipAgents’ AI agents produce Verilog RTL and testbenches, boosting debug productivity by ~10×. MooresLabAI creates full UVM testbench environments, cutting verification time. Cadence’s Verisium platform uses AI to triage simulation failures, with Samsung and STMicroelectronics reporting faster bug localization. NVIDIA’s ChipNeMo aids engineers by retrieving technical documents and automating bug tracking.
- EDA Productivity Assistants: Conversational AI “copilots” enhance workflows. Synopsys.ai Copilot answers queries like “How do I optimize speed in Fusion Compiler?” by drawing on manuals and design archives. Microsoft and Intel are integrating Copilot for tasks like formal verification, with Intel noting its potential to generate RTL from natural-language specs, streamlining complex designs.
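Under the hood, tools like Cerebrus and DSO.ai frame PPA optimization as a search over a huge design space. The toy Python sketch below illustrates that framing with an invented analytic cost model and plain random search; real tools use reinforcement learning and score each candidate with full synthesis and place-and-route, not a closed-form formula.

```python
import random

def ppa_cost(clock_ghz, vdd, density):
    """Toy PPA cost model (illustrative only): dynamic power scales with
    V^2 * f, delay shrinks as clock and voltage rise, and area shrinks
    with placement density until congestion penalizes extremes."""
    power = vdd ** 2 * clock_ghz
    delay = 1.0 / (clock_ghz * vdd)
    area = 1.0 / density + 0.5 * density ** 2  # congestion penalty term
    return power + delay + area

def random_search(trials=200, seed=0):
    """Sample candidate (clock, voltage, density) settings and keep the
    cheapest one -- a stand-in for the learned search real tools run."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        cand = (rng.uniform(0.5, 3.0),   # clock frequency (GHz)
                rng.uniform(0.6, 1.2),   # supply voltage (V)
                rng.uniform(0.3, 1.0))   # placement density
        if best is None or ppa_cost(*cand) < ppa_cost(*best):
            best = cand
    return best

baseline = (2.0, 1.0, 0.5)  # a hand-picked starting point
best = random_search()
```

Even this naive search beats the hand-picked baseline on the toy model, which is the core intuition: machines can evaluate far more corners of the design space than an engineer sweeping parameters by hand.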
Major chipmakers are putting these capabilities to work:
- Apple: Hardware SVP Johny Srouji highlights AI’s potential for significant productivity gains, enabling more design work in less time.
- Intel and AMD: Both collaborate with EDA vendors like Synopsys to leverage AI for high-quality RTL generation, with AMD integrating Synopsys.ai for CPU/GPU designs.
- Microsoft: Engineers co-developed Synopsys.ai Copilot to accelerate verification and reduce time from ideation to design.
EDA vendors are also advancing AI integration. Synopsys.ai includes DSO.ai for design-space exploration and Copilot for assistance. Cadence.AI (JedAI) encompasses Cerebrus, Virtuoso Studio, Verisium, and Allegro X AI, delivering measurable gains like 6% power reduction. Siemens’ Aprisa AI and Calibre Vision AI, showcased at DAC 2025, report 10× productivity and halved signoff times, with Solido adding AI to custom IC flows.
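The failure-triage idea behind platforms like Verisium can be illustrated with a toy sketch: normalize each simulation error message into a signature so that failures differing only in timestamps or addresses cluster into one bucket for a single engineer to debug. Real platforms use learned models over waveforms and logs; the regex below, and the sample log lines, are deliberately simple stand-ins.

```python
import re
from collections import defaultdict

def triage(log_lines):
    """Group simulation failures by a normalized error signature.

    Toy stand-in for AI-assisted triage: hex values and numbers are
    masked to "<N>" so the same root cause produces the same bucket
    regardless of timestamp or pointer value.
    """
    buckets = defaultdict(list)
    for line in log_lines:
        signature = re.sub(r"0x[0-9a-fA-F]+|\d+", "<N>", line)
        buckets[signature].append(line)
    return dict(buckets)

# Invented sample log lines for demonstration.
logs = [
    "ERROR @ 1200ns: fifo8 overflow, wr_ptr=0x1f",
    "ERROR @ 3400ns: fifo8 overflow, wr_ptr=0x07",
    "ERROR @ 900ns: axi timeout on id 3",
]
groups = triage(logs)  # two buckets: the overflow pair, and the timeout
```

Collapsing thousands of raw failures into a handful of signatures is what turns an overnight regression into a short morning triage list.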
Startups are innovating rapidly. ChipAgents.ai generates RTL and testbenches from plain-language prompts, achieving 10× faster design cycles. Bronco AI automates debug playbooks, while MooresLabAI produces UVM testbenches. Rise Design Automation’s AI-driven high-level synthesis converts C++/SystemC into RTL, speeding up simulation.
Generative AI delivers remarkable results: Siemens reports 10× productivity and 3× faster tapeouts, while ChipAgents achieves 10× RTL and verification throughput. NVIDIA’s ChipNeMo matches larger LLMs with domain-specific fine-tuning. With the industry needing over 1 million new engineers by 2030, AI tools bridge the talent gap by automating routine tasks, empowering experienced engineers to focus on innovation, and accelerating onboarding for novices.
Challenges include ensuring high-quality training data, protecting IP, and verifying AI outputs. General LLMs require custom fine-tuning for chip design, and “black-box” decisions need careful validation. However, optimism prevails. As Synopsys notes, AI enables faster exploration of design spaces, leading to better decisions. Apple emphasizes that AI-driven EDA tools are critical for productivity.
Generative AI is revolutionizing chip design, embedding LLMs and agents into EDA workflows to automate coding, optimize layouts, and enhance verification. By enabling engineers to work smarter and faster, these tools drive innovation and meet tight schedules. As NVIDIA predicts, fine-tuned LLMs will transform every aspect of chip design, paving the way for a future where complex chips are designed and delivered with unprecedented speed and efficiency.