HeaRT-Based Circuit Automation Engine for Human-Style Adaptive Design Optimization
📝 Abstract
Conventional AI-driven AMS design automation algorithms remain constrained by their reliance on high-quality datasets to capture underlying circuit behavior, poor transferability across architectures, and a lack of adaptive mechanisms. This work proposes HeaRT, a foundational reasoning engine for automation loops and a first step toward intelligent, adaptive, human-style design optimization. HeaRT consistently demonstrates reasoning accuracy >97% and Pass@1 performance >98% across our 40-circuit benchmark repository, even as circuit complexity increases, while operating at <0.5× the real-time token budget of SOTA baselines. Our experiments show that HeaRT yields >3× faster convergence in both sizing and topology design adaptation tasks across diverse optimization approaches, while preserving prior design intent.
📄 Content
Analog and mixed-signal (AMS) circuit design remains challenging to automate due to its fully custom flows, intricate trade-offs in deep sub-micron technologies, and the high cost of re-optimization when specifications change. Conventional Bayesian Optimization (BO) methods [1][2][3] offer strong sample efficiency but fail to scale effectively in complex, high-dimensional design spaces. Recent learning-based approaches, particularly reinforcement learning [4][5][6][7][8][9][10][11][12][13], demonstrate improved scalability for larger circuits, yet suffer from poor sample efficiency and prohibitive simulation costs. Moreover, their reliance on manually encoded design knowledge for circuit partitioning limits autonomy and scalability. Being purely data-driven, these models exhibit an inherent black-box nature [14], hindering their ability to capture circuit intuition or physical causality. Consequently, they often fail to generalize across architectures or incremental design updates, while lacking explainability and eroding designer trust in result quality (Fig. 1(a)).
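To make the cost structure above concrete, the simulation-in-the-loop sizing problem these methods automate can be sketched as follows. This is a minimal, hypothetical illustration: a random-search loop stands in for a BO surrogate or RL policy, and a toy gain/power figure of merit stands in for a real SPICE simulation; none of the constants model an actual process.

```python
import random

def simulate(sizing):
    """Stand-in for a SPICE call: a toy figure of merit rewarding
    gain and penalizing power (not a real device model)."""
    w, l, ibias = sizing["w"], sizing["l"], sizing["ibias"]
    gain = 20.0 * (w / l) ** 0.5   # toy gm-proportional gain
    power = ibias * 1.8            # toy power at a 1.8 V supply
    return gain - 100.0 * power    # scalar figure of merit

def optimize(n_iters=200, seed=0):
    """Random-search stand-in for the BO/RL sizing loop:
    propose a sizing point, evaluate it, keep the best."""
    rng = random.Random(seed)
    best, best_fom = None, float("-inf")
    for _ in range(n_iters):
        sizing = {
            "w": rng.uniform(1e-6, 50e-6),     # device width [m]
            "l": rng.uniform(0.1e-6, 1e-6),    # device length [m]
            "ibias": rng.uniform(1e-6, 1e-3),  # bias current [A]
        }
        fom = simulate(sizing)
        if fom > best_fom:
            best, best_fom = sizing, fom
    return best, best_fom

best_sizing, best_fom = optimize()
```

Each loop iteration costs one (expensive) simulation, which is why sample efficiency dominates the comparison between BO and RL above: the dictionary here has three knobs, but a practical circuit exposes dozens to hundreds, and the number of evaluations needed by naive search grows rapidly with that dimension.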
Most recently, Large Language Model (LLM)-based methods [14][15][16][17][18][19][20][21][22][23][24] have shown great potential in advancing AMS design automation. By leveraging their cognitive reasoning and agentic capabilities, these models can emulate key aspects of human design workflows, offering a promising pathway toward more intelligent and autonomous analog design systems. Yet current vanilla LLM reasoning for AMS design remains largely opaque and often inconsistent, undermining both its credibility and practical effectiveness (Fig. 1(b)). Moreover, existing LLM-aided approaches lack mechanisms to adaptively balance design reuse and redesign within topology-sizing co-optimization. Consequently, they frequently re-optimize entire circuits from scratch when specifications change, leading to catastrophic forgetting [25] of valuable prior design knowledge. In real AMS workflows, many subcircuits are already layout-planned, variation-optimized [26][27][28][29][30], or even silicon-proven, making such full re-optimization impractical. This lack of architectural and contextual awareness causes redundant computation, poor sample efficiency, and inconsistent reliability, ultimately hindering LLM deployability in industrial design flows.

To fully harness the potential of LLMs enriched with human design knowledge for AMS automation, we propose HeaRT, an analytically guided, agentic foundation reasoning framework that enables reasoning-grounded downstream applications. HeaRT draws inspiration from the hierarchical abstraction principles underlying human circuit design, a perspective largely underexplored in current LLM-based approaches. By constructing a human-design-inspired Hierarchical Circuit Reasoning Tree, HeaRT enables efficient, real-time, and context-aware reasoning, producing query-conditioned reasoning traces that improve interpretability and debugging [31] (Fig. 1(c)). Our key contributions are summarized as follows:
• We develop HeaRT, a novel analytically guided, multi-level, agentic reasoning framework that performs top-down KCL- and graph-guided circuit decomposition with hierarchical organization, followed by bottom-up, context-aware knowledge consolidation into a persistent hierarchical knowledge graph that anchors subsequent LLM reasoning.
• We introduce a task-specific, rank-based retrieval mechanism that selects and plugs in suitable topologies for context-aware, performance-driven reconfiguration and sizing refinement, while maintaining electrical correctness.
• We rigorously evaluate HeaRT across a 40-circuit AMS benchmark spanning diverse circuit types and complexity tiers, where it consistently achieves >97% reasoning accuracy and >98% Pass@1, while operating at <0.5× the real-time token budget of existing baselines. HeaRT further delivers over 3× faster convergence on both sizing and topology-adaptation tasks across multiple optimization algorithms, preserving prior design intent under evolving specifications.
• This work pioneers the advancement of AMS design automation frameworks into an intelligent, adaptive, and practical paradigm capable of context-aware decision-making across topology and sizing optimization tasks.
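The two mechanisms in the first contributions can be sketched together: a tree of subcircuit nodes whose knowledge is consolidated bottom-up, plus a rank-based retrieval step that scores candidate topologies against a query's spec priorities. Everything concrete below is invented for illustration (the node fields, the two-stage amplifier decomposition, and the qualitative topology library); it is a sketch of the idea, not HeaRT's actual data model or scoring function.

```python
from dataclasses import dataclass, field

@dataclass
class CircuitNode:
    """One level of the hierarchy: a full circuit, a stage, or a
    primitive subcircuit (e.g. a differential pair)."""
    name: str
    role: str
    children: list = field(default_factory=list)
    knowledge: dict = field(default_factory=dict)

def consolidate(node):
    """Bottom-up knowledge consolidation: each child's facts are merged
    into its parent under a qualified key, so reasoning at any level
    sees the context of the levels below."""
    for child in node.children:
        consolidate(child)
        for key, val in child.knowledge.items():
            node.knowledge[f"{child.name}.{key}"] = val
    return node

def rank_candidates(query_weights, library):
    """Rank-based retrieval stand-in: score each candidate topology by
    a weighted match to the query's spec priorities, best first."""
    def score(entry):
        return sum(query_weights.get(axis, 0.0) * merit
                   for axis, merit in entry.items())
    return sorted(library, key=lambda name: score(library[name]), reverse=True)

# Illustrative two-stage amplifier decomposition (hypothetical).
ota = consolidate(CircuitNode("two_stage_ota", "amplifier", children=[
    CircuitNode("diff_pair", "input gm stage",
                knowledge={"sets": "gain-bandwidth, input-referred noise"}),
    CircuitNode("cs_stage", "output gain stage",
                knowledge={"sets": "output swing, second pole"}),
]))

# Invented qualitative library; the query prioritizes swing over speed.
library = {
    "telescopic_ota":   {"gain": 0.9, "speed": 0.8, "swing": 0.3},
    "folded_cascode":   {"gain": 0.8, "speed": 0.7, "swing": 0.6},
    "two_stage_miller": {"gain": 0.9, "speed": 0.5, "swing": 0.9},
}
ranking = rank_candidates({"gain": 1.0, "swing": 1.0, "speed": 0.2}, library)
```

The design point this sketch captures is why retrieval is anchored to the hierarchy: because each node carries consolidated, qualified knowledge from its subtree, a retrieved topology can be swapped in at one level while the surrounding levels' constraints remain visible, rather than re-optimizing the whole circuit from scratch.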
The emergence of reasoning-enabled LLMs endowed with multimodal understanding and agentic decision-making capabilities [32][33][34][35][36][37][38][39][40] has redefined autonomous problem-solving across diverse domains, motivating their evaluation within EDA. However, LLMs exhibit inherent challenges such as hallucinations [41], instability, and inconsistency, compounded by their lack of traceable reasoning. These issues conflict with the precision, determinism, and verifiability required in circuit design, undermining their credibility for practical deployment [31]. Moreover, data scarcity in the AMS domain [31,42], driven by industrial confidentiality, intellectual property restrictions, and