Analog circuit optimization is typically framed as black-box search over arbitrary smooth functions, yet device physics constrains performance mappings to structured families: exponential device laws, rational transfer functions, and regime-dependent dynamics. Off-the-shelf Gaussian-process surrogates impose globally smooth, stationary priors that are misaligned with these regime-switching primitives and can severely misfit highly nonlinear circuits at realistic sample sizes (50--100 evaluations). We demonstrate that pre-trained tabular models encoding these primitives enable reliable optimization without per-circuit engineering. Circuit Prior Network (CPN) combines a tabular foundation model (TabPFN v2) with Direct Expected Improvement (DEI), which computes expected improvement exactly under discrete posteriors rather than under Gaussian approximations. Across 6 circuits and 25 baselines, structure-matched priors achieve $R^2 \approx 0.99$ in small-sample regimes where GP-Matérn attains only $R^2 = 0.16$ on Bandgap, and deliver $1.05$--$3.81\times$ higher FoM with $3.34$--$11.89\times$ fewer iterations. These results suggest shifting from hand-crafting surrogate models as priors toward systematic, physics-informed structure identification. Our code will be made publicly available upon paper acceptance.
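To make the DEI step concrete, the sketch below shows one way expected improvement can be evaluated exactly when the surrogate returns a discrete predictive distribution (e.g., the binned output distribution of a tabular foundation model such as TabPFN v2), by summing improvement over the support rather than using the Gaussian closed form. This is a minimal illustration under assumed conventions: the function name `exact_ei_discrete`, the bin-center representation of the posterior, and the maximization setting are not taken from the paper's implementation.

```python
import numpy as np

def exact_ei_discrete(support, probs, incumbent, maximize=True):
    """Exact expected improvement under a discrete posterior.

    support   : (K,) array of candidate outcome values, e.g. bin centers
                of a discretized predictive distribution (assumed form).
    probs     : (K,) array of posterior probabilities over `support`,
                summing to 1.
    incumbent : best objective value observed so far.
    maximize  : True if larger objective values are better.
    """
    support = np.asarray(support, dtype=float)
    probs = np.asarray(probs, dtype=float)
    # Improvement of each support point over the incumbent (0 if worse).
    improvement = (support - incumbent) if maximize else (incumbent - support)
    improvement = np.clip(improvement, 0.0, None)
    # EI is the exact expectation over the discrete support:
    # no Gaussian approximation of the posterior is needed.
    return float(np.sum(probs * improvement))

# Illustrative usage: score two hypothetical candidate designs and
# pick the one with higher expected improvement over the incumbent.
best_so_far = 1.2
ei_a = exact_ei_discrete([0.8, 1.1, 1.5, 1.9], [0.1, 0.3, 0.4, 0.2], best_so_far)
ei_b = exact_ei_discrete([1.0, 1.2, 1.3, 1.4], [0.2, 0.4, 0.3, 0.1], best_so_far)
next_candidate = "A" if ei_a >= ei_b else "B"
```

In a Bayesian-optimization loop, each candidate design would be scored this way against the current incumbent and the highest-EI candidate simulated next; because the expectation is a finite sum, the acquisition value is exact for whatever (possibly multimodal or skewed) predictive distribution the surrogate produces.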