Customized Routing Optimization Based on Gradient Boost Regressor Model

Reading time: 5 minutes
...

📝 Abstract

In this paper, we discuss the limitations of current electronic-design-automation (EDA) tools and propose a machine learning framework to overcome them and achieve better design quality. We explore how to efficiently extract relevant features and leverage a gradient boost regressor (GBR) model to predict underestimated risky nets (URNs). Customized routing optimizations are applied to the URNs, and results show clear timing improvement and a trend of convergence toward timing closure.


📄 Content

Customized Routing Optimization Based on Gradient Boost Regressor Model

Chen Zheng, Intel Corp., Santa Clara, CA 95134
Grzegorz Kasprowicz, Akm Semiconductor, San Jose, CA 95110
Carol Saunders, Akm Semiconductor, San Jose, CA 95110
Abstract — In this paper, we discuss the limitations of current electronic-design-automation (EDA) tools and propose a machine learning framework to overcome them and achieve better design quality. We explore how to efficiently extract relevant features and leverage a gradient boost regressor (GBR) model to predict underestimated risky nets (URNs). Customized routing optimizations are applied to the URNs, and results show clear timing improvement and a trend of convergence toward timing closure.

Keywords — machine learning, route, physical design, QoR, gradient boost regressor

1. Introduction

Semiconductor technology nodes have been shrinking drastically, following Moore's Law [1]. For example, 7nm chips will be on the market in 2017, and 5nm is in active research. The nanoscale transistor feature size, together with the shrinking minimum metallization width and pitch, brings huge challenges to electronic-design-automation (EDA) tools in the place-and-route space. In the routing stage specifically, extremely dense pin-access requirements, high routing-track congestion, and rigorous design spacing rules make it very difficult to close timing and design-rule checking (DRC). In such cases, a generalized router cannot handle all situations well; customized solutions are needed to address hotspots, critical nets, and similar problems. The effort spent resolving these issues aggravates the burden on designers and significantly hurts turnaround time and design quality. In this paper, we propose a machine learning based framework that customizes routing characteristics on critical nets to improve routing quality. The approach overcomes the limitations of the EDA tool and helps designers achieve better quality of results (QoR).
2. Related Works

Machine learning has been a hot research topic in recent years, with wide applications ranging from image recognition [2] to natural language processing [3]. Its application to integrated circuit design, however, has not been vastly explored. For physical design optimization of the router, most works focus on algorithm improvement [4], new feature integration [5], and design for manufacturability [6] or reliability [7]. Few works discuss applying machine learning to optimize routing on a design. In [8], A. B. Kahng discussed how to utilize machine learning to resolve congestion hotspots and reduce DRC counts. In [9], J. Wuu discussed using machine learning for lithography hotspot detection. These works show some of the potential benefits of using machine learning to improve physical design quality; thus, more exploration of machine learning in physical design is desired to help designs at future technology nodes.

In this paper, we focus on the router limitation caused mainly by miscorrelation between global route and detailed route. In global route, the router preliminarily estimates the resistance-capacitance (RC) delay [10] of a net and roughly estimates its cross-talk noise. Only timing-critical nets are assigned to higher layers or given adjacent shielding to minimize RC delay or cross talk. However, due to the inaccuracy of delay estimation and the unknown detailed-routing result, the global router often misjudges a large number of actually critical nets. We refer to those nets as underestimated risky nets (URNs). Those nets often end up routed on lower metal layers during detailed route, introducing significant delay on critical paths and producing negative slacks that require a substantial amount of manual effort to close timing. This also causes reliability problems, as lower layers have smaller wire widths [11].
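To make the global-route delay estimation concrete, the following sketch shows a first-order (Elmore-style) RC delay model of the kind a global router might use. The function and all unit values here are illustrative assumptions, not taken from the paper; the point is that the same wire length on a lower, narrower layer (higher resistance per micron) yields a much larger delay, which is exactly the layer-misassignment risk described above.

```python
# Hypothetical first-order RC delay estimate (Elmore-style).
# Unit values below are illustrative, not from the paper.

def elmore_delay(wire_length_um, r_per_um, c_per_um, load_cap_ff):
    """First-order RC delay of a uniform wire driving a capacitive load.

    The wire's distributed capacitance sees, on average, half the wire
    resistance; the load capacitance sees the full wire resistance.
    """
    r_wire = r_per_um * wire_length_um      # total wire resistance (ohm)
    c_wire = c_per_um * wire_length_um      # total wire capacitance (fF)
    # Elmore delay: R_wire * (C_wire / 2 + C_load), in ohm * fF
    return r_wire * (0.5 * c_wire + load_cap_ff)

# Same net length; a lower, narrower metal layer has higher R per micron
# and therefore a larger delay -- the misassignment risk the paper targets.
low_layer_delay = elmore_delay(100.0, r_per_um=2.0, c_per_um=0.2, load_cap_ff=5.0)
high_layer_delay = elmore_delay(100.0, r_per_um=0.5, c_per_um=0.2, load_cap_ff=5.0)
assert low_layer_delay > high_layer_delay
```

A real router's estimate is of course more elaborate, but it still relies on assumed per-layer parasitics before detailed routing fixes the actual track and layer, which is the source of the miscorrelation.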
The route engine algorithm could be improved to correlate global route and detailed route better; however, it is difficult to derive an analytical cost function that evaluates the actual net delay, due to the large number of varying parameters and the unknown relationships between each parameter and the evaluated result. Moreover, the router algorithm aims to solve general cases, and globally applied settings may fix some issues while introducing others [12]. For example, setting layer effort to high may fix some layer-misassignment issues but could produce a large number of additional DRC violations. This situation, on the other hand, falls squarely into the machine learning space and can be expected to yield promising results given powerful machine learning algorithms.
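The overall idea of the framework can be sketched as follows. This is not the paper's implementation: the per-net features, the synthetic data, and the 30% underestimation threshold are all invented for illustration. A gradient boost regressor is trained to predict post-detailed-route net delay from global-route-time features, and nets whose predicted delay far exceeds the global-route estimate are flagged as URNs for customized routing optimization.

```python
# Illustrative URN-prediction sketch (features, data, and thresholds are
# hypothetical; in a real flow they would come from the router database).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_nets = 1000
# Hypothetical per-net features available at global-route time.
X = np.column_stack([
    rng.uniform(10, 500, n_nets),   # estimated wire length (um)
    rng.integers(1, 16, n_nets),    # fanout
    rng.uniform(0, 1, n_nets),      # routing-track congestion
    rng.integers(1, 8, n_nets),     # assigned metal layer
    rng.uniform(0, 1, n_nets),      # pin density near the net
])
# Synthetic "detailed-route delay": congestion inflates the real delay,
# which the simple global-route estimate below does not see.
y = 0.02 * X[:, 0] * (1 + 2 * X[:, 2]) + 0.5 * X[:, 1] + rng.normal(0, 1, n_nets)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[:800], y[:800])

# Naive global-route estimate ignores congestion entirely.
gr_estimate = 0.02 * X[800:, 0] + 0.5 * X[800:, 1]
predicted = model.predict(X[800:])
# Flag nets whose learned delay prediction exceeds the estimate by >30%
# (an arbitrary threshold for this sketch) as underestimated risky nets.
urn_mask = predicted > 1.3 * gr_estimate
print(f"{urn_mask.sum()} of {len(urn_mask)} nets flagged as URNs")
```

In the paper's flow, the flagged URNs would then receive customized routing optimization, such as promotion to higher layers or shielding, before or during detailed route.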
3. Feature Extraction

The most important factor in obtaining a successful learning model is the selection of the input feature vector. Irrelevant features can inject a great deal of noise and train the model in a random, wrong direction. Due to technology node

This content is AI-processed from arXiv data.
