Please use this identifier to cite or link to this item: http://hdl.handle.net/1783.1/4712

Leakage power modeling and reduction techniques for nanometer scale VLSI circuits

Authors: Au, Yi-ching
Issue Date: 2004
Summary: Minimizing dynamic power consumption in digital circuits has been the primary design objective of most existing low-power design methodologies. Supply voltage reduction is the most effective strategy for minimizing dynamic power because of the quadratic dependence of dynamic power on the supply voltage; the drawback of such an aggressive strategy is the delay penalty. With the continued shrinking of process feature sizes, a lower supply voltage can be used to reduce the dynamic power consumption. However, the threshold voltage of the transistor must be scaled down as well to maintain circuit performance, and scaling down the threshold voltage causes an exponential increase in sub-threshold conduction power. For early submicron process technologies with feature sizes larger than 0.18μm, the sub-threshold leakage power is not very significant compared with the dynamic power consumption. In the new nanometer-scale processes with device dimensions below 0.10μm, however, the leakage power at high ambient temperature has been shown to be of the same order of magnitude as the dynamic power. As a result, leakage power reduction techniques have become very important and an active research topic. Most leakage reduction techniques target the circuit design level; examples include Multiple Threshold CMOS (MTCMOS) [1], Variable Threshold CMOS (VTCMOS) [2], and Dynamic Threshold MOSFET (DTMOS) [3]. However, most of the proposed circuit techniques suffer from a severe delay penalty, with the exception of Johnson's work [4]. He observed that a self-reverse bias develops when two or more "off" transistors are stacked in a single series path, which reduces the leakage power drastically. Based on Johnson's observation, it can be shown that the leakage power of a logic network is a function of its primary input vector.
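The quadratic and exponential dependences described above can be sketched with first-order textbook models. The parameter values below (activity factor, load capacitance, sub-threshold slope factor, thermal voltage) are illustrative assumptions, not figures from the thesis:

```python
import math

def dynamic_power(alpha, c_load, vdd, freq):
    """First-order dynamic power model: P = alpha * C * Vdd^2 * f."""
    return alpha * c_load * vdd ** 2 * freq

def subthreshold_leakage(i0, vth, n=1.5, v_t=0.026):
    """First-order sub-threshold current model: I = I0 * exp(-Vth / (n * v_T)).
    n is the slope factor, v_t the thermal voltage at room temperature."""
    return i0 * math.exp(-vth / (n * v_t))

# Halving Vdd cuts dynamic power to a quarter (quadratic effect).
p_hi = dynamic_power(alpha=0.2, c_load=1e-12, vdd=1.8, freq=200e6)
p_lo = dynamic_power(alpha=0.2, c_load=1e-12, vdd=0.9, freq=200e6)
print(p_lo / p_hi)  # → 0.25

# Scaling Vth from 0.45 V down to 0.30 V raises leakage roughly 47x.
i_hi = subthreshold_leakage(i0=1e-7, vth=0.45)
i_lo = subthreshold_leakage(i0=1e-7, vth=0.30)
print(i_lo / i_hi)
```

The two ratios make the trade-off concrete: a modest threshold-voltage reduction taken to recover speed after voltage scaling multiplies sub-threshold leakage by more than an order of magnitude.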
In fact, some previous work has addressed finding the least-leakage vector at the primary inputs to reduce the leakage power during standby time [5, 6, 7, 8, 9, 10]. However, these techniques operate on the synthesized Boolean network (i.e. the final implementation of the Boolean function). This may not achieve an overall optimal design, as it limits the choice of gates available for minimizing the leakage power. Besides, the characteristics of the reactive computing power consumption are ignored as well. In addition, the ambient temperature profile is assumed to be fixed in most recent research works. Under burst operation, however, the ambient temperature varies whenever the operation mode switches; if this effect is not considered in the design phase, the logic synthesis solution may be suboptimal. To achieve the maximum total power reduction, both the temperature transient effect and the operating duty cycle are taken into account during logic synthesis to find the optimal design. In particular, we propose an input-vector-assisted technology mapping algorithm together with a modified power cost function that accounts for these two factors. Experimental results show that significant savings in total power consumption (i.e. the sum of the dynamic and the leakage power) can be achieved.
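The input-vector dependence of leakage exploited by the least-leakage-vector works [5-10] can be illustrated with a toy exhaustive search. The netlist, the per-gate leakage tables, and all numbers below are hypothetical, and this sketch is not the technology mapping algorithm proposed in the thesis:

```python
from itertools import product

# Hypothetical per-gate leakage tables (arbitrary units). Leakage depends on
# the gate's input values because of the transistor stacking effect: input
# combinations that stack several "off" transistors in series leak least.
NAND2_LEAK = {(0, 0): 1, (0, 1): 4, (1, 0): 3, (1, 1): 9}
INV_LEAK = {(0,): 5, (1,): 8}

def nand(a, b):
    return 1 - (a & b)

def circuit_leakage(a, b, c):
    """Toy netlist: n1 = NAND(a, b); n2 = NAND(n1, c); out = INV(n2).
    Total standby leakage is the sum of each gate's state-dependent leakage."""
    n1 = nand(a, b)
    n2 = nand(n1, c)
    return NAND2_LEAK[(a, b)] + NAND2_LEAK[(n1, c)] + INV_LEAK[(n2,)]

# Exhaustive search over all primary input vectors (feasible only for a
# handful of inputs; the cited works use heuristics for larger circuits).
best = min(product([0, 1], repeat=3), key=lambda v: circuit_leakage(*v))
print(best, circuit_leakage(*best))  # → (0, 0, 0) 12
```

Applying the winning vector to the primary inputs during standby, e.g. via latches at the circuit boundary, then holds every internal gate in its lowest-leakage state.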
Note: Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2004
Language: English
Format: Thesis