Artificial-intelligence (AI) applications have become pervasive, and among them computer-vision (CV) applications have attracted particularly strong interest. Hardware implementation of CV processors requires a high-performance yet low-power image detector. A key to energy efficiency lies in analog-to-digital conversion, where the output of the imaging detector is transferred to the digital domain so that CV algorithms can be performed on the data. In this paper, analog-to-digital converter (ADC) architectures are compared, and an example ADC design is proposed that achieves both good performance and low power consumption.
WITH the advent of artificial intelligence (AI), many new applications have been proposed. Among AI applications, CV applications have attracted particular attention [1]-[5], as they can assist people with many routine tasks in daily life, such as face detection, pose estimation [2], and image processing [3].
AI algorithms have evolved at a rapid pace over the past few years. In 2012, AlexNet [1] was proposed and proved to be a major step forward for both deep learning and AI. With the success of deep learning, it is now being applied to all kinds of applications in daily life. However, the hardware side of AI and deep learning has received comparatively little attention, even though it is equally important.
CV algorithms need to run in everyday settings, and embedded systems (such as mobile phones or wearable devices) are an ideal platform for such daily use. However, common CV algorithms impose a heavy computational load.
For example, hand-object interaction is commonly used in augmented-reality applications. A new and efficient hand-object interaction algorithm that provides hand segmentation from a depth map was proposed in [2], where bilateral filtering is used for data processing and excellent classification results are achieved. However, the algorithm runs on a powerful server with a GPU, which is not applicable to wearable devices such as HoloLens.
To be deployed in wearable devices, vision processors must be used. Compared with a general-purpose GPU, a vision processor is dedicated to CV tasks, and a specially designed ASIC can serve as the vision processor, providing energy-efficient operation.
A typical vision processor is composed of both analog and digital parts, as shown in Fig. 1. The image sensor captures the scene and generates an analog signal. To perform CV algorithms in the digital domain, this analog signal must first be converted by an analog-to-digital converter (ADC). A key metric of the ADC is its power consumption: to improve the energy efficiency of the vision processor, the ADC must consume little power.
CMOS technology has been evolving at a rapid pace and can be used to implement many high-precision, high-speed circuits [6][7]. To achieve an efficient vision system, the CMOS front-end sensor array must also be low power, which in turn requires an efficient ADC as the front-end sensing block that bridges the analog signal and the digital signal-processing world. In this section, we compare the three most common ADC architectures [8].
The first category of ADC is the sigma-delta (SD) ADC, shown in Fig. 2. The SD ADC uses an SD modulator, which trades oversampling for SNR enhancement, so very high resolution can be achieved. However, because of the oversampling ratio, the effective Nyquist-rate throughput of an SD ADC is usually not high.
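To make the oversampling trade-off concrete, the short Python sketch below evaluates the standard textbook approximation for the ideal in-band SQNR of an L-th-order SD modulator; the helper function and example numbers are illustrative only and are not part of the proposed design.

# Illustrative sketch (not from this work): ideal SQNR of an L-th-order
# sigma-delta modulator with an N-bit internal quantizer versus the
# oversampling ratio (OSR), using the standard textbook approximation.
import math

def ideal_sd_sqnr_db(n_bits, order, osr):
    """SQNR ~ 6.02*N + 1.76 - 10*log10(pi^(2L)/(2L+1)) + (2L+1)*10*log10(OSR)."""
    return (6.02 * n_bits + 1.76
            - 10 * math.log10(math.pi ** (2 * order) / (2 * order + 1))
            + (2 * order + 1) * 10 * math.log10(osr))

# Example: a 2nd-order, 1-bit modulator needs a large OSR for high resolution,
# which pushes the internal clock far above the Nyquist rate of the signal.
for osr in (16, 64, 256):
    print(osr, round(ideal_sd_sqnr_db(1, 2, osr), 1), "dB")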
For image-sensor applications, many sensor pixels usually time-share the same ADC, and each ADC performs A/D conversion in a time-interleaved manner. Therefore, each ADC needs to sample at a moderate speed with low power. Although a single SD ADC can serve one image-sensor pixel at low power, implementing many SD ADCs for the pixel array, or one fast SD ADC to be time-shared, would consume too much power.
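As a rough illustration of this readout constraint (the array size, frame rate, and ADC count below are assumed example values, not figures from this work), the following sketch estimates the conversion rate each shared ADC must sustain in a time-interleaved readout.

# Back-of-envelope sketch with illustrative numbers: the conversion rate each
# shared ADC must sustain when an image-sensor array is time-interleaved onto
# a bank of column ADCs.
def per_adc_rate(rows, cols, frame_rate_hz, num_adcs):
    total_conversions = rows * cols * frame_rate_hz   # one sample per pixel per frame
    return total_conversions / num_adcs               # conversions per second per ADC

# e.g. a VGA array at 30 fps shared across 640 column ADCs -> ~14.4 kS/s each
print(per_adc_rate(rows=480, cols=640, frame_rate_hz=30, num_adcs=640))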
The second category of ADC is the flash ADC (Fig. 3). The flash ADC achieves the highest sampling rate of all ADC categories, but its resolution is limited by mismatch in the resistor ladder. Mismatch trimming (e.g., with a laser) can be applied to enhance resolution, but the cost is too high.

The third architecture is the pipelined ADC (Fig. 4). The pipelined structure is one of the typical architectures and has been widely implemented in ADC designs; low-power, medium-resolution (7-10 bit), medium-speed (20 MHz-200 MHz) pipelined ADCs have been used in many applications. A pipelined ADC uses multiple stages: each stage converts the input analog signal into a coarse digital code, amplifies the residual analog signal, and sends it to the next stage. The main issues with the pipelined ADC are its power consumption and conversion error. Regarding power, since an amplifier is used in each stage, the total power consumption can be high. Regarding conversion error, mismatch between stages can introduce significant resolution degradation and nonlinearity.
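The stage-by-stage operation described above can be illustrated with a minimal behavioral model; the 1.5-bit-per-stage topology, ideal amplifiers, and numerical values assumed below are for illustration only and do not represent the proposed design.

# Minimal behavioral sketch of a 1.5-bit-per-stage pipelined ADC (assumed
# stage topology; the text does not fix one).  Each stage resolves a coarse
# code, subtracts the corresponding DAC level, and amplifies the residue by 2
# for the next stage.  Amplifiers and comparators are ideal here, so the
# gain-error and mismatch effects discussed in the text are not modeled.
def pipeline_adc(vin, n_stages=10, vref=1.0):
    residue = vin
    raw_codes = []
    for _ in range(n_stages):
        # 1.5-bit sub-ADC: thresholds at +/- Vref/4 give codes -1, 0, +1
        if residue > vref / 4:
            d = 1
        elif residue < -vref / 4:
            d = -1
        else:
            d = 0
        raw_codes.append(d)
        # MDAC: subtract the coarse estimate and amplify the residue by 2
        residue = 2 * residue - d * vref

    # Digital alignment: weight stage i by 2^-(i+1) to rebuild the estimate
    return sum(d * vref / 2 ** (i + 1) for i, d in enumerate(raw_codes))

print(pipeline_adc(0.3))   # ~0.2998, i.e. 0.3 to within the LSB of 10 stages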
Fortunately, these two problems with the pipelined ADC can be solved by design techniques. Various power-reduction techniques have been developed for pipelined ADCs, such as gain calibration for the sample-and-hold amplifier, flash-ADC-based MSB quantization, and removal of the front-end sample-and-hold amplifier [9].
A group of calibration techniques has been developed to compensate the most significant MDAC gain errors. In [10][11], a reference ADC was used to calibrate a single nonlinear MDAC by estimating its 3rd-order harmonic term. Complicated adaptation algorithms, such as least-mean-square (LMS), are however needed to estimate the MDAC parameters.
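As a toy illustration of the flavor of such LMS adaptation (the simple cubic error model and all variable names below are assumptions, not the actual scheme of [10][11]), a single distortion coefficient can be adapted against observations from a reference path as follows.

# Toy sketch of LMS-style estimation of a 3rd-order MDAC error term, in the
# spirit of the calibration in [10][11] but heavily simplified: the error
# model y = x + a3*x^3 and all names here are illustrative assumptions.
import random

a3_true = 0.02          # "unknown" cubic distortion of the MDAC under test
a3_hat = 0.0            # coefficient being adapted
mu = 0.05               # LMS step size

for _ in range(20000):
    x = random.uniform(-1.0, 1.0)        # slow reference-ADC observation of the input
    y = x + a3_true * x ** 3             # distorted MDAC/pipeline output
    err = y - (x + a3_hat * x ** 3)      # residual after the current correction model
    a3_hat += mu * err * x ** 3          # LMS update along the cubic basis

print(round(a3_hat, 4))  # converges toward ~0.02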