From Wikipedia, the free encyclopedia

"Successive Approximation" redirects here. For behaviorist B.F. Skinner's method of guiding learned behavior, see Shaping (psychology).

A **successive approximation ADC** is a type of analog-to-digital converter that converts a continuous analog waveform into a discrete digital representation via a binary search through all possible quantization levels before finally converging upon a digital output for each conversion.

**Key**

- DAC = Digital-to-Analog converter
- EOC = end of conversion
- SAR = successive approximation register
- S/H = sample and hold circuit
- V_{in} = input voltage
- V_{ref} = reference voltage

The successive approximation analog-to-digital converter circuit typically consists of four chief subcircuits:

- A sample-and-hold circuit to acquire the input voltage (V_{in}).
- An analog voltage comparator that compares V_{in} to the output of the internal DAC and passes the result of the comparison to the successive approximation register (SAR).
- A successive approximation register subcircuit designed to supply an approximate digital code of V_{in} to the internal DAC.
- An internal reference DAC that supplies the comparator with an analog voltage equal to the digital code output of the SAR, scaled by V_{ref}.

The successive approximation register is initialized so that the most significant bit (MSB) is equal to a digital 1. This code is fed into the DAC, which then supplies the analog equivalent of this digital code (V_{ref}/2) into the comparator circuit for comparison with the sampled input voltage. If this analog voltage exceeds V_{in} the comparator causes the SAR to reset this bit; otherwise, the bit is left a 1. Then the next bit is set to 1 and the same test is done, continuing this binary search until every bit in the SAR has been tested. The resulting code is the digital approximation of the sampled input voltage and is finally output by the SAR at the end of the conversion (EOC).
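The bit-testing loop described above can be sketched as a short Python model. This is a hypothetical, idealized converter for a unipolar input between 0 and V_{ref}; the function and variable names are illustrative, not taken from any real part:

```python
def sar_adc(v_in, v_ref, n_bits):
    """Idealized n-bit successive approximation conversion.

    Each bit is tried MSB-first: the bit is tentatively set, the internal
    DAC output for the trial code is compared against the sampled input,
    and the bit is reset if the DAC output exceeds the input.
    """
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        code |= 1 << bit                       # tentatively set this bit to 1
        v_dac = v_ref * code / (1 << n_bits)   # internal DAC output for the trial code
        if v_dac > v_in:                       # comparator: DAC exceeds the input
            code &= ~(1 << bit)                # so reset the bit to 0
    return code                                # EOC: code approximates v_in

# 10-bit conversion of 2.47 V against a 4.096 V reference
print(sar_adc(2.47, 4.096, 10))  # -> 617
```

The loop runs exactly once per bit, which is why the conversion takes n comparator decisions for an n-bit result.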

Mathematically, let V_{in} = xV_{ref}, so x ∈ [−1, 1] is the normalized input voltage. The objective is to approximately digitize x to an accuracy of 1/2^{n}. The algorithm proceeds as follows:

- Initial approximation x_{0} = 0.
- ith approximation x_{i} = x_{i−1} − s(x_{i−1} − x)/2^{i},

where s(x) is the signum function sgn(x) (+1 for x ≥ 0, −1 for x < 0). It follows by mathematical induction that |x_{n} − x| ≤ 1/2^{n}.
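The recurrence and its error bound can be checked numerically. This is a direct sketch of the formulas above, assuming the signum convention just stated:

```python
import random

def s(v):
    """Signum as defined in the text: +1 for v >= 0, -1 for v < 0."""
    return 1.0 if v >= 0 else -1.0

def approximate(x, n):
    """Run n steps of the recurrence x_i = x_{i-1} - s(x_{i-1} - x) / 2**i,
    starting from x_0 = 0."""
    xi = 0.0
    for i in range(1, n + 1):
        xi -= s(xi - x) / 2**i
    return xi

# Check the bound |x_n - x| <= 1/2**n on random normalized inputs in [-1, 1].
random.seed(0)
n = 12
for _ in range(1000):
    x = random.uniform(-1, 1)
    assert abs(approximate(x, n) - x) <= 1 / 2**n
```

Each step halves the search interval, which is the binary search underlying the hardware loop.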

As shown in the above algorithm, a SAR ADC requires:

- An input voltage source V_{in}.
- A reference voltage source V_{ref} to normalize the input.
- A DAC to convert the ith approximation x_{i} to a voltage.
- A comparator to perform the function s(x_{i} − x) by comparing the DAC's voltage with the input voltage.
- A register to store the output of the comparator and apply x_{i−1} − s(x_{i−1} − x)/2^{i}.

One of the most common implementations of the successive approximation ADC, the *charge-redistribution* successive approximation ADC, uses a charge scaling DAC. The charge scaling DAC simply consists of an array of individually switched binary-weighted capacitors. The amount of charge upon each capacitor in the array is used to perform the aforementioned binary search in conjunction with a comparator internal to the DAC and the successive approximation register.

- First, the capacitor array is completely discharged to the offset voltage of the comparator, V_{OS}. This step provides automatic offset cancellation (the offset voltage represents nothing but dead charge which cannot be juggled by the capacitors).
- Next, all of the capacitors within the array are switched to the input signal, *v*_{IN}. The capacitors now have a charge equal to their respective capacitance times the input voltage minus the offset voltage.
- In the third step, the capacitors are switched so that this charge is applied across the comparator's input, creating a comparator input voltage equal to −*v*_{IN}.
- Finally, the actual conversion process proceeds. First, the MSB capacitor is switched to V_{REF}, which corresponds to the full-scale range of the ADC. Due to the binary weighting of the array, the MSB capacitor forms a 1:1 charge divider with the rest of the array, so the input voltage to the comparator is now −*v*_{IN} plus V_{REF}/2. If *v*_{IN} is greater than V_{REF}/2, the comparator outputs a digital 1 as the MSB; otherwise, it outputs a digital 0. Each capacitor is tested in the same manner until the comparator input voltage converges to the offset voltage, or at least as close as possible given the resolution of the DAC.
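A behavioral sketch of the conversion phase can make the charge divider concrete. This model assumes an ideal, offset-free comparator and perfectly binary-weighted capacitors; the names and the unit-capacitor accounting are illustrative:

```python
def charge_redistribution_sar(v_in, v_ref, n_bits):
    """Behavioral model of the charge-redistribution conversion phase.

    After the sampling phases, the comparator input sits at -v_in.
    Switching the capacitor for bit b onto V_REF lifts that node by
    V_REF * 2**b / 2**n_bits (V_REF/2 for the MSB); the bit is kept
    when the node stays at or below zero, i.e. the input still exceeds
    the trial level. Assumes an ideal, offset-free comparator.
    """
    c_total = 2**n_bits          # total array capacitance, in unit capacitors
    v_comp = -v_in               # comparator input voltage after sampling
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        step = v_ref * (2**bit) / c_total
        if v_comp + step <= 0:   # node still at or below zero: keep the bit
            v_comp += step       # leave this capacitor connected to V_REF
            code |= 1 << bit
    return code

print(charge_redistribution_sar(2.47, 4.096, 10))  # -> 617
```

Note that v_comp converges toward zero (the assumed offset) as bits are resolved, mirroring the convergence described above.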

When implemented as an analog circuit, where the value of each successive bit is not perfectly 2^N (e.g. 1.1, 2.12, 4.05, 8.01, etc.), a successive approximation approach may not output the ideal value, because the binary search incorrectly discards what it believes to be the half of the range the unknown input cannot lie in. Depending on the difference between actual and ideal performance, the maximum error can easily exceed several LSBs, especially as the error between the actual and ideal 2^N becomes large for one or more bits. Since the unknown input is, by definition, not known, it is very important that the analog circuit implementing a SAR ADC be very accurate relative to the ideal 2^N values; otherwise, a best-match search cannot be guaranteed.
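This failure mode can be illustrated with a toy model using the non-ideal weights quoted above (a hypothetical 4-bit converter; the comparator decisions use the actual weights, while the output is read as an ordinary binary code):

```python
def sar_with_weights(x, weights):
    """SAR binary search whose bit decisions use the given per-bit
    weights (possibly non-ideal), MSB first; returns the binary code."""
    total = 0.0
    code = 0
    for i, w in enumerate(weights):
        if total + w <= x:       # comparator: trial level does not exceed input
            total += w           # keep the bit set
            code |= 1 << (len(weights) - 1 - i)
    return code

# Ideal weights resolve an input of 8.0 to code 8, but an actual MSB
# weight of 8.01 makes the search wrongly discard the upper half of the
# range, leaving code 7: a full-LSB error from a 0.01 weight mismatch.
print(sar_with_weights(8.0, [8, 4, 2, 1]))             # -> 8
print(sar_with_weights(8.0, [8.01, 4.05, 2.12, 1.1]))  # -> 7
```

Because the discarded half-range is never revisited, a single bad early decision cannot be corrected later, which is what redundancy-based designs (below) address.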

**RECENT IMPROVEMENTS**

- Newer SAR ADCs now include calibration to improve their accuracy from under 10 bits to as much as 18 bits.
- Another recent technique uses a non-binary-weighted DAC and/or redundancy to tolerate non-ideal analog circuits and to improve speed.

**ADVANTAGES**

- The conversion time equals n clock periods for an n-bit ADC, so it is very short. For example, a 10-bit ADC with a clock frequency of 1 MHz completes a conversion in only 10 × 10^{-6} s, i.e. 10 microseconds.
- Conversion time is constant and independent of the amplitude of the analog input signal V_{A}.
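The conversion-time arithmetic in the example above is simply n clock periods divided by the clock frequency:

```python
# Conversion time of an n-bit SAR ADC: n clock periods, independent of V_in.
n_bits = 10
f_clk = 1e6                # 1 MHz clock
t_conv = n_bits / f_clk    # seconds
print(t_conv)              # -> 1e-05, i.e. 10 microseconds
```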
