In signal processing, oversampling is the process of sampling a signal at a sampling frequency significantly higher than the Nyquist rate. Theoretically, a bandwidth-limited signal can be perfectly reconstructed if sampled at or above the Nyquist rate, which is defined as twice the bandwidth of the signal. Oversampling can improve resolution and signal-to-noise ratio, and can help avoid aliasing and phase distortion by relaxing the performance requirements of the anti-aliasing filter.
A signal is said to be oversampled by a factor of N if it is sampled at N times the Nyquist rate.
There are three main reasons for performing oversampling: to improve anti-aliasing performance, to increase resolution and to reduce noise.
Oversampling can make it easier to realize analog anti-aliasing filters.[1] Without oversampling, it is very difficult to implement filters with the sharp cutoff necessary to maximize use of the available bandwidth without exceeding the Nyquist limit. By increasing the bandwidth of the sampling system, design constraints for the anti-aliasing filter may be relaxed.[2] Once sampled, the signal can be digitally filtered and downsampled to the desired sampling frequency. In modern integrated circuit technology, the digital filter associated with this downsampling is easier to implement than a comparable analog filter required by a non-oversampled system.
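The oversample, digitally filter, then downsample chain described above can be sketched as follows (a minimal illustration with hypothetical helper names; a crude moving-average stands in for a real digital filter):

```python
def boxcar_lowpass(x, width):
    # crude moving-average filter standing in for a proper digital filter
    return [sum(x[i:i + width]) / width for i in range(len(x) - width + 1)]

def downsample(x, factor):
    return x[::factor]           # keep every Nth sample

fast = [0, 1, 0, 1, 0, 1, 0, 1]  # toy oversampled capture with a fast toggle
filtered = boxcar_lowpass(fast, 2)
print(downsample(filtered, 2))   # -> [0.5, 0.5, 0.5, 0.5]
```

The high-frequency toggle is removed digitally before the rate reduction, which is the work an analog anti-aliasing filter would otherwise have to do alone.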
In practice, oversampling is implemented in order to reduce cost and improve performance of an analog-to-digital converter (ADC) or digital-to-analog converter (DAC).[1] When oversampling by a factor of N, the dynamic range also increases by a factor of N because there are N times as many possible values for the sum. However, the signal-to-noise ratio (SNR) increases by √N, because summing up uncorrelated noise increases its amplitude by √N, while summing up a coherent signal increases its average by N. As a result, the SNR increases by √N.
For instance, to implement a 24-bit converter, it is sufficient to use a 20-bit converter that can run at 256 times the target sampling rate. Combining 256 consecutive 20-bit samples can increase the SNR by a factor of 16, effectively adding 4 bits to the resolution and producing a single sample with 24-bit resolution.[3][a]
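The arithmetic behind this example can be checked directly (a minimal sketch; the variable names are illustrative):

```python
import math

N = 256                           # oversampling factor from the example above
snr_gain = math.sqrt(N)           # coherent signal grows by N, uncorrelated
                                  # noise by sqrt(N), so SNR improves by sqrt(N)
extra_bits = math.log2(snr_gain)  # each doubling of SNR is one more bit
print(snr_gain, extra_bits)       # -> 16.0 4.0
```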
The number of samples required to get n bits of additional data precision is

number of samples = (2^n)^2 = 2^(2n).
To get the mean sample scaled up to an integer with n additional bits, the sum of 2^(2n) samples is divided by 2^n:

scaled mean = (s_1 + s_2 + ... + s_{2^(2n)}) / 2^n.
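This decimation arithmetic can be sketched as follows (hypothetical helper names): n extra bits require 2^(2n) samples, and dividing their sum by 2^n leaves the mean scaled up by 2^n, i.e. n extra integer bits.

```python
def extra_bits_to_samples(n):
    return (2 ** n) ** 2          # (2^n)^2 = 2^(2n) samples needed

def decimate(samples, n):
    # sum 2^(2n) samples, divide by 2^n: mean scaled up by 2^n
    assert len(samples) == extra_bits_to_samples(n)
    return sum(samples) // 2 ** n

print(extra_bits_to_samples(4))   # -> 256 samples for 4 extra bits
print(decimate([10] * 256, 4))    # -> 160, the mean 10 scaled by 2^4
```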
This averaging is only effective if the signal contains sufficient uncorrelated noise to be recorded by the ADC.[3] If not, in the case of a stationary input signal, all samples would have the same value and the resulting average would be identical to this value; so in this case, oversampling would have made no improvement. In similar cases where the ADC records no noise and the input signal is changing over time, oversampling improves the result, but to an inconsistent and unpredictable extent.
Adding some dithering noise to the input signal can actually improve the final result because the dither noise allows oversampling to work to improve resolution. In many practical applications, a small increase in noise is well worth a substantial increase in measurement resolution. In practice, the dithering noise can often be placed outside the frequency range of interest to the measurement, so that this noise can be subsequently filtered out in the digital domain—resulting in a final measurement, in the frequency range of interest, with both higher resolution and lower noise.[4]
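Both effects can be demonstrated with a toy ideal quantizer (a minimal sketch with assumed example values; `adc` and the 10.3 LSB input are hypothetical):

```python
import random

random.seed(0)                    # reproducible dither

def adc(x):
    return round(x)               # ideal quantizer with 1 LSB steps

true_value = 10.3                 # stationary input, in LSB units

# Without noise, every sample quantizes identically; averaging gains nothing.
no_dither = sum(adc(true_value) for _ in range(65536)) / 65536
print(no_dither)                  # -> 10.0

# Uniform +/-0.5 LSB dither randomizes the rounding, so the average of many
# oversampled readings converges on the true sub-LSB value.
dithered = sum(adc(true_value + random.uniform(-0.5, 0.5))
               for _ in range(65536)) / 65536
print(round(dithered, 2))         # close to 10.3
```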
If multiple samples are taken of the same quantity with uncorrelated noise[b] added to each sample, then, because uncorrelated signals combine more weakly than correlated ones as discussed above, averaging N samples reduces the noise power by a factor of N. If, for example, we oversample by a factor of 4, the signal-to-noise ratio in terms of power improves by a factor of four, which corresponds to a factor of two improvement in terms of voltage.
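The factor-of-N reduction in noise power can be verified numerically (a minimal sketch; the zero-valued quantity and unit-variance noise are assumed example values):

```python
import random
import statistics

random.seed(1)

def noisy_sample():
    # one reading of a zero-valued quantity plus unit-variance noise
    return random.gauss(0.0, 1.0)

trials = 20000
single = [noisy_sample() for _ in range(trials)]
avg4 = [sum(noisy_sample() for _ in range(4)) / 4 for _ in range(trials)]

print(statistics.pvariance(single))  # noise power of one sample, near 1.0
print(statistics.pvariance(avg4))    # near 0.25: averaging 4 cuts power by 4
```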
Certain kinds of ADCs known as delta-sigma converters produce disproportionately more quantization noise at higher frequencies. By running these converters at some multiple of the target sampling rate, and low-pass filtering the oversampled signal down to half the target sampling rate, a final result with less noise (over the entire band of the converter) can be obtained. Delta-sigma converters use a technique called noise shaping to move the quantization noise to the higher frequencies.
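The noise-shaping idea can be sketched with a toy first-order delta-sigma modulator (a minimal illustration under simplified assumptions, not a real converter design):

```python
def delta_sigma(samples):
    # First-order delta-sigma loop: integrate the error between the input
    # and the fed-back 1-bit output, then quantize the integrator's sign.
    integrator, feedback, out = 0.0, 0.0, []
    for x in samples:
        integrator += x - feedback
        bit = 1.0 if integrator >= 0 else -1.0
        out.append(bit)
        feedback = bit
    return out

# A constant input of 0.5 yields a 1-bit stream whose average (a low-pass)
# recovers the input; the quantization error is pushed to high frequencies.
bits = delta_sigma([0.5] * 64)
print(sum(bits) / len(bits))      # near 0.5
```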
Consider a signal with a bandwidth, or highest frequency, of B = 100 Hz. The sampling theorem states that the sampling frequency must be greater than 200 Hz. Sampling at four times that rate requires a sampling frequency of 800 Hz. This gives the anti-aliasing filter a transition band of 300 Hz ((fs/2) − B = (800 Hz/2) − 100 Hz = 300 Hz) instead of 0 Hz if the sampling frequency were 200 Hz. Achieving an anti-aliasing filter with a 0 Hz transition band is unrealistic, whereas an anti-aliasing filter with a transition band of 300 Hz is not difficult.
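The transition-band arithmetic above reduces to a one-line calculation:

```python
B = 100.0               # signal bandwidth, Hz
fs = 4 * 2 * B          # 4x oversampling of the 200 Hz Nyquist rate -> 800 Hz

transition_band = fs / 2 - B
print(transition_band)  # -> 300.0 (Hz available for the filter roll-off)
```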
The term oversampling is also used to denote a process used in the reconstruction phase of digital-to-analog conversion, in which an intermediate high sampling rate is used between the digital input and the analog output. Here, digital interpolation is used to add additional samples between recorded samples, thereby converting the data to a higher sample rate, a form of upsampling. When the resulting higher-rate samples are converted to analog, a less complex and less expensive analog reconstruction filter is required. Essentially, this is a way to shift some of the complexity of reconstruction from analog to the digital domain. Oversampling in the ADC can achieve some of the same benefits as using a higher sample rate at the DAC.
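The interpolation step can be sketched for a 2x rate increase (a minimal illustration; linear interpolation stands in for a proper low-pass interpolation filter, and the function name is hypothetical):

```python
def upsample_2x(samples):
    # Double the sample rate by inserting a midpoint between each pair of
    # recorded samples, a crude form of digital interpolation.
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2)   # inserted intermediate sample
    out.append(samples[-1])
    return out

print(upsample_2x([0, 4, 8]))     # -> [0, 2.0, 4, 6.0, 8]
```

The denser sample stream changes more smoothly between values, which is what lets the analog reconstruction filter after the DAC be simpler and cheaper.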
Without increasing the sample rate, we would need to design a very sharp filter that would have to cutoff [sic] at just past 20kHz and be 80-100dB down at 22kHz. Such a filter is not only very difficult and expensive to implement, but may sacrifice some of the audible spectrum in its roll-off.