A frequent claim by detractors of digital audio is that the time resolution is equal to the sampling interval, 22.7 μs for the CD format. This is incorrect. Although there is a limit, it is much smaller, and it does not depend on the sample rate.

The sampling theorem shows that a band-limited signal can be sampled and reconstructed exactly if the sample rate is higher than twice the bandwidth. The catch is that it assumes infinite precision in the samples, which of course is not the case with 16-bit or even 24-bit audio. A finite precision does introduce a small error. However, and this is crucial, this error is unrelated to the sample interval.

Consider a sine wave sampled at regular intervals. Now shift the sine wave sideways a little as shown by the dashed curve in figure 1. Notice that the values where the curve crosses the vertical grid lines have all changed even though the shift is smaller than one interval.

[Figure 1: Sine waves, original and time-shifted]
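
To make this concrete, here is a minimal Python sketch (not part of the original argument; the parameter choices are mine) that shifts a 1 kHz sine by 100 ns, far less than the 22.7 μs sample interval, and counts how many 16-bit sample values change:

```python
import numpy as np

fs = 44100            # CD sample rate; the sample interval is about 22.7 us
f = 1000.0            # signal frequency in Hz
a = 0.5               # amplitude relative to full scale (1.0)
bits = 16             # sample precision
shift = 100e-9        # a 100 ns shift, roughly 1/227 of one sample interval

t = np.arange(fs) / fs                       # one second of sample instants

def quantize(x, bits):
    # Round to the nearest step, with 2**bits - 1 steps across the range -1..+1
    return np.round(x * (2**bits - 1) / 2).astype(np.int64)

original = quantize(a * np.sin(2 * np.pi * f * t), bits)
shifted = quantize(a * np.sin(2 * np.pi * f * (t - shift)), bits)

print(np.count_nonzero(original != shifted), "of", t.size, "sample values changed")
```

Most of the quantized values change, even though the shift is only a tiny fraction of the sample interval.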

The time resolution of the sampled signal is thus the minimum shift required to produce a change in the sampled values. How much a sample value changes for a given time shift depends on the slope of the signal around the sample point. Higher frequencies have steeper slopes, so smaller time shifts can be recorded.

Going back to the sine wave, we observe that it has the steepest slopes at the zero crossings. This maximum slope for a sine wave of frequency \(f\) and amplitude \(a\) is \(2πfa\). Let \(d_{min}\) be the smallest recordable difference in sample values. In the immediate vicinity of the zero crossing, the sine wave can be approximated by a straight line, so we are looking for the time \(t_{min}\) at which \(2πfat_{min} = d_{min}\). Solving for \(t_{min}\) yields the following equation: \[t_{min} = \frac{d_{min}}{2πfa}\]
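
Since the derivation leans on a straight-line approximation near the zero crossing, a quick numerical check (a sketch of mine, with an arbitrarily small \(d_{min}\) chosen purely for illustration) shows how closely it matches the exact time obtained with the arcsine:

```python
import math

# Compare the linearised t_min with the exact time at which a sine of
# frequency f and amplitude a first reaches the value d_min.
# (Illustrative values; d_min is arbitrary here.)
f = 1000.0       # frequency in Hz
a = 1.0          # amplitude
d_min = 1e-5     # a small change in sample value

t_linear = d_min / (2 * math.pi * f * a)             # straight-line approximation
t_exact = math.asin(d_min / a) / (2 * math.pi * f)   # exact zero-crossing time

print(f"linear: {t_linear:.6e} s")
print(f"exact:  {t_exact:.6e} s")
# The two agree to about ten significant digits, so the approximation is safe.
```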

For a sample precision of \(b\) bits, the size of one step (the value of the LSB) is \(2 / (2^b - 1)\). To trigger a change from 0 to 1 in the LSB, the value of the sine wave must reach half this amount. In other words, \(d_{min} = 1 / (2^b - 1)\). Substituting this into the previous equation, we arrive at the final formula: \[t_{min} = \frac{1}{2πfa(2^b - 1)}\]
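
Translated directly into code, the formula becomes a one-line helper (a sketch; the function and argument names are my own, not from any library):

```python
import math

def time_resolution(f_hz, amplitude, bits):
    """Smallest time shift (in seconds) of a sine at f_hz with the given
    amplitude (full scale = 1.0) that changes a sample value at the given
    bit depth, following the formula above."""
    return 1.0 / (2 * math.pi * f_hz * amplitude * (2**bits - 1))
```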

With CD-quality audio, 16 bits at 44.1 kHz, the best-case time resolution is obtained with a full-scale signal at 22.05 kHz. The above formula then yields \(t_{min} = 1 / (2π \times 22050\ \mathrm{Hz} \times 1 \times (2^{16} - 1)) = 110\ \mathrm{ps}\). For a more typical 1 kHz signal at -20 dB, i.e. with an amplitude of 0.1, the same calculation produces a value of 24 ns. Although not nearly as good as the best case, it is still nearly 1000 times better than the erroneously claimed limit of one sample interval.
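
Plugging the two cases into the helper sketched above reproduces the quoted figures:

```python
print(time_resolution(22050, 1.0, 16))   # ~1.10e-10 s, about 110 ps
print(time_resolution(1000, 0.1, 16))    # ~2.43e-08 s, about 24 ns
```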

Notice that the sample rate does not appear in the calculation of the time resolution. It can still make an indirect difference, because the signal frequency is part of the formula: a higher sample rate permits higher-frequency signals, which means smaller time shifts can be captured at a given sample resolution. Of course, this only matters if the signal in question actually contains such high-frequency components, and acoustically recorded music generally does not. It is also doubtful that such small time differences are in any way audible.

