Notes:


There are two kinds of situations where it's needed:
1) real-time effects ("FX") processing
2) real-time synthesis influenced by external controllers

In (1), we have an incoming audio signal that is to be processed in some way (echo, flange, equalization, etc.) and then delivered back to the output audio stream. If the delay between the input and output is more than a few msecs, there are several possible consequences:

* if the original source was acoustic (non-electronic), and the processed material is played back over monitors close to the acoustic source, you will get comb-filtering effects as the two signals interfere with each other.

* the musician will get confused by material in the processed stream arriving "late"

* the result may be useless.
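To get a feel for where those "few msecs" come from, a rough sketch of the latency implied by the audio interface's buffer size (the buffer sizes and sample rate below are illustrative assumptions, not values from the note):

```python
# Latency contributed by one audio buffer: the hardware cannot deliver
# (or accept) samples until a whole buffer has been filled.
def buffer_latency_ms(frames, sample_rate):
    """Time in milliseconds to fill one buffer of `frames` samples."""
    return 1000.0 * frames / sample_rate

# A few common (hypothetical) configurations at 48 kHz.
# Round-trip FX latency is at least one input buffer plus one output
# buffer, so these numbers effectively double in case (1).
for frames in (64, 256, 1024):
    print(f"{frames:5d} frames -> {buffer_latency_ms(frames, 48000):.2f} ms")
```

Note that at 1024 frames a single buffer already costs over 21 ms, well past the "few msecs" threshold, which is why low-latency FX processing pushes toward small buffers.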

In (2), a musician is using, for example, a keyboard that sends MIDI "note on/note off" messages to the computer, which are intended to make the synthesis engine start or stop generating certain sounds. If the delay between the musician pressing the key and hearing the sound exceeds about 5 msec, the system will feel difficult to play; worse, if there is jitter in the +/- 5 msec range, it will feel impossible to play.
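The 5 msec budget above is easy to blow through once each stage is counted. A rough budget sketch, combining the fixed cost of MIDI serial transmission (standard MIDI runs at 31250 baud, 10 bits per byte on the wire) with an assumed audio buffer; the buffer size and sample rate are illustrative:

```python
# Fixed cost of getting a MIDI message off the wire.
MIDI_BAUD = 31250        # standard MIDI serial rate, bits per second
BITS_PER_BYTE = 10       # 8 data bits + start bit + stop bit

def midi_message_ms(n_bytes=3):
    """Transmission time for an n-byte MIDI message (note-on is 3 bytes)."""
    return 1000.0 * n_bytes * BITS_PER_BYTE / MIDI_BAUD

def keypress_to_sound_ms(buffer_frames, sample_rate):
    """Simplified key-press-to-sound latency: MIDI transmission plus one
    output audio buffer. Ignores scheduling delays and driver overhead,
    so the real figure is worse."""
    return midi_message_ms() + 1000.0 * buffer_frames / sample_rate

# With an assumed 256-frame buffer at 48 kHz, the budget is already
# exceeded before any scheduling jitter is accounted for.
print(f"{keypress_to_sound_ms(256, 48000):.2f} ms")
```

This is why real-time synthesis demands both small buffers and a scheduler that can wake the synthesis engine reliably within a millisecond or so.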

(Paul Barton-Davis. Personal email, 31 December 2000)