Latency (audio)

Latency refers to a short period of delay (usually measured in milliseconds) between when an audio signal enters and when it emerges from a system. Potential contributors to latency in an audio system include analog-to-digital conversion, buffering, digital signal processing, transmission time, digital-to-analog conversion and the speed of sound in air.

Broadcast audio

Audio latency is experienced in broadcast systems when someone contributes to a live broadcast over a satellite or similar high-delay link: the presenter in the main studio has to wait for the contributor at the other end of the link to react to questions. Latency in this context can range from several hundred milliseconds to a few seconds. Dealing with audio latencies this high takes special training in order to make the resulting combined audio output reasonably acceptable to listeners. Wherever practical, it is important to keep live production audio latency low throughout the production system so that the reactions and interchange of participants remain as natural as possible. A latency of 10 milliseconds or less is the target for audio circuits within professional production structures.[1]

Telephone calls

In all systems, latency can be said to consist of three elements: codec delay, playout delay and network delay.

Latency in telephone calls is sometimes referred to as mouth-to-ear delay; the telecommunications industry also uses the term quality of experience (QoE). Voice quality is measured according to the ITU E-model; the measurable quality of a call degrades rapidly where mouth-to-ear delay exceeds 200 milliseconds. The mean opinion score (MOS) is comparable in a near-linear fashion with the ITU's quality scale, defined in standards G.107,[2] G.108[3] and G.109,[4] with a quality factor R ranging from 0 to 100. An MOS of 4 ('Good') corresponds to an R of 80 or above; an R of 100 corresponds to an MOS of 4.5.
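
As a rough illustration, the conversion from the R factor to an estimated MOS can be sketched directly from the formula published in G.107; this is a minimal implementation of that mapping, not a full E-model calculation:

```python
def r_to_mos(r: float) -> float:
    """Estimate MOS from an E-model rating factor R (ITU-T G.107)."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    # Near-linear over the middle of the scale, flattening at both ends.
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

print(round(r_to_mos(80), 2))   # ~4.02: R of 80 gives an MOS of about 4 ('Good')
print(round(r_to_mos(100), 2))  # 4.5: the top of the MOS scale
```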

The ITU and 3GPP group end-user services into classes based on latency sensitivity, listed below from most to least delay-sensitive:[5]

  • Conversational Class (3GPP) / Interactive Class (ITU), very sensitive to delay: conversational voice and video, real-time video, real-time data
  • Interactive Class (3GPP) / Responsive Class (ITU): voice messaging, transactional data
  • Streaming Class (3GPP) / Timely Class (ITU): streaming video and voice, non-real-time data
  • Background Class (3GPP) / Non Critical Class (ITU), least sensitive to delay: fax, background data

Similarly, the G.114 recommendation regarding mouth-to-ear latency indicates that most users are "very satisfied" as long as latency does not exceed 200 ms, corresponding to an R of 90 or above. Codec choice also plays an important role; the highest-quality (and highest-bandwidth) codecs like G.711 are usually configured to incur the least encode-decode latency, so on a network with sufficient throughput sub-100 ms latencies can be achieved. G.711, at a bitrate of 64 kbit/s, is the encoding method used on nearly all PSTN/POTS networks.
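
As an illustration of how a G.711 stream is typically carried, the following sketch works out the payload size and on-the-wire bitrate for the common case of 20 ms packets over RTP/UDP/IPv4; the packet interval and header sizes are assumptions about a typical deployment, not part of G.711 itself:

```python
# Packetization arithmetic for G.711 (8-bit samples at 8 kHz = 64 kbit/s),
# assuming 20 ms packets carried over RTP/UDP/IPv4.
SAMPLE_RATE_HZ = 8000
PACKET_MS = 20
HEADER_BYTES = 12 + 8 + 20          # RTP + UDP + IPv4

samples_per_packet = SAMPLE_RATE_HZ * PACKET_MS // 1000   # 160 samples
payload_bytes = samples_per_packet                         # 1 byte per sample
packets_per_second = 1000 // PACKET_MS                     # 50

wire_kbits = (payload_bytes + HEADER_BYTES) * 8 * packets_per_second / 1000
print(f"{payload_bytes} B payload, {wire_kbits} kbit/s on the wire")
# 160 B payload, 80.0 kbit/s on the wire; the 20 ms packet interval is
# itself part of the codec/playout contribution to mouth-to-ear delay.
```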

Cellular calls

The AMR narrowband codec, currently used in UMTS networks, is a low-bitrate, highly compressed, adaptive-bitrate codec achieving rates from 4.75 to 12.2 kbit/s, with 'toll quality' (MOS 4.0 or above) from 7.4 kbit/s upwards. 2G networks use the 12.2 kbit/s AMR mode, equivalent to GSM-EFR. As mobile operators upgrade existing best-effort networks to support multiple concurrent types of service over all-IP networks, services such as Hierarchical Quality of Service (H-QoS) allow per-user, per-service QoS policies to prioritise time-sensitive traffic such as voice calls over other wireless backhaul traffic. Along with more efficient voice codecs, this helps to maintain a sufficient MOS rating whilst the volume of overall traffic on often oversubscribed mobile networks increases with demand.[6][7][8]

Another, often overlooked, contributor to mobile latency is the inter-network handoff: when a customer on network A calls a customer on network B, the call must traverse two separate radio access networks, two core networks and an interlinking Gateway Mobile Switching Centre (GMSC) that performs the physical interconnection between the two providers.[9]

IP calls

On a stable connection with sufficient bandwidth and minimal latency, VoIP systems typically have a minimum of 20 ms of inherent latency and target 150 ms as a maximum for general consumer use. With end-to-end QoS management and assured-rate connections, latency can be reduced to analogue PSTN/POTS levels. Latency is a larger consideration in these systems when an echo is present; popular VoIP codecs such as G.729 therefore perform complex voice activity detection and noise suppression.[10]
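
The following sketch illustrates how such a mouth-to-ear budget might break down; every figure here is an illustrative assumption rather than a measurement of any particular system:

```python
# A hypothetical one-way (mouth-to-ear) delay budget for a VoIP call.
# All figures are illustrative assumptions, not measurements.
budget_ms = {
    "packetization (one 20 ms frame)": 20,
    "jitter buffer (two frames)": 40,
    "encode/decode and OS buffering": 10,
    "network propagation and queuing": 60,
}
for stage, ms in budget_ms.items():
    print(f"{stage:35s}{ms:4d} ms")
total = sum(budget_ms.values())
print(f"{'total mouth-to-ear delay':35s}{total:4d} ms")  # 130 ms, within the 150 ms target
```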

Computer audio

Latency can be a particular problem on computer audio platforms. The standard Microsoft Windows audio drivers, for instance, can introduce latency of up to 500 ms. Optimized audio interface drivers can reduce the delay to times too short for the human ear to detect; reducing buffer sizes to the lowest setting that still functions reliably minimizes the accumulated delay without causing audio stuttering.[11] A popular solution is Steinberg's ASIO, which bypasses the operating system's audio layers and connects audio signals directly to the sound card's hardware. Many professional and semi-professional audio applications use ASIO drivers, allowing users to work with audio in real time.[12] Pro Tools HD offers a low-latency system similar to ASIO, and Pro Tools 10 and 11 are also compatible with ASIO interface drivers.
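
The effect of buffer size follows directly from the sample rate: a buffer of N samples holds N divided by the sample rate seconds of audio. A minimal sketch, assuming a 48 kHz interface:

```python
# Latency contributed by audio I/O buffering: a buffer of N samples at
# sample rate fs holds N / fs seconds of audio.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return 1000.0 * buffer_samples / sample_rate_hz

for n in (64, 256, 2048):
    one_way = buffer_latency_ms(n, 48_000)
    # One input buffer plus one output buffer; driver and converter
    # overheads are ignored here.
    print(f"{n:5d} samples @ 48 kHz: {one_way:5.1f} ms in, {2 * one_way:5.1f} ms round trip")
# 64 samples is ~1.3 ms each way; 2048 samples is ~42.7 ms each way.
```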

The RT kernel (real-time kernel)[13] is a modified Linux kernel that alters the standard timer frequency the Linux kernel uses and gives all processes or threads the ability to run with real-time priority. This means that a time-critical process such as an audio stream can take priority over a less critical process such as network activity. Priorities are also configurable per user; for example, the processes of user "tux" could take priority over the processes of user "nobody" or over the processes of several system daemons. On a standard Linux system, this is possible for only one process at a time.
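
On Linux, a process can request such real-time priority through the SCHED_FIFO scheduling policy; a minimal sketch follows, where the priority value is an arbitrary example and the call requires appropriate privileges:

```python
# Requesting real-time (SCHED_FIFO) priority for the current process on
# Linux. AUDIO_RT_PRIORITY is an arbitrary example value (valid range 1-99);
# the call needs root, CAP_SYS_NICE, or an rtprio limit granted to the user.
import os

AUDIO_RT_PRIORITY = 70

try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(AUDIO_RT_PRIORITY))
    print("running with SCHED_FIFO priority", AUDIO_RT_PRIORITY)
except PermissionError:
    print("insufficient privileges for real-time scheduling")
```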

Digital television audio

Many modern digital television receivers, such as standalone TV sets and set-top boxes, use sophisticated audio processing that can create a delay between the time when the audio signal is received and the time when it is heard on the speakers. Since many of these TVs also delay the video signal during processing, the two signals can remain sufficiently synchronized to be unnoticeable by the viewer. However, if the difference between the audio and video delay is significant, the effect can be disconcerting. Some TVs have a "lip sync" setting that allows the audio lag to be adjusted to synchronize with the video, and others may have advanced settings where some of the audio processing steps can be turned off.

Audio lag is also a significant detriment in rhythm games, where precise timing is required to succeed. Most of these games have a lag calibration setting whereby the game adjusts its timing windows by a certain number of milliseconds to compensate, as sketched below. In these cases, the notes of a song are sent to the speakers before the game even receives the required input from the player in order to maintain the illusion of rhythm. Games that rely upon "freestyling", such as Rock Band drums or DJ Hero, can still suffer tremendously, as the game cannot predict what the player will hit in these cases, and excessive lag will still create a noticeable delay between hitting notes and hearing them play.
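
A toy model of such calibration might look like the following; all names and values are purely illustrative, not taken from any real game:

```python
# A toy model of rhythm-game lag calibration: the hit window is shifted
# by the user's measured latency so that delayed input is not penalized.
CALIBRATION_MS = 45   # measured audio/input lag on this setup
HIT_WINDOW_MS = 50    # tolerance around the shifted note time

def is_hit(input_ms: float, note_ms: float) -> bool:
    # Compare the input against the note time shifted by the calibration.
    return abs(input_ms - (note_ms + CALIBRATION_MS)) <= HIT_WINDOW_MS

print(is_hit(1045, 1000))  # True: on time once latency is accounted for
print(is_hit(1150, 1000))  # False: too late even with compensation
```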

Audio transmission over the Internet

Signals travel through optical network cables at about 2/3 of the speed of light in a vacuum. At this speed, every 588 km adds roughly 3 milliseconds of latency, so the fastest that audio can circle the globe is about 200 milliseconds. In practice, network latency is higher because the path a signal takes between two nodes is not a straight line and because of the signal processing that occurs along the way.
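
The arithmetic behind these figures is straightforward; a minimal sketch, assuming propagation at exactly 2/3 of c:

```python
# Propagation delay through optical fiber at roughly 2/3 of c.
C_KM_PER_S = 299_792                  # speed of light in vacuum
FIBER_KM_PER_S = C_KM_PER_S * 2 / 3

def fiber_delay_ms(distance_km: float) -> float:
    return 1000.0 * distance_km / FIBER_KM_PER_S

print(round(fiber_delay_ms(588), 1))     # ~2.9 ms
print(round(fiber_delay_ms(40_075), 1))  # ~200 ms for Earth's circumference
```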

Audio latency over the Internet is too high for practical real-time coordination of musicians. It might be possible in the future to have real time collaboration within a radius of about 1000 km.[14]

Live performance audio

Latency in live performance arises naturally from the time it takes sound to propagate through air: about 3 milliseconds per metre.[14] Small amounts of latency occur between performers depending on how they are spaced from each other and from stage monitors, if these are used. This creates a practical limit to how far apart the artists in a group can be from one another. Stage monitoring extends that limit, as the electrical signals in the cables that feed stage monitors travel at close to the speed of light.

Performers, particularly in large spaces, also hear reverberation, or echo, of their music as the sound projecting from the stage bounces off walls and structures and returns with latency and distortion. A primary purpose of stage monitoring is to provide artists with more of the primary sound so that they are not thrown off by the latency of these reverberations.

Live signal processing

Professional digital audio equipment has latency associated with two general processes: conversion from one format to another, and digital signal processing (DSP) tasks such as equalization, compression and routing. Analog audio equipment has no appreciable latency.

Digital conversion processes include analog-to-digital converters (ADC), digital-to-analog converters (DAC), and various changes from one digital format to another, such as from AES3, which carries low-voltage electrical signals, to ADAT, an optical transport. Any such process takes a small amount of time to accomplish; typical latencies are in the range of 0.2 to 1.5 milliseconds, depending on sampling rate, bit depth, software design and hardware architecture.[15]

DSP can take several forms; for instance, finite impulse response (FIR) and infinite impulse response (IIR) filters take two different mathematical approaches to the same end and can have different latencies, depending on the lowest audio frequency being processed as well as on software and hardware implementations. Typical latencies range from 0.5 to 10 milliseconds, with some designs having as much as 30 milliseconds.[16]
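
For the common case of a linear-phase FIR filter, the latency is simply the filter's group delay of (taps − 1)/2 samples, so it can be computed directly; a minimal sketch at an assumed 48 kHz sample rate:

```python
# Group delay of a linear-phase FIR filter: (taps - 1) / 2 samples,
# independent of frequency.
def fir_latency_ms(taps: int, sample_rate_hz: int) -> float:
    return 1000.0 * (taps - 1) / (2 * sample_rate_hz)

# Longer filters reach lower frequencies but cost more delay:
for taps in (65, 513, 4097):
    print(f"{taps:5d} taps @ 48 kHz: {fir_latency_ms(taps, 48_000):6.2f} ms")
# 65 taps: ~0.67 ms; 4097 taps: ~42.67 ms
```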

Individual digital audio devices can be designed with a fixed overall latency from input to output or they can have a total latency that fluctuates with changes to internal processing architecture. In the latter design, engaging additional functions adds latency.

Latency in digital audio equipment is most noticeable when a singer's voice is transmitted through their microphone, through digital audio mixing, processing and routing paths, then sent to their own ears via in-ear monitors or headphones. In this case, the singer's vocal sound is conducted to their own ear through the bones of the head, and then arrives through the digital pathway a few milliseconds later. In one study, listeners found latency greater than 15 ms to be noticeable.[17]

Latency for other musical activity, such as playing a guitar, is not as critical a concern. Ten milliseconds of latency is not as noticeable to a listener who is not hearing his or her own voice.[18]

Delayed loudspeakers

In audio reinforcement for music or speech presentation in large venues, it is optimal to deliver sufficient sound volume to the back of the venue without resorting to excessive sound volumes near the front. One way for audio engineers to achieve this is to use additional loudspeakers placed at a distance from the stage but closer to the rear of the audience. Sound travels through air at the speed of sound (around 343 metres (1,125 ft) per second depending on air temperature and humidity). By measuring or estimating the difference in latency between the loudspeakers near the stage and the loudspeakers nearer the audience, the audio engineer can introduce an appropriate delay in the audio signal going to the latter loudspeakers, so that the wavefronts from near and far loudspeakers arrive at the same time. Because of the Haas effect an additional 15 milliseconds can be added to the delay time of the loudspeakers nearer the audience, so that the stage's wavefront reaches them first, to focus the audience's attention on the stage rather than the local loudspeaker. The slightly later sound from delayed loudspeakers simply increases the perceived sound level without negatively affecting localization.
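
A minimal sketch of this calculation, assuming a speed of sound of 343 m/s and the 15 ms Haas offset described above:

```python
# Delay setting for a fill loudspeaker a given distance from the stage,
# plus a Haas-effect offset so the stage wavefront arrives first.
SPEED_OF_SOUND_M_S = 343.0   # at roughly 20 degrees C
HAAS_OFFSET_MS = 15.0

def fill_delay_ms(distance_from_stage_m: float) -> float:
    acoustic_ms = 1000.0 * distance_from_stage_m / SPEED_OF_SOUND_M_S
    return acoustic_ms + HAAS_OFFSET_MS

# A delay tower 40 m in front of the main loudspeakers:
print(round(fill_delay_ms(40.0), 1))  # ~131.6 ms
```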

References

  1. Introduction to Livewire (PDF), Axia Audio, April 2007, retrieved 2011-06-21
  2. "G.107 : The E-model: a computational model for use in transmission planning" (PDF). International Telecommunications Union. 2000-06-07. Retrieved 2013-01-14.
  3. "G.108 : Application of the E-model: A planning guide" (PDF). International Telecommunications Union. 2000-07-28. Retrieved 2013-01-14.
  4. "G.109 : Definition of categories of speech transmission quality - ITU" (PDF). International Telecommunications Union. 2000-05-11. Retrieved 2013-01-14.
  5. O3b Networks and Sofrecom. "Why Latency Matters to Mobile Backhaul - O3b Networks" (PDF). O3b Networks. Retrieved 2013-01-11.
  6. Halachmi, Nir (2011-06-17). "HQoS Solution". Telco.com. Retrieved 2013-01-11.
  7. Cisco. "Architectural Considerations for Backhaul of 2G/3G and Long Term Evolution Networks". Cisco Whitepaper. Cisco. Retrieved 2013-01-11.
  8. "White paper: The impact of latency on application performance" (PDF). Nokia Siemens Networks. 2009. Retrieved 2013-01-11. |first1= missing |last1= in Authors list (help)
  9. "GSM Network Architecture". GSM for Dummies. Retrieved 2013-01-11.
  10. Michael Dosch and Steve Church. "VoIP In The Broadcast Studio". Axia Audio. Retrieved 2011-06-21.
  11. Huber, David M., and Robert E. Runstein. "Latency." Modern Recording Techniques. 7th ed. New York and London: Focal, 2013. 252. Print.
  12. JD Mars. Better Latent Than Never: A long overdue discussion of audio latency issues
  13. Real-Time Linux Wiki
  14. Music Collaboration Will Never Happen Online in Real Time
  15. AES E-Library: Latency Issues in Audio Networking by Fonseca, Nuno; Monteiro, Edmundo
  16. ProSoundWeb. David McNell. Networked Audio Transport: Looking at the methods and factors Archived March 21, 2008, at the Wayback Machine.
  17. Whirlwind. Opening Pandora's Box? The "L" word - latency and digital audio systems
  18. Whirlwind. Opening Pandora's Box? The "L" word - latency and digital audio systems