
Minimizing Live Streaming Latency: Techniques for Achieving the Lowest Delay

What is glass-to-glass latency, also called end-to-end latency?

Glass-to-glass latency, also known as end-to-end latency, refers to the total delay from the time a video is captured by a camera to the time it is displayed on a viewer's device. It includes all the delays that occur in the process of capturing, encoding, transmitting, and decoding the video.


This latency accumulates across every stage of the video pipeline: capturing the video with the camera, encoding it, transmitting it over the network, and decoding and rendering it on the viewer's device.

Glass-to-glass latency is broader than streaming latency, which covers only the delivery portion of the pipeline, from the moment the stream enters the streaming infrastructure to the moment it appears on the viewer's device. Streaming latency is affected by factors such as network conditions, server location, and the specific streaming protocol used.
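Because end-to-end latency is simply the sum of every stage's delay, it can be budgeted stage by stage. The sketch below illustrates this with hypothetical, illustrative numbers (not measured values from any real pipeline):

```python
# Hypothetical per-stage budgets, in milliseconds; the values are
# illustrative only, chosen to show how a latency budget adds up.
PIPELINE_STAGES_MS = {
    "capture": 50,          # camera sensor readout and frame delivery
    "encode": 200,          # encoder lookahead and frame buffering
    "package_upload": 300,  # segmenting and first-mile upload
    "cdn_transit": 150,     # network transfer through the CDN
    "player_buffer": 2000,  # playback buffer on the viewer's device
    "decode_render": 100,   # decode and display on screen
}

def glass_to_glass_ms(stages: dict) -> int:
    """Glass-to-glass latency is the sum of every stage's delay."""
    return sum(stages.values())

print(glass_to_glass_ms(PIPELINE_STAGES_MS))  # 2800
```

A budget like this makes it obvious where to optimize first: in this example the player buffer dominates, which is typical for HTTP-based streaming.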

What is the standard end-to-end latency for HLS?

HLS (HTTP Live Streaming) is a protocol for streaming video over the internet, which is widely used for live streaming and on-demand video. The end-to-end latency for HLS can vary depending on a number of factors, including the quality of the video, the size of the chunks, the network conditions, and the specific implementation of the HLS player.

In practice, standard HLS latency ranges from tens of seconds (around 18 seconds when following current Apple recommendations) to a minute or more. However, advanced implementations can reduce the latency to a few seconds or even below one second. This is achieved by reducing the size of the chunks, using lower-latency encoder settings, and enabling the low-latency mode of HLS, known as LL-HLS.
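A common rule of thumb is that an HLS player starts playback a few segment durations behind the live edge, so segment duration largely determines the latency floor. The simplified model below (segment count and durations are illustrative assumptions, ignoring encode and network delay) shows how the often-quoted 18-second figure arises from classic 6-second segments:

```python
def hls_latency_estimate(segment_duration_s: float,
                         segments_buffered: int = 3) -> float:
    """Rough HLS latency estimate: players typically begin playback
    a few segment durations behind the live edge."""
    return segment_duration_s * segments_buffered

print(hls_latency_estimate(6.0))  # 18.0 -> classic 6 s segments
print(hls_latency_estimate(1.0))  # 3.0  -> shorter segments cut latency
```

Shrinking segments below about one second stops paying off with plain HLS, which is why LL-HLS introduces partial segments instead.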

It's worth noting that the end-to-end latency of HLS is generally higher than that of other streaming protocols such as WebRTC, which is designed for low-latency communication and can achieve latencies of less than a second.

Can a decrease in video quality occur when using low latency streaming?

In general, decreasing the latency of a video stream can have an impact on the video quality, as it often requires changes to the video encoding and transmission process that can affect the visual quality of the video.

To achieve low latency, common techniques include reducing the video resolution, lowering the frame rate, shrinking encoder lookahead and buffers, and using faster, lower-efficiency codec settings or compression methods. Each of these can result in a lower quality video.

Using the low-latency mode of the HLS protocol, which relies on much shorter chunks, can also lead to a decrease in video quality: each chunk must start with a keyframe, and the encoder has less time to optimize each one.
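One way to see the quality cost of shorter chunks: each HLS chunk must begin with a keyframe (IDR frame), and keyframes compress far less efficiently than predicted frames, so shortening chunks raises the keyframe rate at a fixed bitrate. A minimal sketch of that relationship:

```python
def keyframes_per_minute(segment_duration_s: float) -> float:
    """Each HLS segment must start with a keyframe (IDR), so shorter
    segments force more keyframes per minute of video."""
    return 60.0 / segment_duration_s

# Shorter segments -> more keyframes -> fewer bits left for detail
# at the same target bitrate.
for duration in (6.0, 2.0, 1.0):
    print(f"{duration} s segments: {keyframes_per_minute(duration)} keyframes/min")
```

With 1-second segments there are six times as many keyframes as with 6-second segments, which is part of why aggressive latency targets cost visual quality.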

Are Theoplayer HESP and AWS IVS more efficient than LL-HLS for low latency streaming?

Theoplayer HESP (High Efficiency Streaming Protocol) and AWS IVS (Amazon Interactive Video Service) are proprietary low-latency streaming technologies developed by THEO Technologies and Amazon Web Services respectively. Rather than extending LL-HLS, each takes its own approach to cutting delivery delay while still scaling delivery through CDN-style infrastructure.

Theoplayer HESP is an HTTP-based adaptive bitrate protocol designed for sub-second latency. It keeps streams cacheable on standard CDNs like HLS does, while allowing playback to start on almost any frame, which shortens both startup time and live delay compared with LL-HLS's chunk boundaries.

AWS IVS (Amazon Interactive Video Service) is a managed live streaming service that handles ingest, transcoding, and delivery, combining adaptive bitrate streaming with its own low-latency delivery path to achieve latencies in the low single-digit seconds without the operator running any streaming infrastructure.

Both technologies can be more efficient than LL-HLS in certain use cases, but the outcome depends on the specific scenario and the network conditions. They also have limitations: they are not as widely supported as LL-HLS and may tie you to specific player software or a particular vendor.

Is low latency streaming necessary when there is no interaction with viewers and no other transmission of the same content?

Low latency streaming is most useful in situations where there is a need for real-time interaction or near real-time interaction between the viewers and the content, such as live events, gaming, or other scenarios where the audience needs to respond quickly to the content. In these cases, low latency streaming allows for a more seamless and engaging experience for the viewer.

However, in situations where there is no interaction or near-real-time interaction with the viewers, and no parallel transmission of the same content to stay synchronized with, low latency streaming may not be as important. In these cases, the focus shifts to delivering the highest possible video quality and ensuring a smooth, stable streaming experience, rather than reducing the latency.

In scenarios where the content is pre-recorded and there is no real-time interaction, a higher-latency protocol such as standard HLS is a reasonable choice, trading delay for better video quality and playback stability.
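The reasoning above can be condensed into a toy decision rule. The function below is purely illustrative (the protocol labels and branch order are assumptions for the sketch, not a product recommendation):

```python
def pick_protocol(interactive: bool, synced_with_other_feed: bool) -> str:
    """Toy decision rule mirroring the discussion above: pay the
    quality cost of low latency only when something requires it."""
    if interactive:
        # Viewers react to the content in (near) real time.
        return "WebRTC or LL-HLS"
    if synced_with_other_feed:
        # Another transmission of the same content exists
        # (e.g. broadcast TV), so large delay gaps are noticeable.
        return "LL-HLS"
    # No interaction, nothing to sync with: favor quality and stability.
    return "standard HLS"

print(pick_protocol(interactive=False, synced_with_other_feed=False))
```

The point of the sketch is the default branch: when nothing forces low latency, standard HLS with longer segments is usually the better trade.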

© iReplay.TV