Low Latency in Video Streaming: What It Is and How to Achieve It

Delivering a true real-time streaming experience has always been a challenge for the video streaming industry. With time-sensitive content such as live sports events, news, or interactive shows, viewers expect playback to stay as close as possible to the real-time broadcast.

Network latency is the time it takes data to travel from the source to the receiver. High latency causes delays in video playback, and the problem is most noticeable when you stream video online. Online video streaming uses both HTTP-based protocols and dedicated streaming protocols. Some protocols, such as the Real-Time Messaging Protocol (RTMP), prioritize fast video delivery, while others prioritize picture quality.

This post discusses the importance of low video latency in live streaming and how to achieve it.

What Is Video Latency and Why Does It Matter?

Live stream “latency” is the amount of time between capturing a real-life event on camera and seeing it on your screen. For instance, try shaking your head in front of a camera and then watch it move from side to side on the screen.

High latency is perfectly acceptable in certain cases, such as streaming previously recorded events. It gives the system a chance to improve video quality by preventing packet loss. In other cases, however, high-latency video streams can significantly degrade the user experience.

For example, imagine you are watching a football game online while your neighbors watch it live over the air. You will hear them celebrate a goal 20 seconds before you see it. Another example is seeing election results on Facebook before they appear on your TV screen. In these cases, low latency ensures an optimal viewing experience and strong viewer engagement.

The European Broadcasting Union defines live streaming as 7 seconds from glass to glass. In Over the Top (OTT) media services that stream over the internet, latency can vary from 10 seconds to as much as a minute. For real-time applications such as video chat, latency needs to be closer to 200 milliseconds.

What Affects Latency?

A media delivery pipeline contains many different components. While each factor alone may have minimal impact, together they can add up to a significant increase in latency. The list below describes the main components contributing to latency in most streaming systems.

1. Streaming protocols and output formats
The choice of video encoding protocols and formats greatly impacts video latency. In addition, the error correction the protocol uses to overcome packet loss and jitter can add latency of its own.

2. Video Uploading
Video upload over a wireless network has significantly higher latency than a wired broadcasting setup in a news studio.

3. CDN streaming
Most media broadcasters use content delivery networks (CDNs) to deliver content at scale. Content is distributed across many servers, which adds latency.

4. Network type and speed
The network type you choose to transmit your video impacts both latency and quality. Network speed is generally defined by throughput, transmission bandwidth, and the time it takes to transmit a packet.

5. Video player buffer
The role of the buffer is to allow streaming with minimal freezing and choppiness. A buffer loads some of the video data before playing, giving the video player more processing time. In most cases, the buffer size is defined in the player's configuration. As a result, tuning the buffer configuration can have a significant impact on latency, as the sketch after this list shows.
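
As a concrete illustration, here is a minimal sketch of buffer tuning, assuming a player built on the open-source hls.js library; the configuration values and stream URL are illustrative, not recommendations.

```typescript
import Hls from "hls.js"; // one example of a player with a tunable buffer

// A minimal sketch: trading buffer depth for lower live latency.
// Illustrative values; the right numbers depend on the stream and network.
const video = document.querySelector("video") as HTMLVideoElement;

if (Hls.isSupported()) {
  const hls = new Hls({
    liveSyncDurationCount: 2, // play 2 segments behind the live edge (default: 3)
    maxBufferLength: 10,      // keep at most ~10 seconds of forward buffer
  });
  hls.loadSource("https://example.com/live/stream.m3u8"); // hypothetical stream URL
  hls.attachMedia(video);
}
```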

Quality, Latency and Bandwidth: The Trade-off

There are always trade-offs between low latency, video quality, and bandwidth. Higher-quality video streaming means higher frame rates and resolutions, and therefore higher latency and bandwidth requirements. While new and advanced codec technologies help reduce latency, finding the right balance is still important.

Ultimately, the type of application determines the balance between these three considerations. For applications where latency is critical, such as video surveillance, quality is often traded in favor of low latency. For cases where broadcast quality matters most, a little extra latency can be accepted to support advanced video encoding and error correction. The right balance between bandwidth efficiency, picture quality, and latency delivers a great user experience over any network.

Approaches to Reduce Latency

There are a few ways to reduce video latency without compromising quality, but there will always be some level of sacrifice. In some cases, low latency comes at the cost of reduced redundancy, increased complexity, and vendor lock-in.

Short Segments

A video segment consists of many video frames; added together, these segments make up the whole video file. Segment durations vary between streaming setups.

Shorter segments mean lower latency, because most video players buffer a certain number of segments before starting playback. For example, Apple's HTTP Live Streaming (HLS) protocol buffers three segments before starting playback. As a result, three segments of two seconds each yield a latency of 6 seconds, not counting the time it takes to deliver and transcode the segments.
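
The arithmetic is easy to capture in a toy helper, assuming the buffering rule described above (the player waits for a fixed number of segments before playback) and ignoring delivery and transcoding time:

```typescript
// Toy model: minimum startup latency when a player buffers N segments
// of fixed duration before it begins playback.
function minStartupLatencySeconds(segmentDurationSec: number, segmentsBuffered: number): number {
  // Delivery and transcoding time are deliberately ignored here.
  return segmentDurationSec * segmentsBuffered;
}

console.log(minStartupLatencySeconds(2, 3)); // 6  -> the HLS example above
console.log(minStartupLatencySeconds(6, 3)); // 18 -> the same rule with 6-second segments
```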

The DASH protocol, an adaptive bitrate streaming method, similarly specifies how much of a video stream must be buffered before playback. The video player decides how to adapt to the available bandwidth by fetching segments encoded at different bitrates.
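
To illustrate that decision, here is a simplified sketch of per-segment bitrate selection; real players also weigh buffer occupancy and smooth their throughput estimates, and the bitrate ladder here is made up for the example.

```typescript
// Simplified per-segment bitrate selection for an adaptive player (DASH or HLS).
const bitrateLadder = [500_000, 1_500_000, 3_000_000, 6_000_000]; // bits per second

function pickBitrate(measuredThroughputBps: number, safetyFactor = 0.8): number {
  // Spend only a fraction of the measured throughput to leave headroom.
  const budget = measuredThroughputBps * safetyFactor;
  const candidates = bitrateLadder.filter((b) => b <= budget);
  // Highest rendition that fits the budget, or the lowest one as a fallback.
  return candidates.length > 0 ? candidates[candidates.length - 1] : bitrateLadder[0];
}

console.log(pickBitrate(2_000_000)); // 1500000: highest rendition within 80% of throughput
```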

Real-Time Communication Protocols

The main focus of protocols like Web Real-Time Communication (WebRTC) is real-time distribution of data, such as peer-to-peer video chat. The technology is used in applications like Facebook video chat, Google Hangouts, and many others. Since WebRTC is designed for real-time communication, it offers low-latency live video streaming capabilities.
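
For a sense of what this looks like in a browser, here is a minimal sketch that captures the camera and offers the stream over a WebRTC peer connection. The signaling step (exchanging the offer, answer, and ICE candidates) is application-specific and only hinted at in a comment.

```typescript
// Minimal browser-side WebRTC publishing sketch.
async function startLowLatencyPublish(): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // public STUN server
  });

  // Capture camera and microphone and attach the tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Create and apply the local offer; how it reaches the peer is up to the app.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // sendToSignalingServer(offer); // hypothetical signaling transport

  return pc;
}
```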

Leverage the WebSockets Protocol

WebSockets allow fast transfer of data across a single TCP connection. The biggest drawback of this protocol is the absence of standardization for delivering media. However, some organizations have leveraged frame-by-frame transfer over WebSockets to reduce video latency, as the sketch below illustrates.
Another downside of this approach is the scaling cost: WebSockets can significantly increase the overall cost of streaming as you scale.
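
One common shape of the frame-by-frame pattern (chunk-by-chunk in practice) pairs the browser's MediaRecorder with a WebSocket: short encoded media chunks are pushed to the server as soon as they are produced. This is only a sketch; the ingest endpoint is hypothetical, and a real server would need to understand the incoming WebM chunks.

```typescript
// Sketch: push short encoded media chunks over a WebSocket as they are produced.
async function streamOverWebSocket(): Promise<void> {
  const ws = new WebSocket("wss://example.com/ingest"); // hypothetical ingest endpoint
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm;codecs=vp8,opus" });

  recorder.ondataavailable = (event) => {
    if (event.data.size > 0 && ws.readyState === WebSocket.OPEN) {
      ws.send(event.data); // each Blob is one small chunk of encoded media
    }
  };

  ws.onopen = () => recorder.start(250); // emit a chunk roughly every 250 ms
}
```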

Chunked Encoding

The Common Media Application Format (CMAF) is a standard designed to simplify the delivery of media over HTTP. CMAF reduces latency with chunked encoding: the video is split into smaller chunks of a set duration, and each chunk is encoded and delivered much faster than a full segment. That way, chunked encoding can achieve near real-time video delivery.
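
The client side of chunked delivery can be sketched with a streaming fetch: thanks to HTTP chunked transfer encoding, a player can consume a segment as its chunks arrive instead of waiting for the whole segment. The URL and file name below are illustrative.

```typescript
// Sketch: read a media segment chunk by chunk as it arrives over HTTP.
async function readSegmentChunks(url: string): Promise<void> {
  const response = await fetch(url);
  const reader = response.body!.getReader();

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // A real player would append each chunk to a MediaSource buffer here.
    console.log(`received chunk of ${value.byteLength} bytes`);
  }
}

readSegmentChunks("https://example.com/live/segment-001.cmfv"); // illustrative CMAF segment
```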

Another interesting option in this category is “Low-Latency HLS” (LHLS), developed by Twitter. In this solution, video is delivered with chunked encoding, similar to CMAF; unlike CMAF, however, it uses transport stream (TS) segments. Companies that require a high level of control and flexibility can benefit from this approach.

Conclusion

As viewers migrate from traditional TV broadcasting to online video streaming, low-latency video delivery and playback become more essential. There is no one-size-fits-all solution for low-latency streaming; the streaming setup and its associated technology need to be tweaked and tuned to meet customer requirements.
