
Australia, New Zealand, China, Africa: how to adapt live streaming for remote locations

There are several potential issues that may be encountered when live streaming to or from distant locations. These include:
  • Latency: Streaming over long distances introduces latency, the delay between the live event and the moment the video reaches the streaming server and then the viewer. Long-haul links are also more prone to reduced bandwidth and packet loss, which can lead to dropped frames on the broadcaster side or the viewer side.
  • Geoblocking: Some countries have restrictions in place that prevent streaming content from being viewed within their borders. This can make it difficult for viewers in these countries to access the live stream. As an example, in order to distribute content within China, a valid Internet Content Provider (ICP) license is required.
  • Cost: Streaming live video can be costly, particularly in areas where internet infrastructure is limited.

A failover architecture is a must for anyone expecting playback reliability


Closer to the streaming server


For optimal performance and minimal latency, the live streaming ingest point (an RTMP, SRT, or HLS publishing URL on the origin server) should be situated as close as possible to the source of the live video feed, typically at the event location or wherever the video is produced. Placing the origin near the source allows the video feed to be captured and handed to the CDN quickly, reducing the time it takes to reach the viewer and improving the overall streaming experience. Another strategy is to operate multiple origin servers in different regions, so that if the video source is far from the majority of the audience, the feed can be directed to the nearest origin for distribution.
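
As a minimal sketch of the multi-origin idea, the TypeScript snippet below probes a few regional ingest endpoints over HTTPS and publishes to the one with the lowest round-trip time; the region names and health-check URLs are hypothetical placeholders for whatever your streaming provider exposes.

    // Sketch: pick the lowest-latency ingest region before publishing.
    // The hostnames below are hypothetical; replace them with the regional
    // RTMP/SRT/HLS ingest endpoints of your own platform or provider.
    const INGEST_REGIONS: Record<string, string> = {
      "ap-southeast-2": "https://ingest-sydney.example.com/health",
      "ap-east-1": "https://ingest-hongkong.example.com/health",
      "af-south-1": "https://ingest-capetown.example.com/health",
    };

    async function measureRtt(url: string): Promise<number> {
      const start = Date.now();
      await fetch(url, { method: "HEAD", cache: "no-store" });
      return Date.now() - start;
    }

    async function pickClosestIngest(): Promise<string> {
      const results = await Promise.all(
        Object.entries(INGEST_REGIONS).map(async ([region, url]) => ({
          region,
          rtt: await measureRtt(url).catch(() => Number.POSITIVE_INFINITY),
        }))
      );
      results.sort((a, b) => a.rtt - b.rtt);
      console.log("RTT per region:", results);
      return results[0].region; // publish to this region's RTMP/SRT URL
    }

    pickClosestIngest().then((region) => console.log("Publish to:", region));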

Closer to the viewer


Employing in-country and near-border CDN Points of Presence (PoPs) can improve the streaming experience by minimizing latency and maximizing available bandwidth, simply because the CDN edge servers sit closer to the audience. Utilizing multiple CDNs also helps distribute traffic and increases streaming reliability.
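
In the same spirit, a rough multi-CDN sketch (with hypothetical CDN hostnames) can let the player bootstrap try the same HLS master playlist on several CDNs in order of preference and start playback from the first one that answers:

    // Sketch: return the first reachable CDN for a given HLS master playlist.
    // The hostnames are placeholders for your own multi-CDN configuration.
    const CDN_PLAYLISTS = [
      "https://cdn-a.example.com/live/event/master.m3u8",
      "https://cdn-b.example.com/live/event/master.m3u8",
      "https://cdn-c.example.com/live/event/master.m3u8",
    ];

    async function resolvePlaybackUrl(): Promise<string> {
      for (const url of CDN_PLAYLISTS) {
        try {
          const res = await fetch(url, { method: "HEAD", cache: "no-store" });
          if (res.ok) return url; // first healthy CDN wins
        } catch {
          // network error or blocked CDN: try the next one
        }
      }
      throw new Error("No CDN is reachable for this viewer");
    }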

Methods for adapting streaming to remote locations


Using buffers on the broadcaster encoder, streaming server, and viewer's player can help to reduce the effects of network variations for live streaming. Buffers act as temporary storage for video data and can help to smooth out any discrepancies in the flow of data.

For instance, when the broadcaster's network is unreliable, the encoder buffer can temporarily store video data, enabling the encoder to maintain a consistent transmission rate (or to use a protocol more tolerant of network variation, such as HLS) rather than having to slow down or interrupt the stream whenever conditions change. Similarly, buffers on the streaming server and in the viewer's player help absorb the impact of network congestion by temporarily storing video data.

However, buffering also has downsides, most notably the added delay (latency) it introduces into the stream.
In general, buffers are an effective way to smooth out network variations, but the right balance between buffering and latency depends on the specific requirements of your live event and its audience, and is best settled in discussion with a streaming expert.
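
As one concrete, viewer-side example, the open-source hls.js player exposes its buffer behaviour through the configuration object; the values below are purely illustrative and deliberately trade extra latency for resilience:

    import Hls from "hls.js";

    // Sketch: enlarge the viewer-side buffer to ride out network variations.
    // Bigger buffers improve resilience on unstable links but add latency.
    const hls = new Hls({
      maxBufferLength: 60,       // target forward buffer, in seconds
      maxMaxBufferLength: 120,   // hard cap the buffer may grow to
      liveSyncDurationCount: 5,  // play this many segments behind the live edge
    });

    const video = document.querySelector("video") as HTMLVideoElement;
    hls.loadSource("https://cdn-a.example.com/live/event/master.m3u8"); // placeholder URL
    hls.attachMedia(video);

Low-latency setups would use smaller values; the point is that buffer depth is an explicit knob to tune, not a fixed property of HLS.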

How to leverage cache mechanisms for HLS audio and video segments?


Leveraging cache mechanisms for HLS (HTTP Live Streaming) audio and video segments can significantly improve the performance and reliability of the audio and video delivery. Here are some steps to consider when leveraging cache mechanisms for HLS audio and video segments:

Use caching servers: A caching server can be used to cache HLS audio and video segments, reducing the load on the origin server and improving the delivery performance.

Utilize HTTP headers: Proper use of the "Cache-Control" and "Expires" headers helps ensure that cached segments are stored and retrieved correctly. "Cache-Control" specifies the maximum age of a cached segment, while "Expires" sets a specific expiration time.
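
A minimal sketch of that idea as a Node.js/TypeScript handler, with illustrative TTL values: live playlists change every few seconds and should expire quickly, while published media segments never change and can be cached aggressively.

    import { createServer } from "node:http";

    // Sketch: set cache headers differently for playlists and segments.
    // Live .m3u8 playlists are rewritten every few seconds, so keep their TTL short;
    // .ts / .m4s segments are immutable once written, so cache them for a long time.
    const ONE_YEAR_MS = 365 * 24 * 3600 * 1000;

    const server = createServer((req, res) => {
      const url = req.url ?? "/";
      if (url.endsWith(".m3u8")) {
        res.setHeader("Cache-Control", "public, max-age=2");
      } else if (url.endsWith(".ts") || url.endsWith(".m4s")) {
        res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
        res.setHeader("Expires", new Date(Date.now() + ONE_YEAR_MS).toUTCString());
      }
      // ...serve the file from disk or proxy it from the packager here...
      res.end();
    });

    server.listen(8080);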

Monitor cache performance: Regularly monitoring metrics such as the cache hit rate and cache size helps ensure that the cache is working efficiently and is sized for current demand.

Store popular segments for a longer time: Popular audio and video segments can be stored in the cache for a longer time compared to less popular segments. This can be achieved by using a caching algorithm that takes into account the popularity of the segments and the time since they were last requested.

Use an intelligent caching algorithm: An intelligent caching algorithm, such as Least Recently Used (LRU) or Least Frequently Used (LFU), can be used to manage the cache, ensuring that the most popular and frequently accessed segments are stored in the cache and that the less popular segments are evicted to make room for new content.
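
The sketch below ties the last three points together: a tiny in-memory LRU cache for segments that also tracks its hit rate. A production setup would rely on a CDN or caching proxy instead, and could additionally weight eviction by segment popularity.

    // Sketch: an in-memory LRU cache for HLS segments with hit-rate counters.
    class SegmentCache {
      private cache = new Map<string, Uint8Array>();
      private hits = 0;
      private misses = 0;

      constructor(private maxEntries: number) {}

      get(key: string): Uint8Array | undefined {
        const value = this.cache.get(key);
        if (value === undefined) {
          this.misses++;
          return undefined;
        }
        // Re-insert so this segment becomes the most recently used entry.
        this.cache.delete(key);
        this.cache.set(key, value);
        this.hits++;
        return value;
      }

      put(key: string, value: Uint8Array): void {
        if (this.cache.has(key)) this.cache.delete(key);
        if (this.cache.size >= this.maxEntries) {
          // A Map preserves insertion order: the first key is the least recently used.
          const oldest = this.cache.keys().next().value as string;
          this.cache.delete(oldest);
        }
        this.cache.set(key, value);
      }

      // Cache hit rate, useful for the monitoring step described above.
      hitRate(): number {
        const total = this.hits + this.misses;
        return total === 0 ? 0 : this.hits / total;
      }
    }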

Use VOD2Live (think YouTube Premiere) to cache content before it is displayed to viewers: by using VOD2Live (pre-recorded video broadcast as a live stream) and playing the content within the scope of the event (e.g., a corporate LAN), you can ensure that all viewers enjoy high-quality playback on their devices and computers. Because the content is cached beforehand, all media segments are retrieved from your cache server(s).

By leveraging cache mechanisms for HLS audio and video segments, organizations can improve delivery performance and reduce the load on the origin server, providing a better user experience for their audience.

Use "compute at the edge" mechanisms


The widespread adoption of external content delivery networks (CDNs) has significantly reduced platforms' control over how content is delivered. In certain circumstances, however, some level of control is crucial, and it can be regained through live manifest generation. The most effective approach to managing difficult deliveries is dynamic live manifest manipulation, now made simpler by advanced features such as HLS Content Steering. In practice, this means online video platforms must either retain control over manifest generation themselves or invest in more expensive options like "compute at the edge" to add this capability on top of their CDN providers.
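
As an illustration of what such edge logic can look like, here is a sketch of an edge function (assuming a Fetch-API-compatible runtime) that serves an HLS Content Steering manifest and reorders CDN pathways by viewer country; the pathway IDs, the country mapping, and the geolocation header are all hypothetical.

    // Sketch: an edge function returning an HLS Content Steering manifest.
    // The player re-fetches this manifest periodically and switches CDN
    // pathways according to PATHWAY-PRIORITY.
    export async function handleSteeringRequest(request: Request): Promise<Response> {
      // Many edge platforms expose the caller's country; the header name varies by provider.
      const country = request.headers.get("x-viewer-country") ?? "AU";

      // Assumed mapping: prefer the CDN that performs best for that region.
      const pathwayPriority =
        country === "CN" ? ["CDN-B", "CDN-A"] : ["CDN-A", "CDN-B"];

      const steeringManifest = {
        "VERSION": 1,
        "TTL": 300, // seconds before the player reloads this manifest
        "RELOAD-URI": "https://steering.example.com/manifest.json", // placeholder
        "PATHWAY-PRIORITY": pathwayPriority,
      };

      return new Response(JSON.stringify(steeringManifest), {
        headers: { "Content-Type": "application/json" },
      });
    }

The multivariant playlist then references this endpoint via an EXT-X-CONTENT-STEERING tag.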

How to easily test the reliability of your online video platform in any country?


Install a VPN of your choice, ideally one with PoPs in difficult-to-reach countries (I would recommend CyberGhost VPN), and launch it.

Open the Chrome Developer Tools window docked beside a fullscreen player, then check the Network tab for variant playlists (m3u8 requests) and video and audio segments (m4s for modern HLS, ts for legacy HLS, pre-2016). Start the playback, put your headphones on, then switch from one country to another during playback and observe the following (a scripted version of the same check is sketched after the list):
  • the continuity of the stream
  • the quality switching that might happen
  • recovery from a lost connection (red lines in the Network tab), with or without video segment losses
  • potential failover between streaming servers or CDNs in some countries (check the full segment URLs)
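
To complement the manual check, a small script run while connected to each VPN location can fetch the master playlist and the first variant playlist and report status codes and timings; the playlist URL below is a placeholder.

    // Sketch: a scripted complement to the manual DevTools check.
    // Run it while connected to the VPN location you want to test.
    const MASTER_PLAYLIST = "https://cdn-a.example.com/live/event/master.m3u8"; // placeholder

    async function timedFetch(url: string): Promise<{ status: number; ms: number }> {
      const start = Date.now();
      const res = await fetch(url, { cache: "no-store" });
      await res.arrayBuffer(); // force the full download
      return { status: res.status, ms: Date.now() - start };
    }

    async function checkStream(): Promise<void> {
      console.log("master playlist:", await timedFetch(MASTER_PLAYLIST));

      // Very rough parsing: take the first variant playlist URI from the master.
      const text = await (await fetch(MASTER_PLAYLIST, { cache: "no-store" })).text();
      const variant = text.split("\n").find((line) => line.trim().endsWith(".m3u8"));
      if (variant) {
        const variantUrl = new URL(variant.trim(), MASTER_PLAYLIST).toString();
        console.log("first variant playlist:", await timedFetch(variantUrl));
      }
    }

    checkStream().catch((err) => console.error("stream unreachable:", err));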


From this test, you will not only know whether your OVP is able to stream to your target audience in the selected countries, but you will also be able to request stronger bandwidth or better reliability where necessary.

Please see below an example of a VOD2Live FAST channel playback by iReplay.TV in different countries of the world.



Article written by
Sylvain Corvaisier, independent streaming and iOS engineer
LinkedIn

Last modified: January 7th, 2025