In early March 2026, Iranian drones struck three AWS facilities in the Middle East. Two in the UAE were hit directly. One in Bahrain took damage from a nearby explosion. Fire, structural damage, water damage from firefighting, power disruptions. The whole package.
AWS told customers to back up their data, consider migrating workloads to other regions, and direct traffic away from Bahrain and the UAE. That is AWS, the largest cloud provider on the planet, telling you in plain language: we cannot guarantee your uptime here.
This is the first confirmed military strike on a hyperscale cloud provider. It will not be the last.
The cloud has an address
The streaming industry has spent a decade pretending “the cloud” is some abstract, infinitely resilient layer that just works. It is not. The cloud runs on physical servers, in physical buildings, with physical power supplies. And those buildings have coordinates that can be plugged into a drone’s navigation system.
Banking apps, payment services, delivery platforms, enterprise software: they all went dark across the Gulf region when those drones hit. Streaming services running origin servers or packaging pipelines in me-central-1 or me-south-1 were no exception.
If your HLS origin sits in a single AWS region, your stream is exactly as resilient as the concrete walls of that datacenter. That is not a metaphor anymore.
Why streaming is especially vulnerable
A website going down for 30 minutes is painful. A live stream going down for 30 seconds is a catastrophe. Viewers leave. They do not come back for the rest of the event. Ad revenue evaporates. Contractual penalties kick in.
Streaming has unique fragility points that generic cloud resilience advice does not cover:
Manifest continuity. When a CDN fails mid-stream, the player needs to fetch the next segment from somewhere else without breaking the ABR session. If your manifests are not designed for multi-CDN delivery, a failover means a full player restart for every viewer. (A sketch of the manifest-level fix follows this list.)
Origin shielding dependency. Most architectures use a single origin shield between the packager and the CDN edge. If that shield sits in a region that goes offline, your edge nodes have nothing to pull from. The cache eventually expires and you are done.
DRM license servers. Widevine and PlayReady license acquisition happens at stream startup and at key rotation intervals. If your license server runs in one region and that region goes dark, new viewers cannot start playback. Existing viewers get cut off at the next key rotation.
Ad insertion infrastructure. SSAI decision servers, ad tracking beacons, companion ad APIs: these all have their own infrastructure dependencies. A stream can technically stay up while the ad pipeline collapses, turning your monetized stream into free content.
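One concrete mitigation for the manifest-continuity problem is built into HLS itself: redundant variant streams. If the master playlist repeats each rendition with identical attributes but a different URI, players that implement the feature (AVPlayer, hls.js, and most commercial players) fail over to the backup entry on error without tearing down the session. A minimal sketch in TypeScript; the CDN hostnames and paths are placeholders:

```typescript
// Sketch: build an HLS master playlist with redundant variant streams.
// Players that support redundant-stream failover move to the cdn-b
// entry when the matching cdn-a entry stops responding, keeping the
// ABR session alive. All hostnames and paths are placeholders.

const renditions = [
  { bandwidth: 4_000_000, resolution: "1920x1080", path: "1080p/index.m3u8" },
  { bandwidth: 1_200_000, resolution: "1280x720", path: "720p/index.m3u8" },
];

const cdns = ["https://cdn-a.example.com/live", "https://cdn-b.example.com/live"];

function buildMasterPlaylist(): string {
  const lines = ["#EXTM3U", "#EXT-X-VERSION:6"];
  for (const r of renditions) {
    // One EXT-X-STREAM-INF entry per CDN with identical attributes:
    // that is what marks the second entry as a backup of the first.
    for (const cdn of cdns) {
      lines.push(
        `#EXT-X-STREAM-INF:BANDWIDTH=${r.bandwidth},RESOLUTION=${r.resolution}`,
        `${cdn}/${r.path}`
      );
    }
  }
  return lines.join("\n");
}

console.log(buildMasterPlaylist());
```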
What to do about it
The good news: none of this is unsolvable. The bad news: most streaming operators have never tested any of it.
1. Know your actual dependency chain
Before you can fix anything, you need to see the problem. Most streaming engineers have a rough mental model of their architecture but have never actually mapped every origin, every CDN, every DRM endpoint, every ad server, and every DNS dependency.
Run your stream URL through a proper analyser. Look at the full manifest tree. Check where each segment is actually being served from. Identify which CDN is doing the heavy lifting. See whether your redundancy is real or just a line item on a slide deck.
Test your stream resilience now on iReplay.TV Stream Analyser →
The analyser will show you the CDN serving your segments, the origin chain behind your manifests, your segment durations (which directly impact failover time), and whether your stream could survive a regional outage. Five minutes of analysis can save you from discovering your single points of failure during a live event.
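For a rough first pass you can also script it yourself. A naive sketch (Node 18+ with built-in fetch; the stream URL is a placeholder, and a real audit needs a proper playlist parser that also follows EXT-X-MEDIA URIs and segment URLs):

```typescript
// Sketch: list the distinct hosts referenced by an HLS master playlist.
// Deliberately naive line-based parsing; a production audit should use
// a real m3u8 parser and walk media playlists and segments too.

const MASTER_URL = "https://stream.example.com/live/master.m3u8"; // placeholder

async function listHosts(masterUrl: string): Promise<Set<string>> {
  const hosts = new Set<string>([new URL(masterUrl).host]);
  const body = await (await fetch(masterUrl)).text();
  for (const line of body.split("\n")) {
    const trimmed = line.trim();
    if (trimmed === "" || trimmed.startsWith("#")) continue; // skip tags
    // Resolve relative URIs against the master playlist URL.
    hosts.add(new URL(trimmed, masterUrl).host);
  }
  return hosts;
}

listHosts(MASTER_URL).then((hosts) => {
  console.log("Hosts in the delivery chain:", [...hosts]);
  if (hosts.size === 1) {
    console.warn("Every URI resolves to one host: a single point of failure.");
  }
});
```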
2. Implement real multi-CDN, not checkbox multi-CDN
Having two CDN contracts is not a multi-CDN strategy. A real multi-CDN setup means:
- Your manifests contain segment URLs that can be resolved to multiple CDN endpoints
- Your player or manifest manipulation layer can switch CDNs mid-session without interrupting playback (see the loader sketch below)
- You have tested failover under real load, not just on a whiteboard
- Your origin can handle the thundering herd when all traffic suddenly shifts to the surviving CDN
Most streaming operators discover during their first real outage that their “multi-CDN” is actually two CDNs with manual DNS switchover and a 30-minute TTL. That is not resilience. That is hope.
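Stripped to its core, the mid-session switch from the second bullet is just a segment loader that tries the next CDN before surfacing an error to the player. A minimal sketch with placeholder hostnames; in practice you would wire this into your player's loader hook (hls.js, for instance, accepts custom loaders) and add backoff and health scoring:

```typescript
// Sketch: fetch a segment path from the first CDN that answers.
// Placeholder hostnames; no backoff or health scoring, which a
// production implementation would need.

const CDN_HOSTS = ["https://cdn-a.example.com", "https://cdn-b.example.com"];

async function fetchSegment(path: string): Promise<ArrayBuffer> {
  let lastError: unknown;
  for (const host of CDN_HOSTS) {
    try {
      const res = await fetch(`${host}${path}`, { signal: AbortSignal.timeout(3000) });
      if (res.ok) return res.arrayBuffer();
      lastError = new Error(`HTTP ${res.status} from ${host}`);
    } catch (err) {
      lastError = err; // network error or timeout: try the next CDN
    }
  }
  throw lastError; // every CDN failed; now it is the player's problem
}

// The path is identical on both CDNs, so the ABR session state
// (current level, buffer) survives the switch; only the host changes.
fetchSegment("/live/720p/segment_001.m4s").then((buf) =>
  console.log(`Fetched ${buf.byteLength} bytes`)
);
```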
3. Distribute your origin and packaging
If your live packager runs in a single cloud region, you have a single point of failure. Period. Run redundant packaging in at least two geographically separated regions. Use separate cloud providers if you can stomach the operational complexity.
For VOD, make sure your origin storage is replicated cross-region with automatic failover. S3 cross-region replication is the obvious AWS answer, but after March 2026, the smarter question is: should your backup origin even be on AWS?
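For the AWS path, cross-region replication is a small amount of configuration once versioning is enabled on both buckets. A hedged sketch using the AWS SDK for JavaScript v3; the bucket names and role ARN are placeholders, and the exact rule shape is worth checking against current S3 documentation:

```typescript
import { S3Client, PutBucketReplicationCommand } from "@aws-sdk/client-s3";

// Sketch: replicate a primary VOD origin bucket to a second region.
// Assumes versioning is already enabled on both buckets and the IAM
// role grants the replication permissions. All names are placeholders.

const s3 = new S3Client({ region: "eu-west-1" });

await s3.send(
  new PutBucketReplicationCommand({
    Bucket: "vod-origin-primary",
    ReplicationConfiguration: {
      Role: "arn:aws:iam::123456789012:role/vod-replication-role",
      Rules: [
        {
          ID: "replicate-all-to-backup-region",
          Status: "Enabled",
          Priority: 1,
          Filter: {}, // empty filter = replicate every object
          DeleteMarkerReplication: { Status: "Disabled" },
          Destination: { Bucket: "arn:aws:s3:::vod-origin-backup" },
        },
      ],
    },
  })
);
```

Replication inside one provider only answers the regional question, though; the provider-level question above still stands.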
4. Audit your DRM and ad infrastructure
DRM license servers and SSAI decision engines are the hidden single points of failure in most streaming architectures. They are often hosted in one region, by one provider, with no failover plan beyond “it has never gone down.”
Until a drone hits the building.
Check where your Widevine/PlayReady proxy runs. Check where your SSAI decision server lives. Check whether your ad beacons can survive a regional outage. The stream analyser can help you spot some of these dependencies.
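The failover pattern from the multi-CDN section applies directly to license acquisition. A minimal client-side sketch; the proxy endpoints are placeholders, and the challenge bytes are whatever your EME key session's message event hands you:

```typescript
// Sketch: POST the Widevine/PlayReady license challenge to the first
// license proxy that responds. Endpoints are placeholders; in EME terms,
// `challenge` comes from the key session's "message" event, and the
// returned bytes are fed to session.update().

const LICENSE_PROXIES = [
  "https://license-eu.example.com/widevine",
  "https://license-us.example.com/widevine",
];

async function acquireLicense(challenge: Uint8Array): Promise<ArrayBuffer> {
  let lastError: unknown;
  for (const url of LICENSE_PROXIES) {
    try {
      const res = await fetch(url, { method: "POST", body: challenge });
      if (res.ok) return res.arrayBuffer();
      lastError = new Error(`HTTP ${res.status} from ${url}`);
    } catch (err) {
      lastError = err; // try the secondary proxy
    }
  }
  throw lastError;
}
```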
5. Design for degraded mode, not just full uptime
Infrastructure resilience is about keeping the stream alive. But real-world resilience also means having a plan for when the stream cannot stay alive. The best news and sports apps do not just go black when the CDN drops. They degrade gracefully.
Offline playback for short-form content. Brief news clips, highlights, pre-recorded bulletins: these can be pre-downloaded to the device and served locally when connectivity degrades or backend infrastructure fails. HLS supports offline playback natively on Apple platforms, and ExoPlayer/Media3 supports HLS downloads on Android. If your app delivers news or short-form content, there is no excuse for not caching the latest batch of clips on the device. When the datacenter in Bahrain goes dark, your users still have something to watch. The key is to refresh the offline cache aggressively during normal operation so the content stays relevant.
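On Apple platforms the native route is AVAssetDownloadTask; on Android, ExoPlayer/Media3's download manager. For a web client, one workable approach is a service worker with the Cache API. A sketch with placeholder clip URLs:

```typescript
// Sketch (service worker): pre-cache the latest short-form clips during
// normal operation, then serve them locally when the network or the
// backend fails. Clip URLs are placeholders.

const CLIP_CACHE = "news-clips-v1";
const LATEST_CLIPS = [
  "/clips/headline-0900.mp4",
  "/clips/headline-1200.mp4",
  "/clips/sports-recap.mp4",
];

self.addEventListener("install", (event: any) => {
  // Refresh aggressively: re-download the whole batch on every update.
  event.waitUntil(
    caches.open(CLIP_CACHE).then((cache) => cache.addAll(LATEST_CLIPS))
  );
});

self.addEventListener("fetch", (event: any) => {
  // Network first; fall back to the cached copy when delivery fails.
  event.respondWith(
    fetch(event.request).catch(async () =>
      (await caches.match(event.request)) ??
      new Response("Offline and not cached", { status: 503 })
    )
  );
});
```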
Push notifications as an alternative delivery channel. When your streaming infrastructure is partially down, push notifications become your emergency broadcast system. A well-designed notification strategy can redirect users to working mirrors, deliver text-based news summaries, or simply acknowledge the outage and set expectations. Push infrastructure (APNs, FCM) runs on Apple and Google’s own systems, completely independent from your streaming backend. If your CDN fails but your notification pipeline still works, you can keep your audience informed and engaged instead of letting them churn to a competitor in silence.
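As an illustration of the FCM side using the firebase-admin SDK (the topic name, message text, and fallback URL are placeholders; APNs has an equivalent flow through its own HTTP/2 API):

```typescript
import { initializeApp } from "firebase-admin/app";
import { getMessaging } from "firebase-admin/messaging";

// Sketch: broadcast an outage notice to every device subscribed to a
// hypothetical "service-status" topic. The send runs from your side,
// but delivery rides Google's infrastructure, not your streaming backend.

initializeApp(); // uses GOOGLE_APPLICATION_CREDENTIALS from the environment

await getMessaging().send({
  topic: "service-status",
  notification: {
    title: "Live stream temporarily degraded",
    body: "Video is down in your region. Tap for the audio-only feed.",
  },
  data: { fallbackUrl: "https://audio.example.com/live/master.m3u8" },
});
```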
Audio-only fallback. A full video stream at 4 Mbps is a lot of infrastructure to keep alive under stress. An audio-only stream at 64 kbps is roughly 60 times cheaper to deliver and can run on a fraction of the bandwidth and server capacity. For news content especially, audio-only is a perfectly acceptable degraded mode. Many viewers already listen to news streams while commuting or multitasking. Building an explicit audio-only rendition into your ABR ladder means your service stays alive even when video delivery is compromised. It also opens the door to delivery over protocols that are more resilient to packet loss, or even over plain podcast infrastructure as a last resort.
Here is the problem: a surprising number of streams still run on legacy MPEG-2 Transport Stream packaging, where audio and video are muxed together. No way to request audio only. No way to degrade gracefully. The player downloads the full muxed segment or nothing. If your stream is still on MPEG-2 TS with no standalone audio rendition, you are missing the single cheapest resilience lever available to you. Moving to fMP4/CMAF with a separate audio-only variant in your master playlist is the fix. The iReplay.TV Stream Analyser will tell you in seconds whether your stream has an audio-only rendition or if you are still stuck in TS-only territory.
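What the fix looks like at the manifest level: a CMAF ladder whose bottom rung is audio-only. A sketch with placeholder URIs; the parts that matter are the audio-only CODECS value and the 64 kbps BANDWIDTH on the last variant:

```typescript
// Sketch: a master playlist whose bottom rung is an audio-only CMAF
// rendition. URIs are placeholders. A player under sustained bandwidth
// or delivery pressure can drop all the way to the 64 kbps variant.

const masterPlaylist = `#EXTM3U
#EXT-X-VERSION:6
#EXT-X-STREAM-INF:BANDWIDTH=4000000,RESOLUTION=1920x1080,CODECS="avc1.640028,mp4a.40.2"
1080p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=1280x720,CODECS="avc1.64001f,mp4a.40.2"
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.2"
audio/index.m3u8`;

console.log(masterPlaylist);
```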
These are not consolation prizes. They are the difference between “the app is broken” and “the app still works, just differently right now.” Users forgive temporary degradation. They do not forgive silence.
The new reality
The Iran conflict forced a conversation the streaming industry was not ready to have. Cloud infrastructure is not invincible. Geographic diversification is not optional. And “it probably won’t happen to us” is not a resilience strategy.
The IRGC explicitly named US tech companies as legitimate military targets. Google, Microsoft, and Oracle all run datacenters in the same region. The next strike could hit a different provider, a different region, or a submarine cable landing point. The fiber routes in and out of the Gulf are limited, and they are not getting less vulnerable.
If your streams rely on infrastructure in the Middle East, or if your global architecture has hidden dependencies on a single cloud provider, now is the time to find out. Not during the next strike.
Analyse your stream’s resilience → ireplay.tv/tools/stream-analyser