When 4,000 employees join the same all-hands at the same time, somebody is paying somebody for those bytes. Usually the somebody is a Microsoft Stream or Zoom Webinar or Vimeo Enterprise or Brightcove subscription: the bytes briefly leave the corporate network, bounce off a third-party datacenter, and come back into the same network on a different connection. The CEO is in a meeting room two doors down, and his video goes to Frankfurt and comes back.
This works fine until somebody asks the question that doesn’t have a great answer. “Why does our internal town hall traffic transit a third-party SaaS?” Compliance asks it before a regulated industry audit. Security asks it after every breach disclosure that mentions a streaming vendor. The CFO asks it when the per-viewer SaaS bill renews. IT asks it the first time the building’s WAN link saturates because the all-hands hit a record turnout.
The honest answer is that corporate live streaming evolved around RTMP encoders and SaaS platforms because the alternatives were either expensive (Cisco enterprise video, Kollective and Hive eCDN appliances, internal CDN gear) or homemade in a way nobody on a typical IT team had the bandwidth to maintain. Enterprises picked SaaS because nothing else was within reach.
That has changed. The encoder, the transcoder, the packager and the publisher have collapsed into a single piece of software running on a Mac. The destination doesn’t need to be a vendor. Pointed at a web server inside your firewall, the same software produces broadcast-standard HLS that your employees stream over the corporate LAN, and nothing leaves the network.
This article is about doing exactly that.
What “internal corporate live” actually looks like
Most internal video at a typical company falls into a small list of recurring use cases.
Quarterly all-hands and CEO town halls. Big audiences (often the entire company), low frequency (four to twelve times a year), modest production values. Latency tolerances are loose: fifteen to thirty seconds is fine because there is no two-way interaction during the keynote, and Q&A is handled in a separate channel.
Training sessions and certification courses. Sometimes live, often re-watched. A few hundred concurrent viewers maximum. Production is one camera, one presenter, slides shared from a laptop.
Product launches and engineering announcements. Internal-only because the announcement is for staff before it reaches the press. Sensitive about leaks.
Compliance and HR briefings. Annual ethics training, regulatory updates, health-and-safety. Audit trail matters more than production polish.
Security incident updates. Internal-only by default because the content is incident details. Has to stay inside.
In each of those use cases, the audience is internal employees on the corporate network or on the corporate VPN. The content is internal by policy. Sending the bytes to a SaaS, having the SaaS serve them back to your employees, paying per minute, and asking the legal team to add another data processor agreement is the path of least resistance, not the right architecture.
What an on-prem alternative looks like in 2025
The architecture is small enough to draw on a sticky note.
One Mac in the meeting room. The speaker's MacBook, a USB camera, the building's AV system feeding audio into the Mac, or any combination thereof. Apple silicon has hardware H.264 and HEVC encoders, the same media engine Final Cut Pro uses for export. Encoding 1080p at 5 Mbps, 720p at 3 Mbps and 540p at 1.8 Mbps simultaneously, in real time, is what the chip is designed for.
One web server inside the firewall. Anything that accepts HTTPS PUT and serves files: a repurposed file server, a small Linux VM, an existing internal IIS or nginx that already serves intranet content. The endpoint receives small fragmented MP4 segments (six seconds each) and tiny .m3u8 playlist files. Storage is trivial: the three quality levels total about 9.8 Mbps, roughly 1.2 MB per second, so a one-hour broadcast uses a little over 4 GB. The server doesn't transcode, it doesn't repackage, it just stores and serves.
An HLS player embedded in the intranet page. Any modern HLS player works: hls.js for browsers without native HLS, the browser’s built-in support on Safari, the Apple TV in the cafeteria, the meeting-room display systems most companies already run.
That is the entire architecture. The bytes never leave the network. There is no streaming server to license, no cloud transcoder to bill, no eCDN agent to install on every employee laptop, no SaaS subscription. The Mac runs My Live TV Channel, which encodes the stream and uploads it. The web server you already have does the rest.
Why this works at scale on a corporate LAN
The first reaction from infrastructure people is usually that one web server can’t handle a town hall audience. On the public internet, that intuition is correct. On a corporate LAN, it is not.
A 1080p HLS variant at 5 Mbps saturates a 1 Gbps uplink at about 200 concurrent viewers (1,000 ÷ 5). A 10 Gbps uplink carries roughly 2,000 concurrent 1080p viewers. The 720p variant at 3 Mbps brings those numbers to about 333 and 3,300 respectively. In an adaptive-bitrate ladder, viewers spread across the rungs and most corporate laptops settle on 720p, so realistic concurrent capacity on a single 10 Gbps server is several thousand employees.
If your audience is larger than that, the next step is not a SaaS. It is a second internal server, or your existing internal CDN (most large enterprises already have caching infrastructure for software updates and intranet content), or a single nginx caching tier in front of the upload server. None of this requires a vendor, and all of it is cheaper than per-viewer SaaS billing.
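As a hedged illustration of that last option, here is roughly what a single nginx caching tier might look like. Everything in it is a placeholder to adapt (hostnames, paths, TTLs), a sketch rather than a tested configuration:

```nginx
# Hypothetical caching tier in front of the upload server.
# origin.corp, the cache path, and the TTLs are placeholders.
proxy_cache_path /var/cache/hls keys_zone=hls:10m max_size=10g inactive=1h;

server {
    listen 80;   # TLS termination elided for brevity

    # Segments never change once written: cache them aggressively.
    location ~ \.(m4s|mp4)$ {
        proxy_pass  https://origin.corp;
        proxy_cache hls;
        proxy_cache_valid 200 10m;
    }

    # Playlists change every segment: a one-second TTL plus a cache lock
    # collapses thousands of identical polls into one origin fetch.
    location ~ \.m3u8$ {
        proxy_pass  https://origin.corp;
        proxy_cache hls;
        proxy_cache_valid 200 1s;
        proxy_cache_lock on;
    }
}
```

The design choice worth noting: segments are immutable once written, so they cache for as long as you like, while playlists change every few seconds, so a very short TTL with a cache lock is what turns a thundering herd of viewers into one origin request per interval.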
When the audience is too big for unicast
The unicast LAN math above tops out around several thousand concurrent viewers per server. If a global all-hands puts 50,000 employees on the corporate network at the same minute, or you are streaming to a stadium-sized venue full of company laptops, unicast hits a wall and stacking more servers becomes wasteful.
The traditional answer is multicast: one copy of each segment travels to a multicast group, every viewer subscribes, the network duplicates packets at switches and routers as needed. A single 5 Mbps stream feeds the entire campus from one source, with no per-viewer bandwidth multiplication.
In practice, multicast on a corporate network is a “yes, but” situation:
- It only works inside a multicast-enabled network. Most corporate networks have IGMP and PIM disabled by default for security reasons. Turning them on is a network engineering project, not a setting.
- It does not cross most VPN connections. Remote workers fall back to unicast.
- It does not cross WAN links between offices without explicit MPLS multicast or similar.
- It is mostly useful inside one campus or one building where IT has built the multicast trees deliberately.
If your network supports it, the way to use it with this architecture is to keep the encoder Mac doing what it does (HLS over HTTPS PUT to the internal web server) and add a multicast translator downstream. Open-source tools like tsduck and commercial appliances from network vendors will tail the HLS output, build an MPEG-TS multicast feed, and announce it via SAP or SDP. Endpoints that support multicast (VLC, set-top boxes in meeting rooms, smart TVs in the cafeteria) join the group instead of pulling unicast HLS. Endpoints that do not (most laptops in a browser) keep using the unicast HLS player served from the same web server. One Mac, one upload destination, plus a single multicast translator covers both viewer types.
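For concreteness, here is one hedged sketch of such a translator using ffmpeg rather than tsduck: it pulls the unicast HLS playlist and remuxes the same media, without re-encoding, into an MPEG-TS multicast feed. The playlist URL and the multicast group are placeholder assumptions:

```sh
# Hypothetical HLS-to-multicast translator (URL and group are placeholders).
# -c copy remuxes without re-encoding; pkt_size=1316 fits 7 TS packets per datagram.
ffmpeg -i "https://intranet.corp/intranet-broadcasts/town-hall-2025-q4/index.m3u8" \
  -c copy -f mpegts "udp://239.255.0.1:5004?pkt_size=1316&ttl=16"
```

A multicast-capable endpoint such as VLC would then join the group directly with vlc udp://@239.255.0.1:5004, while browser viewers keep pulling the unicast HLS untouched.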
For most companies this section is theoretical: unicast handles the all-hands without difficulty. For the few large enterprises where it is not theoretical, the architecture extends without changing the encoder.
Setting it up the first time
The first run takes about an afternoon for an IT team that has not done internal streaming before. The rough order:
1. Pick the destination web server. Any internal web server with HTTPS works. Configure a directory like /intranet-broadcasts/town-hall-2025-q4/ with HTTPS PUT enabled, no authentication if it is behind the firewall, or basic auth if you prefer. nginx with the dav_module does this in roughly ten lines of config (a sketch follows this list); Apache's mod_dav is the equivalent.
2. Test the endpoint. From any internal machine, curl -X PUT https://intranet.corp/intranet-broadcasts/test/index.m3u8 --data "test" should return a 201. If it does, the destination is ready.
3. Configure My Live TV Channel on the presenter's Mac. Add an HTTP PUT destination pointing at the directory. Save it. Test the connection. Create a Stream Profile with the bitrate ladder you want.
4. Embed an HLS player on the intranet page. A <video controls> tag pointing at the playback URL is enough on Safari. For Chrome and Edge, drop in hls.js (one script tag and a few lines of JavaScript). The whole player is fewer than thirty lines of HTML; see the second sketch after this list.
5. Dry-run with the technical team. Stream from the Mac for ten minutes. Have a few internal viewers tune in. Watch the dashboard. Confirm the segments are appearing on the destination server, the player is picking them up, and the latency is acceptable.
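Here is a hedged sketch of the nginx side, assuming the stock ngx_http_dav_module. The hostname, paths, and certificate locations are placeholders, not a tested drop-in:

```nginx
# Hypothetical receive endpoint for the encoder's HTTPS PUTs.
server {
    listen 443 ssl;
    server_name intranet.corp;                          # placeholder hostname

    ssl_certificate     /etc/nginx/certs/intranet.crt;  # your internal CA
    ssl_certificate_key /etc/nginx/certs/intranet.key;

    location /intranet-broadcasts/ {
        root /var/www;                  # files land in /var/www/intranet-broadcasts/
        dav_methods PUT DELETE MKCOL;   # ngx_http_dav_module
        create_full_put_path on;        # create the broadcast directory on first PUT
        client_max_body_size 100m;      # generous headroom for a 6-second segment
    }
}
```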
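And the player embed, in the same hedged spirit: the playlist URL is a placeholder, and hls.js is served from the intranet so playback has no external dependency.

```html
<!-- Minimal intranet player; playlist URL and script path are placeholders. -->
<video id="player" controls autoplay playsinline width="960"></video>
<script src="/intranet-assets/hls.min.js"></script>  <!-- host a copy of hls.js internally -->
<script>
  const src = "https://intranet.corp/intranet-broadcasts/town-hall-2025-q4/index.m3u8";
  const video = document.getElementById("player");
  if (video.canPlayType("application/vnd.apple.mpegurl")) {
    video.src = src;            // Safari and Apple TV: native HLS
  } else if (Hls.isSupported()) {
    const hls = new Hls();      // Chrome and Edge: hls.js over Media Source Extensions
    hls.loadSource(src);
    hls.attachMedia(video);
  }
</script>
```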
That is the entire setup. Subsequent broadcasts reuse the same destination, the same profile, and the same player embed.
What about Q&A, polls, and the rest
The architecture above handles the broadcast, not the interactive layer. For Q&A, polls, and reactions during the town hall, most companies already have a Slack or Microsoft Teams channel that serves perfectly well, runs at sub-second latency over the same internal network, and integrates with the directory the rest of the company uses. Trying to make a streaming SaaS the chat tool too is what makes those products expensive. Splitting the broadcast and the interaction into the tools your company already runs is what makes the architecture cheap.
If you want a 24/7 internal information channel (“CorpTV”), the My TV Channel playout app fills the time between live broadcasts with scheduled content (recorded all-hands archives, training videos, internal news, manager updates). Same destination server, same player embed. Employees who land on the channel page on a Tuesday afternoon find something playing instead of a “stream offline” screen.
What stays your problem
Self-hosting internal video has trade-offs worth being honest about.
Recording and retention. The destination server keeps every broadcast indefinitely unless you set up a cleanup job. That is good for archives and bad for storage if you forget. Most companies want a retention policy on internal broadcasts (typically aligned with regulatory or legal requirements). Set it up once, in cron, and forget it.
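A minimal sketch of that cleanup job, assuming broadcasts live under /var/www/intranet-broadcasts and a 90-day policy; both the path and the window are assumptions to adjust:

```sh
# Hypothetical crontab entry on the destination server: every night at 03:00,
# delete broadcast directories that have not been touched in 90 days.
0 3 * * * find /var/www/intranet-broadcasts -mindepth 1 -maxdepth 1 -type d -mtime +90 -exec rm -rf {} +
```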
Captions and accessibility. Live captioning is not built into the encoder app, and that turns out to be less of a problem than it sounds. Modern browsers and operating systems generate live captions on the client side, on-device, with no server involvement: Chrome’s Live Caption feature works on any video playing in the browser, macOS Live Captions does the same system-wide in Safari, Edge has its own equivalent. Each viewer turns captions on or off in their own browser. If you need server-side captioning for compliance reasons (the captions need to be part of the recording, or you cannot rely on the viewer’s browser), route the audio through a captioning service (Microsoft’s built-in captions, AWS Transcribe, internal tools) and overlay the captions in the player. The HLS layer does not change in either case.
Authentication. Inside the firewall, “anyone on the corp network can watch” is often acceptable. If you need stronger guarantees (executive briefings, M&A announcements), put the playback URL behind your existing SSO or VPN posture checks. Standard web auth, not video-specific.
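As a sketch of what "standard web auth" can mean here, nginx's auth_request module can gate the playback directory on a subrequest to whatever session-check endpoint your SSO exposes. The /sso/verify path and sso.corp host are hypothetical:

```nginx
# Hypothetical SSO gate in front of playback; endpoint names are placeholders.
location /intranet-broadcasts/ {
    auth_request /sso/verify;   # a 2xx from the subrequest allows playback
    root /var/www;
}

location = /sso/verify {
    internal;
    proxy_pass https://sso.corp/session/verify;  # your identity provider's check
    proxy_pass_request_body off;                 # forward only headers and cookies
    proxy_set_header Content-Length "";
}
```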
Cross-region offices. If you have offices on multiple continents joining a single broadcast, you will want either a regional cache server in each region, or a deliberate decision to use the corporate WAN. WANs are typically rate-limited, so the LAN math above does not apply. Plan accordingly.
The honest verdict
The cost question is the easiest one to overstate. SaaS bills for internal streaming exist, but they are rarely the deciding factor on their own. The deciding factor for most companies that move to an on-prem architecture is that internal video should stay internal. The CEO’s voice, the engineering announcement, the M&A briefing, the security incident debrief, the compliance training: none of those need to leave the corporate network to be delivered to the corporate audience that is sitting on it. Once that is the frame, “we send a copy through a vendor’s datacenter and pay for the round trip” is the part that needs justifying, not the on-prem alternative.
Compliance teams ask the same question in regulated industries: which third parties have access to recordings of internal employee meetings? Security teams ask it after every breach disclosure that mentions a streaming vendor. Data residency rules in some regions ask it before the broadcast happens at all. An architecture where the bytes never leave the network removes the question.
If you have those concerns and your IT team is tired of provisioning another vendor’s eCDN agent on every laptop, the practical answer is now small enough to set up in an afternoon. A Mac in the meeting room, a web server you already have, and an app you can try free for a week. The architecture diagram fits on a sticky note. The number of vendors with access to your CEO’s voice goes from “several” to “zero”.
Try My Live TV Channel free for 7 days on the Mac App Store →