I believe you still pay Cloudflare costs, but the traffic between Cloudflare and B2 is free on both platforms. But it might be worth double-checking the fine print.
> So you would need to chunk up your videos to get free bandwidth.
Having a functional video player on your site or in your app (e.g. one where you can skip to arbitrary times without requiring the video be buffered up to that point; or where a video can be "resumed" from the middle if you leave it and come back) already requires that you use MPEG-DASH or HLS; which in turn implies/necessitates pre-chunking, no?
Is there some use-case where people are currently serving 400MB contiguous video files from a CDN? I can't think of one. YouTube doesn't. Netflix doesn't. Even porn sites don't.
I guess Archive.org does host some large video files in various places that can be direct-downloaded; but Archive.org's own recommendation is to consume those via BitTorrent. Presumably they don't have a CDN partner willing to handle their unique workload for cheap.
You don’t need DASH or HLS to seek to an arbitrary point quickly. The mp4 container format has an index that maps playhead time -> byte offset. This index lives either at the end of the file or at the beginning, and tools like qt-faststart move it to the beginning, which makes videos start much quicker when served over HTTP. Browsers use the index to issue range GET requests and can seek just fine.
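For the curious, the "index at the front vs. back" question is checkable by walking the file's top-level boxes (size + type headers). A minimal sketch, using a synthetic in-memory file rather than a real mp4, that does the same check qt-faststart does:

```python
import io
import struct

def top_level_boxes(f):
    """Yield (type, offset, size) for each top-level box in an MP4-style file."""
    f.seek(0, io.SEEK_END)
    end = f.tell()
    offset = 0
    while offset < end:
        f.seek(offset)
        header = f.read(8)
        if len(header) < 8:
            break
        size, box_type = struct.unpack(">I4s", header)
        if size == 1:          # 64-bit "largesize" stored in the next 8 bytes
            size = struct.unpack(">Q", f.read(8))[0]
        elif size == 0:        # box extends to end of file
            size = end - offset
        yield box_type.decode("ascii", "replace"), offset, size
        offset += size

def is_faststart(f):
    """True if the 'moov' index box precedes the 'mdat' media data box."""
    order = [t for t, _, _ in top_level_boxes(f) if t in ("moov", "mdat")]
    return len(order) == 2 and order[0] == "moov"

def box(box_type, payload=b""):
    """Build a fake box: 4-byte big-endian size, 4-byte type, payload."""
    return struct.pack(">I4s", 8 + len(payload), box_type) + payload

# Synthetic layout with mdat before moov, i.e. NOT faststart:
fake = io.BytesIO(box(b"ftyp", b"isom") + box(b"mdat", b"\x00" * 32) + box(b"moov", b"\x00" * 16))
print(is_faststart(fake))  # → False
```

If `moov` is at the back, a player over HTTP has to fetch the tail of the file first before it can seek, which is exactly the startup delay qt-faststart (or ffmpeg's `-movflags +faststart`) removes.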
Serving a 400 MB video via a CDN is highly dependent on the CDN. Some will assemble a cache entry from a stream of range GET requests, fetching the missing pieces from origin themselves, and work brilliantly; other CDNs should be avoided.
> Having a functional video player on your site or in your app (e.g. one where you can skip to arbitrary times without requiring the video be buffered up to that point; or where a video can be "resumed" from the middle if you leave it and come back) already requires that you use MPEG-DASH or HLS; which in turn implies/necessitates pre-chunking, no?
Browsers are smart. They only buffer a few megabytes at a time and can seek around pretty efficiently.
That requires the video to be encoded in a way where you can just start reading the stream from any random byte offset, and everything will still work. Video files are not usually encoded this way (any more.) Resume an MP4 or MKV video half-way through, without reading the TOC-ish stuff from the first chunk, and you'll get garbage that maybe resyncs after 20 seconds.
It's totally possible to "encode for streaming", but it usually results in both an increase in overhead [more keyframes] and a decrease in quality [inability to use predictive interpolation, instead relying only on forward-interpolation.]
Mind you, this streaming-enabled encoding is how things were done on the web, before the advent of MPEG-DASH/HLS; and it's still how e.g. the MP2 encoding of digital cable/satellite video works. But we don't really want to go back to those days. They kind of sucked.
Jumping to random byte offsets in a video also tends to screw with any embedded data streams like subtitles or thumbnails, which tend to just be stored in most media container formats as a single chunk at the beginning/end of the file, rather than being spread or copied across the stream. Again, the kind of captioning done back in the MP2 days is immune to this, but it kind of sucked as well (e.g. it wouldn't trigger if you happened to skip to the millisecond after the instruction for it appeared in the stream, often leaving you with ~30 seconds of untranslated audio.)
I don't think this is quite right. If you serve an mp4 with H.264 video statically on any basic webserver, it will just work in the browser over plain HTTP, with no need for MPEG-DASH/HLS. Every widely used media player/browser just downloads from the nearest keyframe before the time that was seeked to and resumes playback from there. That point is found through an index stored in the container format. With typical settings for basically any modern codec (say, at least as new as H.264), this means downloading and decoding only a few extra seconds of video before the seek point, which is effectively instant for normal online consumption. H.264 using forward prediction (e.g. via two-pass encoding) will play back fine too.
I think what you're saying applies more to a setting where the video is being streamed live, so that you cannot access the start of the file to get keyframe metadata. In that case HLS and MPEG-DASH help.
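The index lookup described above is essentially a binary search over (keyframe time -> byte offset) pairs. A toy sketch with made-up sample points (a real player reconstructs this table from the moov box's sample tables):

```python
import bisect

# Hypothetical keyframe index as a player might build it from the mp4's
# moov box: (time in seconds, byte offset). Values are invented for illustration.
keyframes = [(0.0, 4_096), (2.0, 910_000), (4.0, 1_830_000), (6.0, 2_700_000)]
times = [t for t, _ in keyframes]

def seek_offset(seconds):
    """Byte offset of the nearest keyframe at or before the requested time."""
    i = bisect.bisect_right(times, seconds) - 1
    return keyframes[max(i, 0)][1]

# Seeking to 5.3s: decode starts at the 4.0s keyframe, so the browser issues
# a range request starting at that byte offset.
print(seek_offset(5.3))  # → 1830000
```

The "few extra seconds to decode" cost is just the gap between the seek target and the preceding keyframe.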
That's odd. I can't recall ever having a problem with browsers playing mp4s in a vanilla <video> tag, as long as I encode them in the main H.264 profile, AAC audio, and the MOOV atom at the front (see [0] for the ffmpeg command). Obviously the server has to support byte-range (Range:) requests.
My impression is DASH/HLS are mostly useful for adjusting bitrate on the fly.
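For what "the server has to support Range requests" means mechanically: the browser sends e.g. `Range: bytes=1000000-` and the server replies 206 with just that slice. A minimal sketch of the parsing side (single-range only, not a production implementation; see RFC 7233 for the full grammar):

```python
import re

def parse_range(header, file_size):
    """Parse a simple single-range 'bytes=start-end' header value.
    Returns an inclusive (start, end) pair, or None if unsatisfiable."""
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", header.strip())
    if not m or (m.group(1) == "" and m.group(2) == ""):
        return None
    start, end = m.group(1), m.group(2)
    if start == "":                      # suffix range: the last N bytes
        length = int(end)
        return (max(file_size - length, 0), file_size - 1)
    start = int(start)
    end = int(end) if end else file_size - 1
    if start >= file_size:
        return None
    return (start, min(end, file_size - 1))

# The browser seeks and asks for everything from byte 1,000,000 of a 400 MB file:
print(parse_range("bytes=1000000-", 400_000_000))  # → (1000000, 399999999)
```

The server then responds `206 Partial Content` with a `Content-Range: bytes start-end/total` header and only those bytes, which is all the <video> tag needs to skip around a monolithic mp4.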
When the parent talks about jumping to random byte offsets, they mean you don't have the first part of the file at all. You just have an arbitrary 512-MB chunk out of the middle.
But they were claiming that a single monolithic file would break too, which is not the case. The browser does a range request to get the first part, then a range request to get the part you're playing, and it works.
Right. Like tuning into a digital-cable signal "in the middle"†. You just get bytes of the stream starting from an arbitrary offset, without having seen/processed anything before that (and without even being able to request anything before that), and you need to resynchronize from what you've got.
† I mean, a digital-cable video stream is always "in the middle" unless you're just starting a VOD stream, but still.
The browser just reads the TOC-ish stuff from the first chunk. Trust me, it works. I regularly load plain old multi-hundred-megabyte mp4s in my browser, off a web server, and skip around without problems. The default keyframe interval from x264 is fine. You don't have to do any horrible things to the encoding, you just have to start loading a few seconds before the seek point. Which the browser does automatically.