
How does the browser request an audio stream? And is it doing streaming, or "progressive downloads"?


The following code seems to stream audio from the host (Google Cloud Storage). It can be seen buffering while playing, so it's clearly not downloading the entire file before playing:

<audio controls preload="auto">
  <source src="https://my-audiofile-on-google-cloud-storage.mp3" type="audio/mpeg">
</audio>

I'd love to know:

  • Is it doing "true streaming" or progressive downloads?
  • What kind of request is the audio player making to the storage server?
  • Would any host be able to stream? I'm using Google; wondering how common it is for storage providers to stream content.
  • Can the streaming parameters be changed? (for example, how much buffering is done)

Solution

    It can be seen buffering while playing, so it's clearly not downloading the entire file before playing

    That's correct. The browser will download some data ahead of time, but won't download the entire file.

    Is it doing "true streaming" or progressive downloads?

    The browser doesn't really know or care what's upstream from it. It requested some MP3 data, and it got some MP3 data. It matters not whether that data came in at a typical live data rate, whether it was recorded live, or whether it was loaded from disk first.

    This is considered "progressive" streaming, but it's still "streaming". Don't read too much into the distinction. (And if there's a context for why you're asking these questions, please tell us so we can provide better answers.)

    What kind of request is the audio player making to the storage server?

    An HTTP GET request, as specified by your HTML. The URL begins with https://, so that request is made over TLS. When you seek, the browser typically sends a Range header so the server returns only the byte range it needs (a 206 Partial Content response) rather than the whole file from the beginning.
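    To make that concrete, here is a sketch of the byte-range exchange behind playback and seeking. The function below is illustrative, not a real server API; it shows how a server might slice the file for a single "bytes=start-end" Range header and answer with 206 Partial Content:

    ```javascript
    // Illustrative sketch of how a server satisfies a byte-range request.
    // `file` is the full media file as a Buffer; `rangeHeader` is the value
    // of the request's Range header (e.g. "bytes=0-1023"), or null.
    function servePartial(file, rangeHeader) {
      const size = file.length;
      const m = /^bytes=(\d+)-(\d*)$/.exec(rangeHeader || "");
      if (!m) {
        // No usable Range header: send the whole file with 200 OK.
        return { status: 200, contentLength: size, body: file };
      }
      const start = Number(m[1]);
      // An open-ended range like "bytes=2000-" means "from 2000 to the end".
      const end = m[2] === "" ? size - 1 : Math.min(Number(m[2]), size - 1);
      return {
        status: 206, // Partial Content
        contentRange: `bytes ${start}-${end}/${size}`,
        body: file.slice(start, end + 1),
      };
    }
    ```

    When you drag the seek bar, the browser aborts the current download and issues a fresh GET with a Range header for the new position; the server only needs to honor Range requests for seeking to work this way.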

    Would any host be able to stream? I'm using Google; wondering how common it is for storage providers to stream content.

    This is streaming. Really not sure what you're getting at here.

    Streaming over HTTP is by far the most common method of streaming these days. The only viable alternative is WebRTC, which is geared toward low-latency live streaming, has serious quality tradeoffs, and usually requires expensive infrastructure. You have pre-recorded files, so it definitely doesn't apply to your use case. Even if you were streaming live, you could still use HTTP, via HLS/DASH, or even progressive HTTP live streaming through something like SHOUTcast/Icecast.

    Can the streaming parameters be changed? (for example, how much buffering is done)

    When you use an HTMLAudioElement, you delegate these parameters to the browser. Usually, it makes decent decisions. The browser knows, for example, whether or not the user is in some sort of data or battery saving mode, or how much memory is available. Sometimes, it may choose not to prebuffer the media data for these reasons. Other times, it may prebuffer extra if resources aren't a problem.
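    The one knob the element itself exposes is the preload attribute, and even that is only a hint the browser is free to ignore (for instance, in a data-saver mode). Extending the snippet from the question:

    ```html
    <!-- "preload" is only a hint; the browser may override it.
         "none":     fetch nothing until playback starts.
         "metadata": fetch just enough to know duration, etc.
         "auto":     let the browser prebuffer as much as it likes. -->
    <audio controls preload="metadata">
      <source src="https://my-audiofile-on-google-cloud-storage.mp3" type="audio/mpeg">
    </audio>
    ```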

    You can get more control by switching to Media Source Extensions (MSE). There are limitations on codec and container support, but you gain direct control over the fetching and buffering of media data. You might also consider a Service Worker to intercept the media requests, but note that a Service Worker isn't guaranteed to be installed and running (for example, on a first visit), so you cannot rely on it being loaded.
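    As a rough illustration of the MSE approach: this is browser-only code, the URL is a placeholder, and MP3 ("audio/mpeg") support in MSE varies by browser, which is why the isTypeSupported check matters.

    ```javascript
    // Sketch only: runs in a browser, not Node. The URL is a placeholder.
    const audio = document.querySelector("audio");

    if ("MediaSource" in window && MediaSource.isTypeSupported("audio/mpeg")) {
      const mediaSource = new MediaSource();
      audio.src = URL.createObjectURL(mediaSource);

      mediaSource.addEventListener("sourceopen", async () => {
        const sourceBuffer = mediaSource.addSourceBuffer("audio/mpeg");
        const response = await fetch("https://example.com/my-audiofile.mp3");
        const reader = response.body.getReader();

        for (;;) {
          const { done, value } = await reader.read();
          if (done) {
            mediaSource.endOfStream();
            break;
          }
          // You decide when and how much to append -- this is the control
          // you don't get with a plain <audio src="...">.
          sourceBuffer.appendBuffer(value);
          await new Promise((resolve) =>
            sourceBuffer.addEventListener("updateend", resolve, { once: true })
          );
        }
      });
    }
    // else: fall back to the plain <audio> element from the question.
    ```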