I’m looking to stream multiple audio files in .wav format to a Raspberry Pi for synchronous playback. I’m looking to use Python as my language of choice and TCP sockets, although I understand UDP might be necessary for latency. Anyone who can point me in the right direction/give some input would be much appreciated!
There are many ways to approach this. For example, you can write your own Python script that sends audio frames over a socket. But if you just need a stream, you can use a high-level tool like FFmpeg, which gives you several ways to stream audio and video over the network, either over bare UDP or over well-established protocols like RTP or RTSP.
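If you do want to roll your own Python script, here is a minimal sketch of the socket approach: a sender that chunks a WAV file's PCM frames over TCP, and a receiver for the Pi that pipes the incoming bytes into `aplay`. The host, port, and chunk size below are placeholders, and the receiver assumes `aplay` is installed and that the sender's sample format matches the `aplay` flags; treat this as a starting point, not a synchronization solution.

```python
import socket
import subprocess
import wave

CHUNK_FRAMES = 1024  # frames sent per sendall() call (arbitrary choice)


def iter_wav_chunks(path, chunk_frames=CHUNK_FRAMES):
    """Yield raw PCM byte chunks read from a .wav file."""
    with wave.open(path, "rb") as wf:
        while True:
            data = wf.readframes(chunk_frames)
            if not data:
                break
            yield data


def send_wav(path, host="192.168.1.50", port=5000):
    """Open a TCP connection and push the file's PCM frames to the Pi.

    host/port are placeholders for your Pi's address.
    """
    with socket.create_connection((host, port)) as sock:
        for chunk in iter_wav_chunks(path):
            sock.sendall(chunk)


def recv_and_play(port=5000, sample_rate=44100, channels=2):
    """Run on the Pi: accept one connection and pipe PCM into aplay.

    Assumes 16-bit little-endian samples (-f S16_LE) matching the sender's
    WAV files; adjust rate/channels/format to your files.
    """
    with socket.socket() as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        player = subprocess.Popen(
            ["aplay", "-f", "S16_LE",
             "-r", str(sample_rate), "-c", str(channels)],
            stdin=subprocess.PIPE,
        )
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                player.stdin.write(data)
        player.stdin.close()
        player.wait()
```

Note that TCP gives you reliable in-order delivery but unbounded latency under packet loss, which is why UDP (or RTP on top of it) is the usual choice for low-latency audio; for *synchronous* playback across several Pis you would additionally need a shared clock and timestamped frames, which this sketch does not attempt.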
For example, if you want to stream the ALSA device plughw:1,0 to your localhost, it would look something like this:

ffmpeg -f alsa -ac 1 -i plughw:1,0 -acodec libmp3lame -ab 32k -ac 1 -f rtp rtp://localhost:1234
Also, you can use other tools like the ALSA recording utility (arecord) and pipe its output to FFmpeg:

arecord -f cd -D plughw:1,0 | ffmpeg -i - -acodec libmp3lame -ab 32k -ac 1 -f rtp rtp://localhost:1234
Then you can simply play it with ffplay:

ffplay rtp://localhost:1234
You can find more info about FFmpeg's streaming features in its documentation.