My goal is to record for about an hour (or until a certain file size is reached), save the recording, convert it to MP4, push it to an object storage bucket, and then delete the old files (then record the next hour, and so on).
I achieved this by writing a shell script that is executed via the exec_record_done directive of the nginx-rtmp module.
my nginx.conf:
...
# RTMP configuration
rtmp {
    server {
        ...
        application live {
            live on;
            recorder all {
                record all;
                record_path /var/www/html/stream/mp4;
                record_max_size 100000K;
                record_unique on;
                record_suffix _%d%m%Y_%H%M.flv;
                exec_record_done /bin/bash /scripts/cleaner.sh $path $dirname $basename;
            }
            ...
        }
    }
}
http {
    ...
}
my script:
#!/bin/bash
path=$1
dirname=$2
basename=$3
newfile="${dirname}/${basename}.mp4"
ffmpeg -y -i $path -acodec libmp3lame -ar 44100 -ac 1 -vcodec libx264 $newfile &&
rm $path
TIME=$(/bin/date +%Y/%m/%d)&&
rclone sync -P $newfile objstor:bucket/$TIME/ &&
rm $newfile
When I run the script manually, the whole thing works as intended. But when it is run via exec_record_done, the script generates the MP4 file once the recording reaches the maximum file size (as intended) and deletes the FLV (as intended), but then does not push to object storage.
I am stumped as to why it works when I run the script manually but not when it is run via exec_record_done.
I'd add logging to the script to find the problem. For instance, one of your arguments could be empty, or something else could be going wrong with your ffmpeg invocation:
#!/bin/bash
set -eu    # abort on any failed command or unset variable

## log errors to the /tmp/script_errors file
exec 2>> /tmp/script_errors

path="${1}"
dirname="${2}"
basename="${3}"
newfile="${dirname}/${basename}.mp4"

ffmpeg -y -i "${path}" -acodec libmp3lame -ar 44100 -ac 1 -vcodec libx264 "${newfile}"
rm -f "${path}"
TIME="$(date +%Y/%m/%d)"
rclone sync -P "${newfile}" "objstor:bucket/${TIME}/"
rm -f "${newfile}"
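To see why it only fails under nginx, it can also help to trace every command and record the environment the script actually receives. exec_record_done runs the handler from the nginx worker process, so PATH and HOME often differ from your interactive shell, and rclone locates its config file relative to HOME. A minimal debug sketch (log path assumed, same as above):

```shell
#!/bin/bash
# Debug sketch: echo each command as it runs and snapshot the environment,
# so differences between manual runs and nginx-spawned runs become visible.
set -eux                        # -x writes a trace of every command to stderr
exec 2>> /tmp/script_errors     # send that trace to the log file as well

# rclone reads its config relative to HOME; an unexpected HOME or a minimal
# PATH is a common reason a script works manually but not under a daemon.
echo "user=$(id -un) HOME=${HOME:-unset} PATH=${PATH}" >> /tmp/script_errors
```

Comparing the logged user, HOME, and PATH between a manual run and an nginx-triggered run should point at the difference.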