I am new to audio files and their codecs.
I would like to convert a 2-channel mp4 file into separate mono wav files.
My understanding is that when I say 2-channel, the file stores the speech coming from each microphone in a separate channel, and when I split the channels into individual mono wav files, I get the speech of each microphone.
My intention here is to get the speech from each channel and convert it to text. This way I can set the name of the speaker based on the channel.
I tried with ffmpeg and with Python code as well, but unfortunately I get two files with the same content.
Looking at the following details, can someone construct an ffmpeg command or a Python script to convert the 2-channel mp4 file into 2 individual mono wav files?
FFprobe
ffprobe -i Two-Channel.mp4 -show_streams -select_streams a
Result
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
encoder : Google
Duration: 00:52:42.19, start: 0.000000, bitrate: 421 kb/s
Stream #0:0[0x1](und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 640x360 [SAR 1:1 DAR 16:9], 322 kb/s, 25 fps, 25 tbr, 12800 tbn (default)
Metadata:
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
Stream #0:1[0x2](eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 96 kb/s (default)
Metadata:
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
[STREAM]
index=1
codec_name=aac
codec_long_name=AAC (Advanced Audio Coding)
profile=LC
codec_type=audio
codec_tag_string=mp4a
codec_tag=0x6134706d
sample_fmt=fltp
sample_rate=44100
channels=2
channel_layout=stereo
bits_per_sample=0
initial_padding=0
id=0x2
r_frame_rate=0/0
avg_frame_rate=0/0
time_base=1/44100
start_pts=0
start_time=0.000000
duration_ts=139452416
duration=3162.186304
bit_rate=96000
max_bit_rate=N/A
bits_per_raw_sample=N/A
nb_frames=136184
nb_read_frames=N/A
nb_read_packets=N/A
extradata_size=16
DISPOSITION:default=1
DISPOSITION:dub=0
DISPOSITION:original=0
DISPOSITION:comment=0
DISPOSITION:lyrics=0
DISPOSITION:karaoke=0
DISPOSITION:forced=0
DISPOSITION:hearing_impaired=0
DISPOSITION:visual_impaired=0
DISPOSITION:clean_effects=0
DISPOSITION:attached_pic=0
DISPOSITION:timed_thumbnails=0
DISPOSITION:non_diegetic=0
DISPOSITION:captions=0
DISPOSITION:descriptions=0
DISPOSITION:metadata=0
DISPOSITION:dependent=0
DISPOSITION:still_image=0
TAG:language=eng
TAG:handler_name=ISO Media file produced by Google Inc.
TAG:vendor_id=[0][0][0][0]
[/STREAM]
FFmpeg command
ffmpeg -i Two-Channel.mp4 -filter_complex "pan=mono|c0=c0" left_channel.wav
Python code: using FFmpeg I converted the mp4 to wav and then tried the code below.
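The snippet itself isn't shown above; as a rough sketch of that kind of approach, splitting the intermediate wav with pydub could look like this (the file names here are assumptions, not taken from the post):

# Hypothetical sketch: split a stereo wav (already extracted with ffmpeg)
# into two mono wav files with pydub.
from pydub import AudioSegment

stereo = AudioSegment.from_wav("Two-Channel.wav")  # assumed name of the intermediate wav
left, right = stereo.split_to_mono()               # one mono AudioSegment per channel
left.export("left_channel.wav", format="wav")
right.export("right_channel.wav", format="wav")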
Are you sure that your source file really does have each speaker on a separate channel?
I don't see anything wrong from a cursory look at your ffmpeg command, but I didn't try running it.
I've used this command to separate audio channels:
ffmpeg -i Two-Channel.mp4 -map_channel 0.0.0 ch_1.wav -map_channel 0.0.1 ch_2.wav
It's straight from the ffmpeg wiki.
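Note that newer ffmpeg releases mark -map_channel as deprecated in favour of the channelsplit filter. Since the question also asks for a Python script, a minimal sketch that drives channelsplit through subprocess could look like this (the output names ch_1.wav and ch_2.wav are just placeholders):

# Sketch: split the stereo audio track of the mp4 into two mono wav files
# by running ffmpeg's channelsplit filter from Python.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "Two-Channel.mp4",
        "-filter_complex", "[0:a]channelsplit=channel_layout=stereo[left][right]",
        "-map", "[left]", "ch_1.wav",
        "-map", "[right]", "ch_2.wav",
    ],
    check=True,  # raise CalledProcessError if ffmpeg exits with an error
)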