I've loaded a model for testing in a Jupyter Notebook and created a path to the .wav audio file I plan to test:
anger = "C:\\Desktop\\Emotion Speech Recognition\\D_10\\10ANG_XX.wav"
However, when I extract the features using a function from another Python file, I get the error below, and I'm not sure whether I'm reading the audio path correctly or whether the file needs some kind of conversion first. Please see the traceback. Any advice or explanation would help!
P.S. This is my first time doing audio processing. :)
ParameterError Traceback (most recent call last)
<ipython-input-6-d282e0102c33> in <module>
1 # extract features and reshape it
----> 2 features = extract_feature(anger, mfcc=True, chroma=True, mel=True).reshape(1, -1)
~\Desktop\Emotion Speech Recognition\Emotion_Speech_Recognizer.py in extract_feature(file_name, **kwargs)
52
53 if chroma or contrast:
---> 54 stft = np.abs(librosa.stft(floaty_num))
55 new_audio_result = np.array([])
56 if mfcc:
~\anaconda3\lib\site-packages\librosa\core\spectrum.py in stft(y, n_fft, hop_length, win_length, window, center, dtype, pad_mode)
213
214 # Check audio is valid
--> 215 util.valid_audio(y)
216
217 # Pad the time series so that frames are centered
~\anaconda3\lib\site-packages\librosa\util\utils.py in valid_audio(y, mono)
266 if mono and y.ndim != 1:
267 raise ParameterError('Invalid shape for monophonic audio: '
--> 268 'ndim={:d}, shape={}'.format(y.ndim, y.shape))
269
270 elif y.ndim > 2 or y.ndim == 0:
ParameterError: Invalid shape for monophonic audio: ndim=2, shape=(96768, 2)
I just found the solution to my own problem. The .wav file I was using was stereo, which is why librosa's valid_audio check rejected its shape of (96768, 2). I ended up converting it to mono with pydub's AudioSegment and setting it to a single channel. :)
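For anyone hitting the same error, here is a minimal sketch of the conversion I mean; the paths are just examples, adjust them to your own files:

from pydub import AudioSegment

# Example paths (the "_mono" output name is arbitrary)
stereo_path = "C:\\Desktop\\Emotion Speech Recognition\\D_10\\10ANG_XX.wav"
mono_path = "C:\\Desktop\\Emotion Speech Recognition\\D_10\\10ANG_XX_mono.wav"

sound = AudioSegment.from_wav(stereo_path)  # read the original stereo recording
sound = sound.set_channels(1)               # downmix to a single (mono) channel
sound.export(mono_path, format="wav")       # write the mono copy to disk

# mono_path can now be passed to extract_feature(...) without the ParameterError

Alternatively, if the audio is loaded with librosa.load, its mono=True default already downmixes to one channel at load time, so depending on how extract_feature reads the file the separate conversion step may not be needed.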