I have an example that I am working from for decoding MP3 audio with MediaCodec for playback using MediaPlayer and AudioTrack. The example uses getInputBuffers(), which is now deprecated in API 21+. The new getInputBuffer(int index) returns a single buffer instead of an array, yet the API reference for MediaCodec still shows the getInputBuffers() use case.
Can anyone explain how I should go about using the new method? Do I just get index 0 each time? I started looping to collect each buffer into an array myself, but I haven't found anywhere to get the number of available buffers.
You shouldn't try to fetch all the buffers.
Prior to API 21, you'd do ByteBuffer inputs[] = codec.getInputBuffers(). Then index = codec.dequeueInputBuffer() would return a buffer index, you'd use inputs[index], and finally you'd submit the buffer with codec.queueInputBuffer(index, ...).

Notice that you never touch more than one element of inputs[] at a time, and you're only allowed to touch that element between the dequeueInputBuffer and queueInputBuffer calls.
Now, instead of having an array of ByteBuffer objects where you only use one at a time, you're supposed to fetch only the single ByteBuffer you are going to fill. That is, after index = codec.dequeueInputBuffer(), instead of using inputs[index], call codec.getInputBuffer(index).
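The API 21+ version of the same step is almost identical; only the buffer lookup changes (same assumptions as above, again with a hypothetical helper name). Note that getInputBuffer(index) hands back the buffer already cleared, so the explicit clear() isn't needed:

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;
import java.nio.ByteBuffer;

// API 21+: no array; ask the codec for the one buffer that belongs
// to the index you just dequeued.
static boolean feedInputNewStyle(MediaCodec codec, MediaExtractor extractor) {
    int index = codec.dequeueInputBuffer(10_000 /* timeout in microseconds */);
    if (index < 0) {
        return true;  // no input buffer free right now, try again later
    }

    // Only valid to touch between dequeueInputBuffer() and queueInputBuffer();
    // the buffer comes back already cleared.
    ByteBuffer buffer = codec.getInputBuffer(index);
    int size = extractor.readSampleData(buffer, 0);
    if (size < 0) {
        codec.queueInputBuffer(index, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
        return false;
    }
    codec.queueInputBuffer(index, 0, size, extractor.getSampleTime(), 0);
    extractor.advance();
    return true;
}
```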
So basically, the array that getInputBuffers() used to return still exists internally within the MediaCodec object, but you no longer need to keep the full array around; you just fetch the single buffer you're going to use each time.