I have a pretty good understanding of decoding with Android MediaCodec and feeding YUV through a Surface into an OpenGL texture. I would like to do something similar with Vulkan. However I have not been successful in finding any documentation or sample code.
My question is: how would I wire up the following pipeline?
MediaCodec Video Decoder ⇨ Surface ⇨ texture ⇨ Vulkan
Details
OpenGL Comparison
For comparison, in the OpenGL case an Android Surface is constructed and used like so:
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
SurfaceTexture surfaceTexture = new SurfaceTexture(textures[0]);
Surface surface = new Surface(surfaceTexture);
mediaCodec.configure(format, surface, null, 0);
This is currently not possible, as there is no way to import memory objects from outside Vulkan, nor any SDK Vulkan object that can export a Surface. Take a look at VK_KHX_external_memory and related extensions for how parts of this might work in the future.
EDIT 2018-05-23: This is now possible using the VK_ANDROID_external_memory_android_hardware_buffer extension and the extensions it depends on. You can use AImageReader_newWithUsage() to create an AImageReader compatible with GPU sampling. Get the ANativeWindow from that AImageReader and use it as the AMediaCodec's output surface. Then for each image you receive, get the AHardwareBuffer and import that into a VkDeviceMemory/VkImage pair using the extension.
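The steps above can be sketched in NDK C++ roughly as follows. This is an untested outline, not a complete implementation: error checking, image release, and queue-family/layout transitions are omitted; pickMemoryType() is a hypothetical helper; and on some loaders the *ANDROID entry points must be fetched via vkGetDeviceProcAddr.

```cpp
#define VK_USE_PLATFORM_ANDROID_KHR
#include <vulkan/vulkan.h>
#include <android/hardware_buffer.h>
#include <media/NdkImageReader.h>

// Hypothetical helper, assumed to exist elsewhere: picks a memory type
// index out of the bits reported for the imported buffer.
extern uint32_t pickMemoryType(uint32_t memoryTypeBits);

// Import one decoded frame into a VkImage. `device`, `width`, `height`
// are assumed to be set up elsewhere; all error checking is omitted.
void importDecodedFrame(VkDevice device, int32_t width, int32_t height) {
    // 1. Create an AImageReader whose buffers the GPU can sample.
    AImageReader* reader = nullptr;
    AImageReader_newWithUsage(width, height, AIMAGE_FORMAT_PRIVATE,
                              AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE,
                              /*maxImages=*/3, &reader);

    // 2. Its ANativeWindow is the surface to pass to AMediaCodec_configure().
    ANativeWindow* window = nullptr;
    AImageReader_getWindow(reader, &window);
    // ... configure the codec with `window`, then decode ...

    // 3. For each output image received, grab the backing AHardwareBuffer.
    AImage* image = nullptr;
    AImageReader_acquireLatestImage(reader, &image);
    AHardwareBuffer* hwBuffer = nullptr;
    AImage_getHardwareBuffer(image, &hwBuffer);

    // 4. Query allocation size, memory-type bits, and format properties.
    VkAndroidHardwareBufferFormatPropertiesANDROID fmtProps = {
        VK_STRUCTURE_TYPE_ANDROID_HARDWARE_BUFFER_FORMAT_PROPERTIES_ANDROID};
    VkAndroidHardwareBufferPropertiesANDROID bufProps = {
        VK_STRUCTURE_TYPE_ANDROID_HARDWARE_BUFFER_PROPERTIES_ANDROID, &fmtProps};
    vkGetAndroidHardwareBufferPropertiesANDROID(device, hwBuffer, &bufProps);

    // 5. Create the VkImage. Decoder output is usually an "external format"
    //    (fmtProps.format == VK_FORMAT_UNDEFINED), sampled through a
    //    VkSamplerYcbcrConversion created from fmtProps.externalFormat.
    VkExternalFormatANDROID extFormat = {
        VK_STRUCTURE_TYPE_EXTERNAL_FORMAT_ANDROID, nullptr,
        fmtProps.externalFormat};
    VkExternalMemoryImageCreateInfo extMem = {
        VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_IMAGE_CREATE_INFO, &extFormat,
        VK_EXTERNAL_MEMORY_HANDLE_TYPE_ANDROID_HARDWARE_BUFFER_BIT_ANDROID};
    VkImageCreateInfo imageInfo = {VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO};
    imageInfo.pNext = &extMem;
    imageInfo.imageType = VK_IMAGE_TYPE_2D;
    imageInfo.format = fmtProps.format;  // VK_FORMAT_UNDEFINED if external
    imageInfo.extent = {(uint32_t)width, (uint32_t)height, 1};
    imageInfo.mipLevels = 1;
    imageInfo.arrayLayers = 1;
    imageInfo.samples = VK_SAMPLE_COUNT_1_BIT;
    imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
    imageInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT;
    imageInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    imageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
    VkImage vkImage;
    vkCreateImage(device, &imageInfo, nullptr, &vkImage);

    // 6. Import the AHardwareBuffer as a dedicated allocation and bind it.
    VkImportAndroidHardwareBufferInfoANDROID importInfo = {
        VK_STRUCTURE_TYPE_IMPORT_ANDROID_HARDWARE_BUFFER_INFO_ANDROID,
        nullptr, hwBuffer};
    VkMemoryDedicatedAllocateInfo dedicated = {
        VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO, &importInfo,
        vkImage, VK_NULL_HANDLE};
    VkMemoryAllocateInfo alloc = {
        VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO, &dedicated,
        bufProps.allocationSize, pickMemoryType(bufProps.memoryTypeBits)};
    VkDeviceMemory memory;
    vkAllocateMemory(device, &alloc, nullptr, &memory);
    vkBindImageMemory(device, vkImage, memory, 0);
    // vkImage is now backed by the decoder's frame and can be sampled.
}
```

Note the dedicated allocation: the spec requires imported AHardwareBuffer memory to be bound to exactly the image named in VkMemoryDedicatedAllocateInfo, which is why the VkImage is created before the allocation.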