Goal: Stream the Android camera to a Wowza server in the proper orientation regardless of the device's orientation (i.e. the video is always right side up).
I've looked at all the questions on here regarding camera orientation, and so far they all seem to either change only the preview rendered to the screen or set an orientation flag in the MP4 file (not applicable to my use case: streaming).
I'm streaming camera frames to a Wowza server, and the video received on the server is always landscape. This is fine if the phone is always held in the same orientation, but I can't guarantee my users will do that. From what I've gathered, when you grab frames directly from the camera and feed them to the encoder, you get the natural orientation of the device's camera sensor (which may be mounted landscape, as in my case), and this is completely unaffected by the preview. This is problematic because if the device is rotated during the stream, the image rotates with it.
I have tried using an OpenGL matrix to transform the preview in a custom GLSurfaceView.Renderer, but all that does is transform the view on the screen, not the frames sent to the encoder.
I have read through Grafika's examples and am not sure where in the pipeline I need to rotate frames before they are fed to the encoder. I am using a SurfaceTexture as the camera preview, which is then rendered to a GLSurfaceView (with my own custom GLSurfaceView.Renderer).
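My setup is roughly the following (a simplified sketch; drawTexturedQuad() stands in for my actual full-screen quad drawing code):

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class CameraRenderer implements GLSurfaceView.Renderer,
        SurfaceTexture.OnFrameAvailableListener {
    private int mTextureId;
    private SurfaceTexture mSurfaceTexture;
    private final float[] mTexMatrix = new float[16];

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        // Create an external OES texture for the camera to write into.
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        mTextureId = tex[0];
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTextureId);

        mSurfaceTexture = new SurfaceTexture(mTextureId);
        mSurfaceTexture.setOnFrameAvailableListener(this);
        // camera.setPreviewTexture(mSurfaceTexture) happens on the camera thread.
    }

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        mSurfaceTexture.updateTexImage();            // latch the newest camera frame
        mSurfaceTexture.getTransformMatrix(mTexMatrix);
        drawTexturedQuad(mTextureId, mTexMatrix);    // full-screen quad draw
    }

    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        // requestRender() on the GLSurfaceView (RENDERMODE_WHEN_DIRTY).
    }

    private void drawTexturedQuad(int textureId, float[] texMatrix) {
        // Shader setup and quad drawing omitted for brevity.
    }
}
```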
How can I rotate the frames on their way to the encoder?
Ideally I would do this in OpenGL. I had thought about rotating the frames before filling the MediaCodec.dequeueInputBuffer() buffers, but that would be done on the CPU, and I'm wary of that for a real-time application. Maybe I am overlooking something with regard to the preview. I've seen other broadcasting apps tear down the preview layer in the UI and rebuild it whenever the device is rotated.
The camera Activities in Grafika render every frame twice: once for the display, and once for encoding. You want to do the same thing, but throw in a rotation (and make the corresponding adjustment to the MediaCodec / MediaRecorder configuration, e.g. 720x1280 instead of 1280x720).
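For the configuration side, a minimal sketch (plain MediaCodec calls; the bitrate/framerate values and the portraitOutput flag are placeholders, not from Grafika):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

import java.io.IOException;

// Sketch: if the rendered frames will be rotated 90/270 degrees, configure
// the encoder portrait (720x1280) rather than landscape (1280x720).
MediaCodec createEncoder(boolean portraitOutput) throws IOException {
    int width  = portraitOutput ?  720 : 1280;
    int height = portraitOutput ? 1280 :  720;

    MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    Surface inputSurface = encoder.createInputSurface();  // render rotated frames here
    // ... hand inputSurface to the EGL side, then encoder.start() ...
    return encoder;
}
```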
For example, consider the drawFrame() method in ContinuousCapture. It selects the SurfaceView's EGLSurface, draws, then switches to the video encoder's EGLSurface and calls the same draw method. It even throws in a call to drawExtra() just to show that it can.
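Condensed, with drawExtra() and the bookkeeping trimmed, the flow is roughly this (field names as in Grafika; paraphrased, so check the actual file for the exact code):

```java
// Paraphrased from Grafika's ContinuousCapture#drawFrame().
private void drawFrame() {
    // Latch the latest camera frame onto the GLES texture.
    mDisplaySurface.makeCurrent();
    mCameraTexture.updateTexImage();
    mCameraTexture.getTransformMatrix(mTmpMatrix);

    // Pass 1: draw to the on-screen SurfaceView.
    GLES20.glViewport(0, 0, mSurfaceView.getWidth(), mSurfaceView.getHeight());
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    mDisplaySurface.swapBuffers();

    // Pass 2: same draw call, aimed at the video encoder's input surface.
    mEncoderSurface.makeCurrent();
    GLES20.glViewport(0, 0, VIDEO_WIDTH, VIDEO_HEIGHT);
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    mCircEncoder.frameAvailableSoon();
    mEncoderSurface.setPresentationTime(mCameraTexture.getTimestamp());
    mEncoderSurface.swapBuffers();
}
```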
In both cases you're just rendering GLES onto an EGLSurface, and the EGLSurface gets its dimensions from the underlying Surface, which came from either a SurfaceView or a MediaCodec. If you modify FullFrameRect#drawFrame() to take a matrix argument, and pass in a rotation matrix in place of the identity matrix it currently uses, you should get the result you want.
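Concretely, something like this (the three-arg overload is the hypothetical modification; its body paraphrases the existing two-arg version, so verify the mProgram.draw() argument order against the real Texture2dProgram):

```java
// Hypothetical overload added to FullFrameRect: identical to the existing
// drawFrame(), but forwards a caller-supplied MVP matrix where the two-arg
// version passes GlUtil.IDENTITY_MATRIX.
public void drawFrame(int textureId, float[] texMatrix, float[] mvpMatrix) {
    mProgram.draw(mvpMatrix, mRectDrawable.getVertexArray(), 0,
            mRectDrawable.getVertexCount(), mRectDrawable.getCoordsPerVertex(),
            mRectDrawable.getVertexStride(),
            texMatrix, mRectDrawable.getTexCoordArray(), textureId,
            mRectDrawable.getTexCoordStride());
}
```

On the caller side, rotate only the encoder pass and leave the display pass alone:

```java
import android.opengl.Matrix;

float[] rot = new float[16];
Matrix.setRotateM(rot, 0, 90f, 0f, 0f, 1f);   // 90 degrees around the Z axis

mEncoderSurface.makeCurrent();
GLES20.glViewport(0, 0, VIDEO_WIDTH, VIDEO_HEIGHT);   // now portrait, e.g. 720x1280
mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix, rot);
mEncoderSurface.swapBuffers();
```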