
Does the message size of a sensor_msgs Image/CompressedImage change when published/subscribed in ROS-Android?


I need to get the preview image data from an Android phone's camera and publish it over ROS. Here is my sample code:

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    if(data != null){
        Camera.Size size = camera.getParameters().getPreviewSize();
        YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);

        if(yuvImage != null){
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ChannelBufferOutputStream stream = new ChannelBufferOutputStream(MessageBuffers.dynamicBuffer());

            yuvImage.compressToJpeg(new Rect(0, 0, yuvImage.getWidth(), yuvImage.getHeight()), 80, baos);
            yuvImage = null;

            stream.buffer().writeBytes(baos.toByteArray());
            try{
                baos.flush();
                baos.close();
                baos = null;
            }
            catch(IOException e){
                e.printStackTrace();
            }

            // fill a sensor_msgs/CompressedImage message
            sensor_msgs.CompressedImage compressedImage = compressedImagePublisher.newMessage();
            compressedImage.getHeader().setFrameId("xxx");    // frame id

            Time curTime = connectedNode.getCurrentTime();    
            compressedImage.getHeader().setStamp(curTime);    // time

            compressedImage.setFormat("jpeg");                // format
            compressedImage.setData(stream.buffer().copy());  // data

            stream.buffer().clear();
            try {
                stream.flush();
                stream.close();
                stream = null;
            }
            catch (IOException e) {
                e.printStackTrace();
            }

            // publish
            System.out.println("-----Publish: " + compressedImage.getData().array().length + "-----");
            compressedImagePublisher.publish(compressedImage);
            compressedImage = null;
            System.gc();
        }
        else{
            Log.v("Log_Tag", "-----Failed to get yuvImage!-----");
        }
    }
    else{
        Log.v("Log_Tag", "-----Failed to get the preview frame!-----");
    }
}
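
For what it's worth, the intermediate ChannelBufferOutputStream may not be necessary; the JPEG bytes can probably be wrapped directly. This is only a sketch using the Netty 3 ChannelBuffers helper bundled with rosjava (the wrapJpeg helper is just a name I made up, and I have not verified this on every rosjava version):

import java.nio.ByteOrder;
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;

// Sketch only: wrap the JPEG bytes in a little-endian ChannelBuffer and hand
// it to the message directly. rosjava expects little-endian buffers, so the
// byte order is set explicitly here.
private static ChannelBuffer wrapJpeg(byte[] jpegBytes) {
    return ChannelBuffers.wrappedBuffer(ByteOrder.LITTLE_ENDIAN, jpegBytes);
}

// usage inside onPreviewFrame():
//     compressedImage.setData(wrapJpeg(baos.toByteArray()));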

I then subscribed to the same topic, just to check whether the messages had been published completely and correctly, like this:

@Override
public void onStart(ConnectedNode node) {
    this.connectedNode = node;

    // publisher
    this.compressedImagePublisher = connectedNode.newPublisher(topic_name, sensor_msgs.CompressedImage._TYPE);
    // subscriber
    this.compressedImageSubscriber = connectedNode.newSubscriber(topic_name, sensor_msgs.CompressedImage._TYPE);
    compressedImageSubscriber.addMessageListener(new MessageListener<CompressedImage>() {
        @Override
        public void onNewMessage(final CompressedImage compressedImage) {
            byte[] receivedImageBytes = compressedImage.getData().array();
            if(receivedImageBytes != null && receivedImageBytes.length != 0) {
                System.out.println("-----Subscribe(+46?): " + receivedImageBytes.length + "-----");

                // decoding the bitmap from byte[] only works with this strange, seemingly necessary offset
                Bitmap bmp = BitmapFactory.decodeByteArray(receivedImageBytes, offset, receivedImageBytes.length - offset);
                ...    
            }
        }
    });
}

I'm really confused about this offset. It means the size of the image bytes changed after being packaged and published by ROS, and if I don't apply the offset the bitmap fails to decode. Even stranger, the offset value sometimes changes as well.

I don't know why. I read some articles about the JPEG file structure and suspected the extra bytes might be the header information of the JPEG byte messages, but this problem only happens in the ROS-Android scenario.
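
To check that suspicion, a quick diagnostic is to look for the JPEG start-of-image marker (0xFF 0xD8) in the received array. This is only a sketch (findJpegStart is just a helper name I made up), using receivedImageBytes from the subscriber callback above:

// Diagnostic sketch: find where the actual JPEG data starts inside the
// received array. JPEG files begin with the SOI marker 0xFF 0xD8, so an
// index greater than 0 means the leading bytes are not part of the JPEG.
private static int findJpegStart(byte[] bytes) {
    for (int i = 0; i + 1 < bytes.length; i++) {
        if ((bytes[i] & 0xFF) == 0xFF && (bytes[i + 1] & 0xFF) == 0xD8) {
            return i;
        }
    }
    return -1;    // no SOI marker found
}

// usage inside onNewMessage():
//     Log.v("Log_Tag", "JPEG starts at index " + findJpegStart(receivedImageBytes));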

Does anyone have a good idea about this?


OK, I know the question I asked and the problem I described above were badly presented, which is why it got two downvotes. I'm sorry about that, so let me make up for it by giving you more information and stating the problem more clearly.

First, set aside all the code I pasted before. The problem happens in my ROS-Android project. In this project I need to send sensor messages of the CompressedImage type to a ROS server and get the processed image bytes (JPEG format) back via publish/subscribe. In theory the size of the image bytes should stay the same, and in fact it does in my ros-C and ros-C# projects under the same conditions.

In ros-android, however, the received array is bigger! Because of this I can't simply decode a bitmap from the image bytes I subscribed to; I have to skip the extra bytes with an offset into the image byte array. I don't know why this happens in ros-android/ros-java, or what the added bytes are.
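
To see where the extra bytes come from, it helps to print the ChannelBuffer's own bookkeeping next to the raw array length. This is only a diagnostic sketch, using standard Netty 3 ChannelBuffer accessors inside the subscriber callback:

// Diagnostic sketch: the backing array returned by array() can be larger
// than the actual payload, because the buffer may be a view into a bigger
// array. readableBytes() is the payload size the buffer itself reports.
org.jboss.netty.buffer.ChannelBuffer buf = compressedImage.getData();
Log.v("Log_Tag", "array().length  = " + buf.array().length);
Log.v("Log_Tag", "arrayOffset()   = " + buf.arrayOffset());
Log.v("Log_Tag", "readerIndex()   = " + buf.readerIndex());
Log.v("Log_Tag", "readableBytes() = " + buf.readableBytes());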

I can't find the cause in my code, which is why I pasted it in such detail. I really need help! Thanks in advance!

[Logcat screenshot]


Solution

  • Maybe I really should have checked the API before asking here. This would have been a simple question if I had looked at the properties of ChannelBuffer in the API docs, because the arrayOffset property already answers it!


    Well, to be more careful, and because some of you asked for a clarification of my answer, here is a more detailed explanation.

    First of all, I still don't know how the ChannelBuffer packages the image byte array, which means I still don't know why there is an arrayOffset before the array data. Why can't we just get the data directly? Is there some important reason for the arrayOffset, such as safety or efficiency? I don't know; I couldn't find an answer in the API docs. To be honest I'm tired of this question now, so whether you downvote it or not, let's just let it go.

    Back to the subject: the problem can be solved this way:

    int offset = compressedImage.getData().arrayOffset();
    Bitmap bmp = BitmapFactory.decodeByteArray(receivedImageBytes, offset, receivedImageBytes.length - offset);
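
    An alternative that avoids array() and arrayOffset() entirely is to copy just the readable bytes out of the ChannelBuffer. Again only a sketch, but it uses only standard ChannelBuffer calls:

    // Sketch: copy exactly the readable payload instead of poking into the
    // backing array. readBytes() advances the reader index, so do this only
    // once per message.
    org.jboss.netty.buffer.ChannelBuffer buf = compressedImage.getData();
    byte[] jpegBytes = new byte[buf.readableBytes()];
    buf.readBytes(jpegBytes);
    Bitmap bmp = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);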
    

    OK, I still hope that someone with good knowledge of this can tell me why; I'd really appreciate it! If you're as tired of this question as I am, let's just vote to close it; I'd appreciate that too. Thanks anyway!