I'm building software to remotely control radio hardware that is attached to another PC.
I plan to use ZeroMQ for the transport and an RPC-like request-reply pattern on top of it, with different message types representing the operations.
While most of my messages will carry just control and status information, there should be an option to set a blob of data to transmit or to request a blob of data to receive. These data blobs will usually be in the range of 5-10 MB, but it should also be possible to use larger blobs of up to several hundred MB.
For the message format, I found Google Protocol Buffers very appealing because I could define one message type on the transport link that has optional elements for all the commands and responses. However, the protobuf FAQ states that such large messages will negatively impact performance.
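For concreteness, the kind of envelope message I have in mind would look roughly like this (all message and field names are placeholders):

    // Placeholder command sub-messages; the real set would be larger.
    message SetFrequency {
        required uint64 hz = 1;
    }

    message TransmitData {
        required bytes samples = 1;   // the data blob to transmit
    }

    message RadioCommand {
        required uint32 sequence = 1;
        // One optional element per operation; exactly one is set per request.
        optional SetFrequency set_frequency = 2;
        optional TransmitData transmit_data = 3;
    }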
So the question is: how bad would it actually be? What negative effects should I expect? I don't really want to base the whole communication on protobuf only to find out that it doesn't work.
I don't have time to do this for you, but I would browse the Protobuf source code. Better yet, go ahead and write your code using a large bytes field, build protobuf from source, and step through it in a debugger to see what happens when you send and receive large blobs.
From experience, I can tell you that large repeated Message fields are not efficient unless they have the [packed=true] attribute, but that only works for primitive types.
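For reference, packing is declared on the field itself and applies only to repeated scalar fields, e.g.:

    message SampleBlock {
        // Packed: all values are stored in one length-delimited record
        // instead of carrying a tag byte per element.
        repeated sint32 samples = 1 [packed=true];
    }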
My gut feeling is that large bytes fields will be efficient, but this is totally unsubstantiated.
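It is easy to substantiate with a throwaway benchmark, though: serialize and parse a message with one large bytes field and time it. Here Msg and msg.pb.h are hypothetical, generated from something like message Msg { optional bytes payload = 1; }:

    #include <chrono>
    #include <iostream>
    #include <string>
    #include "msg.pb.h"   // hypothetical generated header, see above

    int main() {
        Msg msg;
        msg.set_payload(std::string(10 * 1024 * 1024, 'x'));   // 10 MB blob

        std::string wire;
        auto t0 = std::chrono::steady_clock::now();
        msg.SerializeToString(&wire);
        auto t1 = std::chrono::steady_clock::now();

        Msg parsed;
        parsed.ParseFromString(wire);
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::duration<double, std::milli>;
        std::cout << "serialize: " << ms(t1 - t0).count() << " ms, "
                  << "parse: " << ms(t2 - t1).count() << " ms\n";
    }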
You could also bypass Protobuf for your large blobs:
message BlobInfo {
    required fixed64 size = 1;
    ...
}

message MainFormat {
    ...
    optional BlobInfo blob = 15;   // field number is arbitrary here
}
Then your parsing code looks something like:
...
if (msg.has_blob()) {
    uint64_t size = msg.blob().size();   // declared blob size from the header
    zmqsock.recv(blob_buffer, size);     // raw blob from the next ZeroMQ frame
}
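For completeness, the sending side would then write the header and the raw blob as two frames of one multipart message. A sketch using the same older cppzmq buffer overloads as the recv call above (send_blob is just an illustrative helper):

    #include <string>
    #include <zmq.hpp>
    #include "mainformat.pb.h"   // hypothetical generated header for MainFormat/BlobInfo

    // Ship the protobuf header and the raw blob as two frames
    // of one multipart ZeroMQ message.
    void send_blob(zmq::socket_t& zmqsock, const void* blob, uint64_t blob_size) {
        MainFormat msg;
        msg.mutable_blob()->set_size(blob_size);

        std::string header;
        msg.SerializeToString(&header);

        zmqsock.send(header.data(), header.size(), ZMQ_SNDMORE);  // frame 1: protobuf header
        zmqsock.send(blob, blob_size);                            // frame 2: raw blob bytes
    }

This way protobuf only ever sees the small fixed-size header, and the blob itself moves through ZeroMQ untouched, so its size never affects protobuf performance at all.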