node.js, nestjs, protocol-buffers, grpc, grpc-node

Is there a performance overhead when manually serializing gRPC request & response objects?


My goal is to send data to and receive data from a NodeJS microservice. The problem is that both my request and response objects are quite dynamic: fields with string | number union types, and maps with a set of predefined keys plus some dynamic ones. Converting this into proto3 structures is no easy task, and google.protobuf.Any does not appear to be available through @grpc/proto-loader, so I can't use it.
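To illustrate the kind of shape I mean (the field names here are simplified, made-up examples, not my real schema):

```typescript
// A made-up payload shape along the lines described above: union-typed
// fields plus a map mixing predefined keys with arbitrary runtime keys.
interface DynamicRequest {
  id: string | number;                  // union type, awkward to model in proto3
  attributes: {
    locale: string;                     // predefined key
    region: string;                     // predefined key
    [key: string]: string | number;     // dynamic keys added at runtime
  };
}

const example: DynamicRequest = {
  id: 42,
  attributes: { locale: 'en', region: 'eu', priority: 1, campaign: 'spring' },
};
```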

However, if I send a JSON.stringify-ed payload and JSON.parse it on the other end, everything seems to work: my proto message contains a single string field carrying the manually serialized payload.
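Roughly, the setup looks like this. This is a minimal sketch using @grpc/grpc-js and @grpc/proto-loader; the service name, proto file, and address are placeholders, not my actual ones:

```typescript
// payload.proto (placeholder contents):
//
//   syntax = "proto3";
//   package demo;
//   message JsonPayload { string json = 1; }
//   service Echo { rpc Send (JsonPayload) returns (JsonPayload); }

import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

const packageDefinition = protoLoader.loadSync('payload.proto', { keepCase: true });
const demo = grpc.loadPackageDefinition(packageDefinition).demo as any;

const client = new demo.Echo('localhost:50051', grpc.credentials.createInsecure());

// The dynamic payload is serialized manually; protobuf only ever sees one string field.
const request = { json: JSON.stringify({ id: 42, attributes: { locale: 'en', priority: 1 } }) };

client.Send(request, (err: grpc.ServiceError | null, reply: { json: string }) => {
  if (err) throw err;
  const data = JSON.parse(reply.json); // manual deserialization on the way back
  console.log(data);
});
```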

Is this approach acceptable? I know there must be some overhead in the manual serialization, but it does seem to get the job done. Am I missing something?

Any help is greatly appreciated.


Solution

  • You may want to determine whether you would actually be doing any less JSON parsing if you were able to use another approach. From what I understand of your goal, this seems unlikely (i.e. your JSON parsing needs are probably similar either way).

    One thing to bear in mind is that you're creating a hierarchy of schemas in different definition languages: a protobuf schema and (multiple) JSON schemas. One benefit of protobufs is that messages are typed. When a message is overwhelmingly embedded JSON, it is effectively a string type, which forfeits that benefit (the trade-off is sketched below).

    Again, from what I understand of your question, I'd question whether protobufs/gRPC are the most appropriate technologies for you. Although it's uncommon, you can use gRPC without protobuf. But RPC in general (not just gRPC) benefits from typing, and because you're essentially shipping strings, your methods degenerate to Foo(string) string, which makes RPC (gRPC) seem overwrought.
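To make the typing trade-off concrete, here is a sketch of how a request might look with a typed schema versus the string wrapper. The field names, the oneof, and the map layout are assumptions about the data described in the question, not a claim that this exact schema fits it:

```typescript
// Hypothetical typed schema for the dynamic fields described in the question:
//
//   message Value {
//     oneof kind {                     // models the string | number union
//       string text   = 1;
//       double number = 2;
//     }
//   }
//   message TypedRequest {
//     string id                = 1;    // predefined field
//     map<string, Value> extra = 2;    // predefined plus dynamic keys
//   }

// With @grpc/proto-loader, messages are plain objects, so a typed request
// is structured data (set exactly one member of each oneof):
const typedRequest = {
  id: '42',
  extra: {
    locale: { text: 'en' },
    priority: { number: 1 },
  },
};

// Versus the string-wrapper approach, where the schema says nothing about
// the payload and every consumer has to JSON.parse the blob:
const wrappedRequest = {
  json: JSON.stringify({ id: '42', locale: 'en', priority: 1 }),
};

console.log(typedRequest, wrappedRequest);
```

Either way, the JSON parsing only goes away to the extent that the fields can actually be expressed in the schema, which is the crux of the first point above.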