I am playing back DDS data from a recorded database, and have written a Java program to listen for the data. I am able to receive most of the messages fine, but I am getting some consistent exceptions that look like the following:
PRESCstReaderCollator_storeSampleData:!deserialize
java.lang.IllegalStateException: not enough available space in CDR buffer
at com.rti.dds.cdr.CdrBuffer.checkSize(Unknown Source)
at com.rti.dds.cdr.CdrInputStream.readShortFromBigEndian(Unknown Source)
at com.rti.dds.cdr.CdrInputStream.deserializeAndSetCdrEncapsulation(Unknown Source)
at <my type>.deserialize_key_sample(<my type>TypeSupport.java:456)
at com.rti.dds.topic.TypeSupportImpl.deserialize_key(Unknown Source)
at com.rti.dds.topic.TypeSupportImpl.deserialize_keyI(Unknown Source)
Has anyone seen this or know what might cause this?
EDIT: I should also add that I am currently receiving DDS data replayed from a recorded database using rtireplay. I started receiving this error after dropping in a new replay configuration that I was given. So maybe the question is: what replay configuration settings could affect something like this? I am also posting the obfuscated @key fields from my IDL, as requested:
struct Key1 {
    long long m; //@key
    long long l; //@key
    ...
};

// key members only
struct Key2 {
    Key1 a; //@key
    ...
};

struct MyType {
    Key1 key1; //@key
    Key2 key2; //@key
    ...
};
Although the stack trace is slightly different, I was able to reproduce a similar case with the following output:
Exception in thread "Thread-5" java.lang.IllegalArgumentException: string length (200) exceeds maximum (10)
at com.rti.dds.cdr.CdrInputStream.readString(CdrInputStream.java:364)
at stringStructTypeSupport.deserialize_key_sample(stringStructTypeSupport.java:411)
at com.rti.dds.topic.TypeSupportImpl.deserialize_key(TypeSupportImpl.java:1027)
at com.rti.dds.topic.TypeSupportImpl.deserialize_keyI(TypeSupportImpl.java:965)
PRESCstReaderCollator_storeSampleData:!deserialize
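To see how a mismatched key definition can produce errors like the ones above, here is a small self-contained sketch (plain java.nio, not RTI code; the little-endian encapsulation and the specific field layout are assumptions for illustration). A "writer" serializes the key as one 64-bit integer, while a "reader" whose key definition disagrees misreads the first four bytes as a string length:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class KeyMismatchDemo {
    public static void main(String[] args) {
        // "Writer" side: the key is serialized as a single 64-bit integer
        // (little-endian encapsulation assumed for this toy example).
        ByteBuffer wire = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        wire.putLong(200L);
        wire.flip();

        // "Reader" side with a mismatched key definition: it expects a
        // length-prefixed string, so the first 4 bytes of the long long
        // get misread as the string length.
        int claimedLength = wire.getInt(); // reads 200
        int available = wire.remaining();  // only 4 bytes actually left

        System.out.println("claimed string length: " + claimedLength);
        System.out.println("bytes left in buffer:  " + available);

        // A CDR deserializer hitting this state reports errors like
        // "string length (...) exceeds maximum (...)" or
        // "not enough available space in CDR buffer".
        if (claimedLength > available) {
            System.out.println("deserialization fails here");
        }
    }
}
```

The bogus length depends entirely on whatever bytes happen to sit where the reader expects its own key layout, which is why the reported numbers look arbitrary.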
Note that I am using Connext 5.1.0, which is a bit more verbose in its error messages.
The conditions under which this occurred were the following:

- protocol.serialize_key_with_dispose set to true, meaning that this inconsistent key itself (as opposed to its key hash) is actually used for determining the instance in case of a dispose().
- resource_limits.type_code_max_serialized_length and resource_limits.type_object_max_serialized_length both set to 0. This avoids communication of type information and therefore prevents detection of the inconsistency in the type definition. Older versions did not check for type consistency in the first place, even if these resource_limits were set to non-zero values.
Especially protocol.serialize_key_with_dispose is not commonly changed, and it seems to be the only reason why this deserialize_key function might show up in your stack trace. If you check your rtireplay configuration and find that this particular setting is set to true, then it is highly likely that the scenario described here applies to your case.
The serialize_key_with_dispose setting exists to allow for the case where the first sample ever received for a key value happens to be a dispose. In that case, the instance is not yet known. Normally, the actual key values are not propagated with a dispose, only a hashed key, which might not be good enough to identify which instance the dispose is intended for. Setting this policy to true results in the full key value being propagated with a dispose. It is related to propagate_dispose_of_unregistered_instances. For more details, see Section 6.5.3.5, Propagating Serialized Keys with Disposed-Instance Notifications, of the Connext User's Manual.