I am having a weird issue with the Cassandra node.js driver. When I try to select data from certain partitions, I get this error.
Error: There was an problem while parsing streaming frame, opcode 8
at DriverInternalError.DriverError (C:\Temp\cassandra_test\node_modules\cassandra-driver\lib\errors.js:14:19)
at new DriverInternalError (C:\Temp\cassandra_test\node_modules\cassandra-driver\lib\errors.js:68:30)
at Parser._transform (C:\Temp\cassandra_test\node_modules\cassandra-driver\lib\streams.js:149:16)
at Parser.Transform._read (_stream_transform.js:167:10)
at Parser.Transform._write (_stream_transform.js:155:12)
at doWrite (_stream_writable.js:307:12)
at writeOrBuffer (_stream_writable.js:293:5)
at Parser.Writable.write (_stream_writable.js:220:11)
at Protocol.ondata (_stream_readable.js:556:20)
at emitOne (events.js:96:13)
name: 'DriverInternalError',
stack: 'Error: There was an problem while parsing streaming frame, opcode 8\n at DriverInternalError.DriverError (C:\\Temp\\cassandra_test\\node_modules\\cassandra-driver\\lib\\errors.js:14:19)\n at new DriverInternalError (C:\\Temp\\cassandra_test\\node_modules\\cassandra-driver\\lib\\errors.js:68:30)\n at Parser._transform (C:\\Temp\\cassandra_test\\node_modules\\cassandra-driver\\lib\\streams.js:149:16)\n at Parser.Transform._read (_stream_transform.js:167:10)\n at Parser.Transform._write (_stream_transform.js:155:12)\n at doWrite (_stream_writable.js:307:12)\n at writeOrBuffer (_stream_writable.js:293:5)\n at Parser.Writable.write (_stream_writable.js:220:11)\n at Protocol.ondata (_stream_readable.js:556:20)\n at emitOne (events.js:96:13)',
message: 'There was an problem while parsing streaming frame, opcode 8',
info: 'Represents a bug inside the driver or in a Cassandra host.',
There is some really weird behavior here:

- The error only occurs at certain fetchSizes. E.g., I can consistently get it to error out with a fetch size of 312 or higher, but using a fetch size of 311 works every time.
- For a given table, the limiting fetchSize is consistent. E.g., I can run SELECT * FROM TableX WHERE myKey = value1 or SELECT * FROM TableX WHERE myKey = value2, and both will start to error out at a fetchSize of 240, say. But for TableY, the limit might be 284.

I'm really kind of at a loss here. I would suspect it has something to do with returning too many rows from a single partition, but I am able to get plenty of data from other partitions (e.g., a fetchSize of 20000 works fine on most partitions).
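To pin down the exact threshold without probing by hand, the bisection described above can be written as a small helper. This is a sketch, not driver code: check(fetchSize, cb) is an assumed callback you would wire up yourself (e.g., by running the query at that fetchSize and reporting whether it errored).

```javascript
// Binary-search the smallest failing fetchSize.
// Invariant: `lo` is a fetchSize known to work, `hi` one known to fail.
// `check(fetchSize, cb)` is a hypothetical async probe calling cb(err, failed).
function findFailureThreshold(check, lo, hi, callback) {
  if (hi - lo <= 1) return callback(null, hi); // hi is the smallest failing size
  var mid = Math.floor((lo + hi) / 2);
  check(mid, function (err, failed) {
    if (err) return callback(err);
    if (failed) findFailureThreshold(check, lo, mid, callback); // failure moved down
    else findFailureThreshold(check, mid, hi, callback);        // still works at mid
  });
}
```

Seeding it with a known-good size (e.g., 1) and a known-bad one (e.g., 20000) takes only about 15 probes to find the boundary.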
A simple bit of code to reproduce the issue is:
var cassandra = require('cassandra-driver');
var client = new cassandra.Client({ contactPoints: ['node1', 'node2', 'node3'], keyspace: 'ks' });
var query = 'SELECT * FROM TableX WHERE myKey = :value';
client.eachRow(query, { value: 'someValue' }, { prepare: true, fetchSize: 500 }, function (n, row) {
  console.log(row);
}, function (err) {
  if (err) console.error(err);
});
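Until you can upgrade, one workaround consistent with the observations above is to keep fetchSize safely below the failing threshold and page through results manually via the driver's pageState. This is a sketch; fetchAllRows is a hypothetical helper of my own, not part of the driver API:

```javascript
// Hypothetical helper: collect all rows for a query by paging manually,
// keeping fetchSize below the observed failure threshold.
function fetchAllRows(client, query, params, fetchSize, callback) {
  var rows = [];
  function nextPage(pageState) {
    var options = { prepare: true, fetchSize: fetchSize, pageState: pageState };
    client.execute(query, params, options, function (err, result) {
      if (err) return callback(err);
      rows = rows.concat(result.rows);
      // result.pageState is set when more pages remain.
      if (result.pageState) return nextPage(result.pageState);
      callback(null, rows);
    });
  }
  nextPage(undefined);
}
```

Buffering every row in memory defeats the point of paging for huge partitions, so for those you would process each page's rows inside the callback instead of concatenating.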
As discussed with the OP, this looks like a bug in version 3.1.3 of the driver:
More info: https://datastax-oss.atlassian.net/browse/NODEJS-310
It's now fixed in v3.1.4 of the driver.
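If you're pinned to 3.1.3, upgrading is just a dependency bump (package name as published on npm):

```shell
# Upgrade to a driver version containing the fix (3.1.4 or later).
npm install cassandra-driver@^3.1.4
```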