
Chronicle Queue performance when using ByteBuffer


I'm using Chronicle Queue as a data store that will be written to once but read many, many times. I'm trying to get the best performance (time to read x number of records). My data set (for my test) is about 3 million records, where each record consists of a bunch of longs and doubles. I initially started with the "Highest-level" API, which was obviously slow, then moved to "self-describing" data as mentioned in this Chronicle documentation, and finally to "raw data", which gave the best performance.

Code as below (the corresponding write() code is omitted for brevity):

 public List<DomainObject> read() {
        final ExcerptTailer tailer = _cq.createTailer();
        List<DomainObject> result = new ArrayList<>();

        for (; ; ) {
            try (final DocumentContext ctx = tailer.readingDocument()) {
                Wire wire = ctx.wire();
                if (wire != null) {
                    wire.readBytes(in -> {
                        final long var1 = in.readLong();
                        final int var2 = in.readInt();
                        final double var3 = in.readDouble();
                        final int var4 = in.readInt();
                        final double var5 = in.readDouble();
                        final int var6 = in.readInt();
                        final double var7 = in.readDouble();

                        result.add(DomainObject.create(var1, var2, var3, var4, var5, var6, var7));
                    });
                } else {
                    return result;
                }
            }
        }
 }

However, to improve my application's performance, I started using a ByteBuffer instead of a DomainObject, and thus modified my read method as below:

 public List<ByteBuffer> read() {
        final ExcerptTailer tailer = _cq.createTailer();
        List<ByteBuffer> result = new ArrayList<>();

        for (; ; ) {
            try (final DocumentContext ctx = tailer.readingDocument()) {
                Wire wire = ctx.wire();
                if (wire != null) {
                    ByteBuffer bb = ByteBuffer.allocate(56);
                    wire.readBytes(in -> in.read(bb));
                    result.add(bb);
                } else {
                    return result;
                }
            }
        }
 }

The code listing above took an average of 550 ms, vs. 270 ms for the first listing.

I also tried using Bytes.elasticByteBuffer as mentioned in this post, but it was far slower.

I'm guessing the second code listing is slower because it has to loop through the entire byte array.

So my question is - Is there a more performant way to read bytes from Chronicle Queue into a ByteBuffer? My data will always be 56 bytes with 8 bytes for each data item.
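For concreteness, here is a JDK-only sketch of the fixed 56-byte layout I have in mind (seven 8-byte fields; the FixedRecord class and field accessors are just for illustration, not Chronicle API):

```java
import java.nio.ByteBuffer;

public class FixedRecord {
    static final int RECORD_SIZE = 56; // 7 fields x 8 bytes each

    // Encode one id plus six doubles into a fixed-size record.
    static ByteBuffer encode(long id, double... values) {
        ByteBuffer bb = ByteBuffer.allocate(RECORD_SIZE);
        bb.putLong(id);
        for (double v : values) {
            bb.putDouble(v);
        }
        bb.flip();
        return bb;
    }

    // Absolute gets leave the buffer's position untouched, so the same
    // buffer can be handed to several readers without rewinding.
    static double fieldAt(ByteBuffer bb, int index) {
        return bb.getDouble(index * 8);
    }

    public static void main(String[] args) {
        ByteBuffer bb = encode(42L, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5);
        System.out.println(bb.getLong(0));  // the id field
        System.out.println(fieldAt(bb, 1)); // the first double field
    }
}
```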


Solution

  • I suggest you use Chronicle Bytes instead of a raw ByteBuffer. Chronicle's Bytes class can wrap a ByteBuffer but is much easier to use. The problem with your code is that you create a bunch of objects instead of stream-processing the data. I suggest you read with something like:

    public void read(Consumer<Bytes> consumer) {
        final ExcerptTailer tailer = _cq.createTailer();
    
        for (; ; ) {
            try (final DocumentContext ctx = tailer.readingDocument()) {
    
                if (ctx.isPresent()) {
                    consumer.accept(ctx.wire().bytes());                    
                } else {
                    break;
                }
            }
        }
    }
    

    And your writing method could look like:

    public void write(BytesMarshallable o) {
        try (DocumentContext dc = _cq.acquireAppender().writingDocument()) {
            o.writeMarshallable(dc.wire().bytes());
        }
    }
    

    And then your consumer could be like:

    private BytesMarshallable reusable = new BusinessObject(); //your class here
    
    public void accept(Bytes b) {
        reusable.readMarshallable(b);
        // your business logic here
        doSomething(reusable);
    }