Tags: java, memory, benchmarking, deep-copy

Is this the right approach to do a deep copy in Java for benchmarking memory?


Since System.arraycopy() and clone() only perform shallow copies, I wonder whether this approach would work for doing a deep copy.

ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(bos);
long x = System.nanoTime();
oos.writeObject(fromArray);
oos.flush();
ByteArrayInputStream bin = new ByteArrayInputStream(bos.toByteArray());
ObjectInputStream ois = new ObjectInputStream(bin);
Object o = ois.readObject();
double timeTaken = (System.nanoTime() - x) / 1000000000.0;

1) Will the variable timeTaken give me the actual time to do a deep copy?

2) If I pass data, say an array of size 1 MB, like

byte[] fromArray = new byte[1024*1024];

and calculate throughput in MB/s like

double throughput = 1 / timeTaken;

will it be reasonable to consider this a memory-benchmarking throughput?


Solution

  • I wonder if this approach would work for doing a deep copy.

    It will work.[1] It is not the most efficient way to implement deep copying, though. (Implementing the deep copy by hand is probably an order of magnitude faster.)
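To illustrate what "by hand" means here, this is a minimal sketch of an explicit deep copy: allocate a new container and clone each element individually, with no serialization involved. The `Point` class and the `deepCopy` helper are hypothetical names invented for this example, not anything from the question.

```java
// Hypothetical value class used to demonstrate a hand-written deep copy.
class Point {
    int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    Point copy() { return new Point(x, y); }   // copies the fields, not the reference
}

public class DeepCopyByHand {
    // Deep-copies an array of Points: a new array is allocated and
    // every element is duplicated, so the copy shares no mutable state
    // with the original.
    static Point[] deepCopy(Point[] src) {
        Point[] dst = new Point[src.length];
        for (int i = 0; i < src.length; i++) {
            dst[i] = (src[i] == null) ? null : src[i].copy();
        }
        return dst;
    }

    public static void main(String[] args) {
        Point[] from = { new Point(1, 2), new Point(3, 4) };
        Point[] to = deepCopy(from);
        to[0].x = 99;                    // mutating the copy...
        System.out.println(from[0].x);   // ...leaves the original intact: prints 1
    }
}
```

This avoids the byte-stream round trip entirely, which is where most of the serialization approach's cost goes.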

    Will the variable timeTaken give me the actual time to do a deep copy?

    It depends. If the JVM has been suitably warmed up, then this should give an accurate measure. But it is a measure of this particular way of doing a deep copy ... not of deep copying in general. (And see above ...)
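To make the warm-up point concrete, here is a sketch of how the question's snippet could be measured after the JIT has had a chance to compile the hot paths. The class name, warm-up count, and run count are all arbitrary choices for illustration; a real benchmark would use a harness such as JMH.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationCopyBench {
    // One serialization round trip, as in the question's snippet.
    static Object copyViaSerialization(Serializable obj) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] fromArray = new byte[1024 * 1024];   // 1 MB, as in the question

        // Warm-up: run the copy untimed so the JIT compiles it first.
        for (int i = 0; i < 50; i++) {
            copyViaSerialization(fromArray);
        }

        // Timed runs: average over several iterations to reduce noise.
        int runs = 20;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            copyViaSerialization(fromArray);
        }
        double secsPerCopy = (System.nanoTime() - start) / 1e9 / runs;
        System.out.printf("~%.1f MB/s%n", 1.0 / secsPerCopy);
    }
}
```

Timing only the first, un-warmed iteration (as the original snippet does) largely measures class loading and interpretation, not the steady-state cost of the copy.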

    Will it be reasonable to consider this as a memory benchmarking throughput?

    No. The work involved in object serialization and deserialization is far too heterogeneous to be considered a valid proxy for memory-system performance.

    As a comment suggested, you would be better off bulk-copying data from one array to another using System.arraycopy. Better still, do your benchmarking in a language that is "closer to the metal"; e.g. in C or assembly language.
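A rough sketch of that suggestion: time repeated System.arraycopy calls on the 1 MB array after a warm-up pass. The class name and iteration counts are illustrative, and even this is only a crude proxy; the `sink` variable exists to keep the JIT from eliminating the copy as dead code.

```java
public class ArrayCopyBench {
    // Copies src into a fresh array with System.arraycopy and returns it.
    static byte[] bulkCopy(byte[] src) {
        byte[] dst = new byte[src.length];
        System.arraycopy(src, 0, dst, 0, src.length);
        return dst;
    }

    public static void main(String[] args) {
        byte[] fromArray = new byte[1024 * 1024];   // 1 MB

        // Warm-up so the JIT compiles bulkCopy before we time it.
        for (int i = 0; i < 1000; i++) bulkCopy(fromArray);

        int runs = 1000;
        long sink = 0;                 // consume each result so the copy isn't optimized away
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) sink += bulkCopy(fromArray)[0];
        double secsPerCopy = (System.nanoTime() - start) / 1e9 / runs;

        System.out.printf("~%.0f MB/s (sink=%d)%n", 1.0 / secsPerCopy, sink);
    }
}
```

Note that a byte[] has no object graph, so for this data a "shallow" arraycopy already is a complete copy, which is exactly why it isolates raw memory bandwidth better than serialization does.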


    [1] I'm assuming that the object graph you are copying is fully serializable.