
Why does AES code in Crypto++ give different performance results?


I am trying to test the performance of AES encryption, but whenever I run the code it gives different results. Why? Here's the code in C++ using Crypto++:

    int main(int argc, char* argv[])
    {
        AutoSeededRandomPool prng;

        byte key[AES::DEFAULT_KEYLENGTH];
        prng.GenerateBlock(key, sizeof(key));

        byte iv[AES::BLOCKSIZE];
        prng.GenerateBlock(iv, sizeof(iv));

        CBC_Mode< AES >::Encryption e;
        e.SetKeyWithIV(key, sizeof(key), iv);

        CBC_Mode< AES >::Decryption d;
        d.SetKeyWithIV(key, sizeof(key), iv);

The timing code is here:

        clock_t startTime, finishTime;
        std::string plain = "AES CBC Test";
        std::string cipher, encoded, recovered;

        startTime = clock();
        try
        {
            // The StreamTransformationFilter adds
            // padding as required.
            StringSource s(plain, true,
                new StreamTransformationFilter(e,
                    new StringSink(cipher)
                ) // StreamTransformationFilter
            ); // StringSource
        }
        catch(const CryptoPP::Exception& ex)
        {
            std::cerr << ex.what() << std::endl;
            exit(1);
        }

        // save the current time just after finishing the encryption
        finishTime = clock();

and the code that reports the results is here:

        double executionTimeInSec = double( finishTime - startTime ) / CLOCK_TICKS_PER_SECOND;

        std::cout << "Encryption loop execution time: " << executionTimeInSec * 1000.0 << " microseconds." << std::endl;

        std::cout << "Plain text size: " << plain.size() << " bytes." << std::endl;

        double data_rate_MiBps = ((double)plain.size() / 1048576) / ((double)executionTimeInSec);

        std::cout << "Encryption/decryption loop execution time MB/S: " << data_rate_MiBps << " MB/S." << std::endl;

        return 0;
    }

Timing an unoptimized debug build. Result of run 1:

Encryption loop execution time: 0.041 microseconds.

Result of run 2:

Encryption loop execution time: 0.057 microseconds.


Solution

  • 0.041 microseconds is too short a timeframe to test in. To get a reliable measure you need to perform many iterations of your test and then divide the total time by the number of iterations you ran (see the sketch after the list below).

    When measuring such short time frames, many factors can distort your timings:

    1. The resolution of the clock on your system might not be high enough, causing relatively large jumps in your measurements.
    2. Your timing only measures elapsed time, not the actual time spent running on the CPU. If the OS assigns the CPU to something else during one run but not another, that introduces big swings in the measurements. Running many iterations smooths out this random impact and removes the effect of chance.
    3. Etc.
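
For example, here is a minimal sketch of the loop-based approach. It uses std::chrono::steady_clock instead of clock(); the iteration count and plaintext size are arbitrary choices for illustration, and the Crypto++ headers are assumed to be installed under <cryptopp/...>:

    #include <chrono>
    #include <iostream>
    #include <string>

    #include <cryptopp/aes.h>
    #include <cryptopp/filters.h>
    #include <cryptopp/modes.h>
    #include <cryptopp/osrng.h>

    int main()
    {
        using namespace CryptoPP;

        AutoSeededRandomPool prng;

        byte key[AES::DEFAULT_KEYLENGTH];
        prng.GenerateBlock(key, sizeof(key));

        byte iv[AES::BLOCKSIZE];
        prng.GenerateBlock(iv, sizeof(iv));

        CBC_Mode< AES >::Encryption e;

        const std::string plain(1024 * 1024, 'A');   // 1 MiB per iteration (arbitrary size)
        const std::size_t iterations = 100;          // repeat to smooth out noise (arbitrary count)

        auto start = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < iterations; ++i)
        {
            e.SetKeyWithIV(key, sizeof(key), iv);    // reset key/IV and CBC state each round
            std::string cipher;
            StringSource ss(plain, true,
                new StreamTransformationFilter(e,
                    new StringSink(cipher)
                )
            );
        }
        auto finish = std::chrono::steady_clock::now();

        double totalSec   = std::chrono::duration<double>(finish - start).count();
        double perIterSec = totalSec / iterations;   // average time per encryption
        double mib        = double(plain.size()) / (1024.0 * 1024.0);

        std::cout << "Average time per iteration: " << perIterSec * 1000.0 << " ms" << std::endl;
        std::cout << "Throughput: " << (mib / perIterSec) << " MiB/s" << std::endl;
        return 0;
    }

Averaging over many iterations and using a larger plaintext makes the per-iteration time large enough that clock resolution and scheduling noise mostly cancel out, so the run-to-run variation should shrink accordingly.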