Tags: c++, loops, memset

Why do loops calling memset() 1M times and 10M times take the same amount of time?


Here is my code:

#include <iostream>
#include <sys/time.h>
#include <string.h>

using namespace std;

int main()
{
    char* a = (char*)malloc(1024);
    int times = 10000000;
    struct timeval begin, end;
    gettimeofday(&begin, NULL);
    for(int i=0; i<times; i++){
        memset(a, 1, 1024);
    }
    gettimeofday(&end, NULL);
    cout << end.tv_sec - begin.tv_sec << "." << end.tv_usec - begin.tv_usec << endl;
    return 0;
}

When I set times to 1M, the output is about 0.13 seconds; however, when I set times to 10M, the output is still about 0.13 seconds. What causes this? Is it caused by an optimisation in Linux or in the compiler?


Solution

  • UPDATE: optimisation disabled

    I think you need to use the more precise &lt;chrono&gt; clock instead of time.h, and disable compiler optimisations for the benchmarked function (otherwise the compiler sees that the buffer is never read and removes the memset loop entirely as a dead store):

    #include <iostream>
    #include <string.h>
    #include <chrono>
    #include <cstdlib>   // malloc/free
    #include <cstdint>   // uint64_t
    
    #ifdef __GNUC__
        #ifdef __clang__
            static void __attribute__((optnone)) test() {
        #else
            static void __attribute__((optimize("O0"))) test() {
        #endif
    #elif defined(_MSC_VER)
        #pragma optimize( "", off )
            static void test() {
    #else
        #warning Unknown compiler!
            static void test() {
    #endif
    
        char* a = (char*) malloc(1024);
    
        auto start = std::chrono::steady_clock::now();
        for(uint64_t i = 0; i < 1000000; i++){
            memset(a, 1, 1024);
        }
        std::cout << "Finished in "
                  << std::chrono::duration<double, std::milli>(
                         std::chrono::steady_clock::now() - start).count()
                  << " ms" << std::endl;
        free(a);
    }
    
    #ifdef _MSC_VER
        #pragma optimize("", on)
    #endif
    
    int main() {
        test();
    
        return 0;
    }
    

    1M:  Finished in 26.3928
    10M: Finished in 259.851