Tags: c++, arrays, circular-buffer

Efficient circular buffer in C++ that will be passed to a C-style array function parameter


I'm seeking advice about my approach to the following problem. I have a constant stream of input data that I need to add to my buffer, and at every iteration I need to pass the buffered data to a function that accepts a C-style array through a pointer.

I'm worried about efficiency, so I pondered how I could store and manage the data in some sort of circular buffer, yet also get it as sequential raw data to pass to said function.

My current approach can be summarized in the following example:

#include <iostream>
#include <array>
#include <algorithm>

// Stand-in for the real consumer: expects contiguous data via a raw pointer
void foo(double* arr, int size)
{
  for (int k = 0; k < size; k++)
    std::cout << arr[k] << ", ";

  std::cout << std::endl;
}

int main()
{
  const int size = 20;
  std::array<double, size> buffer{};

  for (double data = 0.0; data < 50.0; data += 1.0)
  {
      // Shift everything left by one slot, dropping the oldest sample...
      std::move(std::next(std::begin(buffer)), std::end(buffer), std::begin(buffer));
      // ...and append the newest sample at the end
      buffer.back() = data;

      foo(buffer.data(), size);
  }
}

In the real use case, the buffer also needs to be pre-padded to a "const" size of data at the beginning (I use quotes here because the size may or may not be known at compile time, but once it is known, it will never change).

I store the data in a std::array (or a std::vector if the size is not known at compile time), since the data is then sequential in memory. When I need to insert new data, I use a forward std::move to shift everything one position, then overwrite the last item manually. Finally, I just pass std::array::data() and the size to the function.

While at first glance this should work efficiently, reason tells me that because the data is stored sequentially, the whole buffer will still be copied on every std::move (for a trivially copyable type like double, a move is just a copy), so each insert is O(n).
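
To make the trade-off concrete, here is roughly what a textbook ring buffer with a head index would look like (just a sketch; RingBuffer and unwrap are illustrative names, not code I'm actually using). Insertion becomes O(1), but the logical sequence then wraps around in memory, so every call to foo would still need an O(n) copy to unwrap it first:

#include <array>
#include <algorithm>
#include <cstddef>

// Sketch for comparison only; names are illustrative
template <typename T, std::size_t N>
struct RingBuffer
{
  std::array<T, N> storage{};
  std::size_t head = 0;  // index of the oldest element

  // O(1) insert: overwrite the oldest element and advance the head
  void push(const T& value)
  {
    storage[head] = value;
    head = (head + 1) % N;
  }

  // O(n) readout: the logical sequence wraps, so a contiguous copy is needed
  void unwrap(T* out) const
  {
    std::copy(storage.begin() + head, storage.end(), out);
    std::copy(storage.begin(), storage.begin() + head, out + (N - head));
  }
};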

The real buffer size will probably be only in the hundreds, and data arrives at 100 Hz at most, but the problem is that I need the result of the called function as soon as possible, so I don't want to lose time on buffer management (even if we are talking about a few milliseconds, or less than one). I have many questions about this, but the short list is the following:

  • Is my approach too naive?
  • Is my reasoning about O(n) correct?
  • Are there any other pitfalls with this approach?
  • Do you have suggestions for some other approach that I should look into?

Solution

  • Thank you for the answer, Werner. When I run this solution on Repl.it, I get:

    it took an average of 21us and a max of 57382us
    

    For comparison, my original idea with the same buffer size has the following result:

    it took an average of 19us and a max of 54129us
    

    This means that my initial approach was indeed naive :)
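
    (The timing harness is omitted from the snippets; a minimal std::chrono sketch that produces output in this format would look something like the following, with the iteration count and the measured body as placeholders:)

    #include <chrono>
    #include <cstdio>

    int main()
    {
      using clock = std::chrono::steady_clock;
      const int iterations = 100000;  // placeholder sample count
      long long total_us = 0, max_us = 0;

      for (int i = 0; i < iterations; i++)
      {
        const auto start = clock::now();

        // ... one buffer update + foo() call of the variant under test ...

        const long long us = std::chrono::duration_cast<std::chrono::microseconds>(
            clock::now() - start).count();
        total_us += us;
        if (us > max_us) max_us = us;
      }

      std::printf("it took an average of %lldus and a max of %lldus\n",
                  total_us / iterations, max_us);
    }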

    In the meantime, while waiting for the answer, I've come up with the following solution:

    #include <iostream>
    #include <array>

    void foo(double* arr, int size)
    {
      for (int k = 0; k < size; k++)
        std::cout << arr[k] << ", ";

      std::cout << std::endl;
    }

    int main()
    {
      const int buffer_size = 20;
      // Twice the storage actually needed: each sample is written to two
      // slots, buffer_size apart, so the newest buffer_size samples are
      // always available as one contiguous, in-order window.
      std::array<double, buffer_size*2> buffer{};
      int buffer_idx = buffer_size;

      for (double data = 0.0; data < 100.0; data += 1.0)
      {
        buffer.at(buffer_idx - buffer_size) = data;  // mirror copy, lower half
        buffer.at(buffer_idx++) = data;              // primary copy, upper half

        // The newest buffer_size samples start at buffer_idx - buffer_size
        foo(buffer.data() + buffer_idx - buffer_size, buffer_size);

        // Typewriter-style wrap: jump back once the upper half is full
        buffer_idx -= buffer_size * (buffer_idx == buffer_size * 2);
      }
    }
    

    Since the size of the buffer is not a problem, I allocate twice the memory needed and insert each sample at two places, offset by the buffer size. When I reach the end, I just jump back like a typewriter carriage. The idea is that I fake the circular buffer by storing one extra copy of the data, so the consumer can read it as if it had crossed the full circle.

    For a buffer size of 50000, this gives me the following result, which is exactly what I wanted:

    it took an average of 0us and a max of 23us
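
    If this needs to be reused, the same double-write trick could be wrapped in a small class. Here is a sketch of one way to do it (MirroredBuffer is an illustrative name; std::vector stands in for std::array so the size can be chosen at run time, as mentioned in the question):

    #include <vector>

    // Each sample is written twice, `size` slots apart, so the newest
    // `size` samples always form one contiguous, in-order window.
    class MirroredBuffer
    {
    public:
      explicit MirroredBuffer(int size)
        : data_(size * 2, 0.0),  // zero-padded up front, as in the question
          size_(size), idx_(size) {}

      void push(double value)
      {
        data_[idx_ - size_] = value;  // mirror copy in the lower half
        data_[idx_++] = value;        // primary copy in the upper half
        if (idx_ == size_ * 2)
          idx_ = size_;               // typewriter-style wrap
      }

      // Contiguous view of the newest `size` samples, oldest first
      double* window() { return data_.data() + idx_ - size_; }
      int size() const { return size_; }

    private:
      std::vector<double> data_;
      int size_;
      int idx_;
    };

    The per-sample loop body then reduces to buf.push(data); foo(buf.window(), buf.size());.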