
Are there drawbacks of using C++ containers and smart pointers instead of C style arrays as buffers?


I have often seen that when a C++ function wants to return a buffer of data, the caller has to provide a pointer to the first element of the buffer and the element count as function arguments. Sometimes this makes sense, as the return value of the function is used to communicate its result. But can't we just create the collection of data within the function using an STL container like std::vector or std::array and return a pointer to that data?

Consider the BCryptGenRandom() function in the CNG API:

NTSTATUS BCryptGenRandom(
  [in, out] BCRYPT_ALG_HANDLE hAlgorithm,
  [in, out] PUCHAR            pbBuffer,
  [in]      ULONG             cbBuffer,
  [in]      ULONG             dwFlags
);

I can call this function either using "modern C++" features:

//Method 1
#include <iostream>
#include <cstdint>
#include <memory>
#include <array>
#include <exception>

#include <Windows.h>
#include <bcrypt.h>

#define NT_SUCCESS(Status)  (((NTSTATUS)(Status)) >= 0)

template <ULONG N>
std::shared_ptr<std::array<BYTE, N>> GenRandom()
{
    std::shared_ptr<std::array<BYTE, N>> pBuffer{ new std::array<BYTE, N> };
    if (!NT_SUCCESS(BCryptGenRandom(NULL, pBuffer->data(), pBuffer->size(), BCRYPT_USE_SYSTEM_PREFERRED_RNG)))
    {
        throw std::exception();
    }
    
    return pBuffer;
}

int main() {
    const int size{ 64 };
    std::shared_ptr<std::array<BYTE, size>> arr{};
    try
    {
        arr = GenRandom<size>();
    }
    catch (const std::exception& ex)
    {
        return 1;
    }

    for (int i{ 0 }; i < arr->size(); i++)
    {
        std::cout << std::hex << static_cast<std::uint16_t>((*arr)[i]);
    }
    return 0;
}

or in the old-fashioned way that appears in most documentation and tutorials:

//Method 2
int main() {
    const int size{ 64 };
    BYTE buffer[size]{};
    if (!NT_SUCCESS(BCryptGenRandom(NULL, buffer, size, BCRYPT_USE_SYSTEM_PREFERRED_RNG)))
    {
        return 1;
    }

    for (int i{ 0 }; i < size; i++)
    {
        std::cout << std::hex << static_cast<std::uint16_t>(buffer[i]);
    }
    return 0;
}

Even though the second method seems simpler and more straightforward, I always feel that the first method offers better encapsulation and error handling, with smart pointers taking care of freeing the dynamically allocated memory (for example, when I use this GenRandom() function to generate arbitrary-size random values in different places in the codebase).

So, does the first method have any significant drawbacks in terms of performance, compatibility, or something else I have missed, compared to the second method when working with buffers?


Solution

  • The purpose of smart pointers is to replace manual memory management (new/delete, or malloc/free). If it's a viable option to write

    BYTE buffer[64] {};
    

    ... then using smart pointers (especially std::shared_ptr) is completely pointless and does nothing more than waste performance. The compiler already manages the memory for you automatically, and wrapping it in a smart pointer wouldn't make this code any better.
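    If heap allocation is genuinely required (for example, because the size is only known at run time), std::unique_ptr is usually a better default than std::shared_ptr: it expresses single ownership with no control block and no atomic reference counting. A minimal sketch, independent of the Windows API (the BYTE alias is a stand-in for the Windows typedef):

    ```cpp
    #include <cstddef>
    #include <memory>

    using BYTE = unsigned char; // stand-in for the Windows typedef

    // Single-owner heap buffer: freed automatically when the unique_ptr
    // goes out of scope, with none of shared_ptr's reference-counting cost.
    std::unique_ptr<BYTE[]> MakeBuffer(std::size_t n) {
        return std::make_unique<BYTE[]>(n); // value-initialized to zero
    }
    ```

    You would then pass buf.get() and the size to the API, and the memory is released when buf leaves scope.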

    However, if you want a modern alternative to C-style arrays, there is std::array:

    std::array<BYTE, size> buffer{};
    

    You can also keep your C++ wrapper code and return std::array by value instead of std::shared_ptr<std::array>. If you really wanted a std::shared_ptr to an array, the canonical way would be to use the array specialization anyway: std::shared_ptr<BYTE[]>.
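    A sketch of both suggestions. The FillRandom helper below is a hypothetical, portable stand-in for BCryptGenRandom (it just writes a deterministic pattern) so the code compiles anywhere:

    ```cpp
    #include <array>
    #include <cstddef>
    #include <memory>

    using BYTE = unsigned char; // stand-in for the Windows typedef

    // Hypothetical stand-in for BCryptGenRandom, for illustration only.
    void FillRandom(BYTE* p, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            p[i] = static_cast<BYTE>(i * 37 + 11);
    }

    // Wrapper returning std::array by value: no heap allocation, no
    // reference counting; NRVO makes the return cheap.
    template <std::size_t N>
    std::array<BYTE, N> GenRandom() {
        std::array<BYTE, N> buffer{};
        FillRandom(buffer.data(), buffer.size());
        return buffer;
    }

    // If shared ownership of a heap buffer is genuinely needed, use the
    // array specialization std::shared_ptr<BYTE[]> (C++17).
    std::shared_ptr<BYTE[]> GenRandomShared(std::size_t n) {
        std::shared_ptr<BYTE[]> buffer(new BYTE[n]{});
        FillRandom(buffer.get(), n);
        return buffer;
    }
    ```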


    Note: If buffer is particularly large and you're afraid of stack overflow, you can also make it static or thread_local. For 64 bytes, this shouldn't be a problem.
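    As an illustration of the note above, a sketch of the thread_local variant for a buffer too large for comfortable stack use (the 1 MiB size is chosen arbitrarily here):

    ```cpp
    #include <array>

    using BYTE = unsigned char; // stand-in for the Windows typedef

    // One buffer per thread, with static storage duration: it lives
    // outside the stack frame, so even a large size cannot overflow
    // the stack, and concurrent threads do not share it.
    std::array<BYTE, 1 << 20>& ScratchBuffer() {
        thread_local std::array<BYTE, 1 << 20> buffer{};
        return buffer;
    }
    ```

    Each call on the same thread returns a reference to the same object, so the buffer is reused rather than reallocated.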

    See also: What are the advantages of using std::array over C-style arrays?