I am almost sure that this cannot be done, but I will ask anyway. I have to use a C-based library that defines a numeric vector as an array of floats, together with many arithmetic functions that operate on them. I want to create a trivial class that can easily be cast to that type, with the addition of useful operators. Let's look at an MWE:
#include <iostream>
#include <cstddef> // std::size_t

using vector_type = float[3];

class NewType
{
public:
    float& operator [](std::size_t i) { return v[i]; }
    const float& operator [](std::size_t i) const { return v[i]; }
    operator vector_type& () { return v; }
    vector_type* operator & () { return &v; }
private:
    vector_type v;
};

int main()
{
    NewType t;
    t[0] = 0.f; t[1] = 1.f; t[2] = 2.f;

    const vector_type& v = t;
    std::cout << "v(" << v[0] << "," << v[1] << "," << v[2] << ")" << std::endl;

    return 0;
}
This works flawlessly. The problem arises when we start using arrays. Let's write a new main function:
int main()
{
    constexpr std::size_t size = 10;

    vector_type v1[size];        // OK
    NewType v2[size];            // OK
    vector_type* v3 = v2;        // No way: NewType* cannot be
                                 // converted to float (*)[3]
    vector_type* v4 =
        reinterpret_cast<vector_type*>(v2); // OK

    return 0;
}
The reinterpret_cast works, but it makes the code less readable and the conversion between vector_type and NewType not transparent.
As far as I know, the C++11 and C++14 standards make it impossible to have the NewType class implicitly convertible when arrays are involved. Is that completely true? Are there any caveats that would allow this conversion?
P.S.: please do not start commenting about the risks of using reinterpret_cast and so on. I am aware of the risks, I know that the compiler could add some padding, and I already have some static_assert checks in place to avoid memory problems.
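For reference, layout checks of the kind I mention can look roughly like this (a sketch; the exact conditions in my real code may differ):

```cpp
#include <cstddef>
#include <type_traits>

using vector_type = float[3];

class NewType
{
public:
    float& operator [](std::size_t i) { return v[i]; }
    const float& operator [](std::size_t i) const { return v[i]; }
private:
    vector_type v;
};

// Same size and alignment as the raw array type...
static_assert(sizeof(NewType) == sizeof(vector_type),
              "unexpected padding in NewType");
static_assert(alignof(NewType) == alignof(vector_type),
              "unexpected alignment change in NewType");

// ...no extra padding introduced across array elements...
static_assert(sizeof(NewType[7]) == sizeof(vector_type[7]),
              "arrays of NewType are not laid out like vector_type arrays");

// ...and a layout the compiler is not allowed to rearrange.
static_assert(std::is_standard_layout<NewType>::value,
              "NewType must be standard-layout");
```

If any of these fail, the reinterpret_cast trick is definitely unsafe; if they pass, it works in practice on common compilers, even though the standard still technically frowns on it.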
[Edit] To make the problem easier to understand, let's look at a different example:
struct original_vector
{
    float x;
    float y;
    float z;
};

class NewType : public original_vector
{
public:
    /* Useful functions here */
};
If this were my case, everything would be easy! The type used in the C library would be original_vector, and I could create a derived class and add any sort of method to it. The problem is that, in my real case, original_vector is not a class/struct but a raw array, and obviously I cannot inherit from it. Maybe now the reason I am asking this question is clearer. ;)
I don't think this is the best possible solution, but it is the best I can come up with using C++14 capabilities. Maybe, if runtime-sized member allocation is introduced in a future standard (the proposal for C++14 was rejected), a better solution will become possible. But for now...
#include <iostream>
#include <memory>
#include <cassert>
#include <cstddef>

using vector_type = float[3];

class NewType
{
public:
    float& operator [](std::size_t i) { return v[i]; }
    const float& operator [](std::size_t i) const { return v[i]; }
    operator vector_type& () { return v; }
    vector_type* operator & () { return &v; }
private:
    vector_type v;
};

class NewTypeArray
{
public:
    NewTypeArray() : size(0), newType(nullptr) {}
    NewTypeArray(std::size_t size) : size(size) { assert(size > 0); newType = new NewType[size]; }
    ~NewTypeArray() { delete[] newType; } // delete[] on nullptr is a no-op

    // Copying would double-delete the buffer; forbid it (move
    // constructor and assignment can be added, as in my real-case code).
    NewTypeArray(const NewTypeArray&) = delete;
    NewTypeArray& operator=(const NewTypeArray&) = delete;

    NewType& operator[](std::size_t i) { return newType[i]; }
    operator vector_type* () { return static_cast<vector_type*>(&newType[0]); }

private:
    std::size_t size;
    NewType* newType;
};

static_assert(sizeof(NewType) == sizeof(vector_type) and sizeof(NewType[7]) == sizeof(vector_type[7]),
              "NewType and vector_type have different memory layouts");
Obviously, NewTypeArray can be extended with vector-oriented methods, a move constructor and move assignment (as in my real-case code). Instances of NewTypeArray can be passed directly to functions that take a vector_type* argument and, thanks to the static_assert, there should not be any memory-layout problems.