I have a C function in my library that works with multidimensional arrays nicely:
void alx_local_maxima_u8(ptrdiff_t rows, ptrdiff_t cols,
                         const uint8_t arr_in[static restrict rows][static cols],
                         bool arr_out[static restrict rows][static cols])
        __attribute__((nonnull));
And I have an unsigned char *
that I receive from a class defined in OpenCV. That pointer refers to two-dimensional data, but it isn't a true two-dimensional array, so I have to access it with pointer arithmetic (unsigned char *img_pix = img->data + i*img->step + j;
), which I don't especially like.
I create an array of bool
of the same size as the image (this is a real array, so I can use array notation) to store the results of the function.
I could write an almost exact copy of alx_local_maxima_u8()
that uses just a pointer and pointer arithmetic, but I'd like to reuse the existing function if I can.
Is it safe to write a prototype that uses a void *
in this way, just to fool C++?:
extern "C"
{
    [[gnu::nonnull]]
    void alx_local_maxima_u8(ptrdiff_t rows, ptrdiff_t cols,
                             const void *arr_in,
                             void *arr_out);
}
In theory, void *
can hold any object pointer, which is what C will receive, and C will not access any data that doesn't belong to those pointers. So the only problems I see are aliasing an unsigned char *
as a uint8_t (*)[cols]
(the type the array parameter decays to), and passing a void *
where a uint8_t (*)[cols]
is expected, which may cause all kinds of linker errors. Also, I don't know whether C bool
and C++ bool
translate into the same thing in memory (I hope so).
Maybe I should write a wrapper in C which receives void *
pointers and passes them to the actual function, so that I don't need to fool C++.
Performance IS a concern, but I use -flto
, so any wrapper will probably vanish at link time.
I use GCC (-std=gnu++17
) on Linux with POSIX enabled.
The guarantee that a T[N][M] will contain N×M consecutive objects of type T impedes some otherwise-useful optimizations; the primary usefulness of that guarantee in pre-standard versions of C was that it allowed code to treat storage as a single-dimensional array in some contexts, but as a multi-dimensional array in others. Unfortunately, the Standard fails to recognize any distinction between a pointer formed by the decay of an inner array and a pointer formed by casting the outer array to the inner-element type, whether directly or through void*
, even though it imposes limitations on the former that would impede the usefulness of the latter.
On any typical platform, in the absence of whole-program optimization, the ABI would treat a pointer to an element of a multi-dimensional array as equivalent to a pointer to an element of a single-dimensional array with the same total number of elements, making it safe to treat the latter as the former. I don't believe there is anything in the C or C++ Standard, however, that would forbid an implementation from "optimizing" something like:
// In first compilation unit
void inc_element(void *p, int r, int c, int stride)
{
    int *ip = (int *)p;
    ip[r*stride + c]++;
}

// In second compilation unit
int array[5][5];
void inc_element(void *p, int r, int c, int stride);
int test(int i)
{
    if (array[1][0])
        inc_element(array, i, 0, 5);
    return array[1][0];
}
by replacing the call to inc_element
with array[0][i*5]++
, which could in turn be "optimized" to array[0][0]++
, since zero is the only value of i for which array[0][i*5] would be an in-bounds access to the inner array array[0]. I don't think the authors of the Standard intended to invite compilers to make such "optimizations", but I don't think they anticipated that aggressive optimizers would interpret a failure to prohibit such things as an invitation.