Suppose I have a 3D array declared globally (in the data segment) and I want to memset one slice of it to 0.
int multi_dimension_array[x][y][z];
I can memset the whole thing with the line:
memset(multi_dimension_array, 0, sizeof(multi_dimension_array));
But now suppose I only want to memset the slice at some index of the x dimension (say 2), meaning the values from multi_dimension_array[2][0][0] to multi_dimension_array[2][y-1][z-1] should all be zero. I don't think there's a clever way to use memset for the y or z dimensions, since those slices aren't contiguous. The following line should work:
memset(&multi_dimension_array[2][0][0], 0, sizeof(multi_dimension_array[2][0][0]) * y * z);
My "issue" is that i dont like the * y * z part of the memset param. Is there something in the array that says sizeof(multi_dimension_array[2]) == byte_size_of_type * y * z?
I want to use a property of the array that sizeof would evaluate to the correct number of bytes for the "x" dimension in this example. I dont want to use * y *z in the event that someone changes the size in the declaration and they do not change this memset, plus i dont like how it looks.
memset(&multi_dimension_array[2], 0, sizeof multi_dimension_array[2]);
This requires that multi_dimension_array be an actual array of arrays of arrays, not a pointer. It works because, when sizeof is applied to an array, it returns the size of the array; the array is not automatically converted to a pointer as it is in most expressions.
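For a concrete illustration, here is a minimal compilable sketch. The dimensions X, Y, and Z are hypothetical constants chosen for the example, not taken from your declaration:

#include <stdio.h>
#include <string.h>

#define X 4
#define Y 5
#define Z 6

int multi_dimension_array[X][Y][Z];

int main(void)
{
    /* sizeof applied to the array expression multi_dimension_array[2]
       yields the size in bytes of the whole [Y][Z] slice. */
    printf("%zu == %zu\n",
           sizeof multi_dimension_array[2],
           sizeof(int) * Y * Z);

    /* Zero the entire slice at index 2 without spelling out Y and Z. */
    memset(&multi_dimension_array[2], 0, sizeof multi_dimension_array[2]);
    return 0;
}

If someone later changes Y or Z in the declaration, this memset stays correct because sizeof is recomputed from the array's type.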
Of course, it works only for the first dimension (or the first several dimensions, if you do more than one). E.g., you can use one memset with array[i] or array[i][j], but not with a middle dimension such as array[???][j].
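If you do need to zero a middle slice, say every multi_dimension_array[i][2], those rows are not contiguous, so a single memset cannot cover them. One sketch, reusing the hypothetical X, Y, Z constants above, is a loop of per-row memsets:

for (int i = 0; i < X; i++)
    memset(&multi_dimension_array[i][2], 0, sizeof multi_dimension_array[i][2]);

Each multi_dimension_array[i][2] is a contiguous array of Z ints, so sizeof again gives the right byte count for that row.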