I'm having trouble understanding the difference between these two code snippets:
// out is of type char* of size N*D
// N, D are of type int
for (int i = 0; i != N; i++) {
    if (i % 1000 == 0) {
        std::cout << "i=" << i << std::endl;
    }
    for (int j = 0; j != D; j++) {
        out[i*D + j] = 5;
    }
}
This code runs fine, even for very large data sets (N=100000, D=30000). From what I understand about pointer arithmetic, this should give the same result:
for (int i = 0; i != N; i++) {
    if (i % 1000 == 0) {
        std::cout << "i=" << i << std::endl;
    }
    char* out2 = &out[i*D];
    for (int j = 0; j != D; j++) {
        out2[j] = 5;
    }
}
However, for a very large data set the latter does not work: it freezes at index 143886. I think it segfaults, but I'm not 100% sure, as I'm not used to developing on Windows. I'm afraid I'm missing something obvious about how pointer arithmetic works. Could it be related to advancing a char*?
EDIT: We have now established that the problem was an overflow of the index (i.e. i*D + j exceeded 2^31 - 1, the maximum value of a signed 32-bit int), so using uint64_t instead of int32_t fixed the problem. What's still unclear to me is why the first case above runs through while the other one segfaults.
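For reference, here is a minimal sketch of the fix, with the loop wrapped in a hypothetical fill function so it is self-contained. The key point is that the index must be widened before the multiplication; casting the finished 32-bit product would be too late, as it has already overflowed:

#include <cstdint>

void fill(char* out, int N, int D) {
    for (int i = 0; i != N; i++) {
        for (int j = 0; j != D; j++) {
            // Widen i first so i*D + j is evaluated in 64-bit arithmetic.
            out[static_cast<std::uint64_t>(i) * D + j] = 5;
        }
    }
}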
N * D is 3e9; that doesn't fit in a 32-bit int (INT_MAX is 2147483647, about 2.1e9). Note that both versions overflow when computing i*D, and signed overflow is undefined behaviour, so the compiler is free to produce code in which the first version happens to work (for instance by strength-reducing the index into a single 64-bit pointer that is simply incremented) while the second, which materialises the wrapped 32-bit value in out2, crashes.
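A quick way to see the wrap-around, assuming a typical platform with a 32-bit int (the exact wrapped value is two's-complement behaviour; before C++20 this narrowing conversion is formally implementation-defined):

#include <iostream>
#include <limits>

int main() {
    long long product = 100000LL * 30000;  // 3'000'000'000, computed in 64 bits
    std::cout << "N*D     = " << product << '\n';
    std::cout << "INT_MAX = " << std::numeric_limits<int>::max() << '\n';  // 2147483647
    // Converted back to a 32-bit int, the value wraps modulo 2^32:
    std::cout << "as int  = " << static_cast<int>(product) << '\n';        // -1294967296
}

A subscript like -1294967296 points roughly 1.2 GB before the start of the buffer, which can easily land in unmapped memory and is consistent with the access violation observed.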