I am currently following a book from Springer called "Guide to Scientific Computing in C++", and one of its exercises on pointers reads as follows:
"Write code that allocates memory dynamically to two vectors of doubles of length 3, assigns values to each of the entries, and then de-allocates the memory. Extend this code so that it calculates the scalar product of these vectors and prints it to screen before the memory is de-allocated. Put the allocation of memory, calculation and de-allocation of memory inside a for loop that runs 1,000,000,000 times: if the memory is not de-allocated properly your code will use all available resources and your computer may struggle."
My attempt at this is:
#include <iostream>

int main() {
    for (long int j = 0; j < 1000000000; j++) {
        // Allocate memory for the variables
        int length = 3;
        double *pVector1 = new double[length];
        double *pVector2 = new double[length];
        double *scalarProduct = new double[length];
        for (int i = 0; i < length; i++) { // loop to give values to the variables
            pVector1[i] = (double) i + 1;
            pVector2[i] = pVector1[i] - 1;
            scalarProduct[i] = pVector1[i] * pVector2[i];
            std::cout << scalarProduct[i] << " " << std::flush; // print each term of the product
        }
        std::cout << std::endl;
        // de-allocate memory
        delete[] pVector1;
        delete[] pVector2;
        delete[] scalarProduct;
    }
    return 0;
}
My problem is that this code runs, but is inefficient. It seems that the de-allocation of the memory should be much faster: the program ran for over a minute before I killed it. I assume I am misusing the de-allocation, but I haven't found a proper way to fix it.
Your code does exactly what it is supposed to do: run for a long time without crashing your computer with an out-of-memory error. Note that most of the runtime goes to console output, not to memory management: you print three flushed values plus an endl per iteration, a billion times, while new and delete[] for a few dozen bytes are cheap by comparison. The book might be a bit dated, as it assumes you cannot allocate 72,000,000,000 bytes (3 arrays × 3 doubles × 8 bytes = 72 bytes per iteration, times 10^9 iterations) without crashing. You can test this by removing the deletes and hence leaking the memory, as in the sketch below.
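A minimal sketch of that leak test, reusing the iteration count and vector length from the exercise (the sink variable is just an illustrative device of mine to keep the computed values live; compile without optimizations, since an optimizing compiler is allowed to elide allocations it can prove are unused):

#include <iostream>

int main() {
    const int length = 3;
    double sink = 0.0; // accumulate results so the work stays observable
    for (long int j = 0; j < 1000000000; j++) {
        double *pVector1 = new double[length];
        double *pVector2 = new double[length];
        double *scalarProduct = new double[length];
        for (int i = 0; i < length; i++) {
            pVector1[i] = i + 1;
            pVector2[i] = pVector1[i] - 1;
            scalarProduct[i] = pVector1[i] * pVector2[i];
            sink += scalarProduct[i];
        }
        // delete[] calls deliberately omitted: each iteration leaks
        // 3 * 3 * 8 = 72 bytes, roughly 72 GB over the full loop
    }
    std::cout << sink << std::endl;
    return 0;
}

On most systems the process's memory use then climbs steadily until the allocator fails (throwing std::bad_alloc) or the operating system kills the process.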