I always read that new is slow when it allocates memory, but I never found out how slow it actually is, so I started some research and tests.
Assume I have a buffer which is a vector (e.g. for an Ethernet receiver).
So my question is: which is faster, copying the data into an existing vector or allocating a new vector and moving it? While searching the net I didn't really find any benchmark for this, so I ran some tests myself.
Note: the two variants below are not the actual receivers!
copy variant
auto time = GetTickCount();
std::vector<int> vec;
std::vector<int> tmp(250);              // source buffer, copied into vec each round
for (int i = 0; i < 10000; i++) {
    for (int j = 0; j < 1000; j++) {
        vec.insert(vec.end(), tmp.begin(), tmp.end());
        //std::copy(tmp.begin(), tmp.end(), std::back_inserter(vec));
    }
    vec.clear();
}
std::cout << GetTickCount() - time << std::endl;
move variant
auto time = GetTickCount();
std::vector<std::vector<int> > vec;
for (int i = 0; i < 10000; i++) {
    for (int j = 0; j < 1000; j++) {
        std::vector<int> tmp(250);      // freshly allocated buffer each round
        vec.push_back(std::move(tmp));  // moved in, no element copies
    }
    vec.clear();
}
std::cout << GetTickCount() - time << std::endl;
I know that allocating memory depends on the hardware, the OS and the memory management, but is there an average size at which I can say it is better to create a new vector and move it than to copy into an existing one? In my tests, copying a vector with 250 elements takes about the same time as moving; with more than 250 elements the copy variant is slower than the move variant. Granted, in my test the move variant produces a vector of vectors and iterating over it is more awkward, but that doesn't matter in (most of) my cases. Also, the test uses ints and not structs or classes, which would complicate the question.
Also, my test is quick and dirty and runs on a Windows machine. What I'm interested in is an average time (or, in my case, size) for usual hardware and usual systems (Windows, Linux, Mac).
Could I take my test and say that copying more than 400 elements is slower than creating a new vector?
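Since the interest is in usual systems (Windows, Linux, Mac), the GetTickCount() wrapper can be swapped for std::chrono so the same test compiles everywhere. A minimal sketch of the copy variant with portable timing, assuming a C++11 compiler:

#include <chrono>
#include <iostream>
#include <vector>

int main() {
    auto start = std::chrono::steady_clock::now();

    std::vector<int> vec;
    std::vector<int> tmp(250);
    for (int i = 0; i < 10000; i++) {
        for (int j = 0; j < 1000; j++)
            vec.insert(vec.end(), tmp.begin(), tmp.end());
        vec.clear();
    }

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - start).count();
    std::cout << ms << " ms" << std::endl;
}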
Neil Kirk advised me to try random-size allocations, so I did. I also inserted some other code that does allocations (but no deletes) between the tests, and the break-even size increased to about 1000 elements.
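For reference, this is roughly how such a random-size run could look; it is a sketch of the move variant only, and the range of 1 to 2000 elements is my assumption, not the exact figure used:

#include <cstdlib>
// ...
std::vector<std::vector<int> > vec;
for (int i = 0; i < 10000; i++) {
    for (int j = 0; j < 1000; j++) {
        std::vector<int> tmp(1 + rand() % 2000);  // random size per buffer (assumed range)
        vec.push_back(std::move(tmp));
    }
    vec.clear();
}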
I accept Mats Petersson's answer, especially after reading this (and the sublinks) and this (besides, it's the only one). But I have one addition: you can't know whether something is premature optimisation without any measurement. If an allocation took the same time as copying 100000 elements I would never use it, and if it took the same time as copying 10 elements I would always use it. With the approximate value of 1000 elements, in a network scenario where the network is the bottleneck, I can say it is premature optimisation. So I decided to use the allocating variant, because it is more useful in my concept.
This is a typical case of "premature optimisation". Find out if this part of your code is an important factor with regards to performance. If it's not, then don't worry about it - do whatever makes the most sense from the actual task in hand.
In general, allocating memory is reasonably fast - for anything other than basic types (int, char and such), the main factor is probably the time it takes to create/copy/move the object that goes into the vector, rather than the basic allocation.
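To make that concrete, here is a small sketch (the Packet struct is made up for illustration): with an element type that owns resources, moving the outer vector is a constant-time pointer swap, while copying it has to copy every element, so the per-element work dwarfs the single allocation.

#include <string>
#include <vector>

struct Packet {              // hypothetical payload type, for illustration only
    std::string payload;     // owns heap memory, so copying it is not free
};

void example() {
    std::vector<Packet> tmp(250);

    std::vector<Packet> copied = tmp;             // copies 250 Packets (and their strings)
    std::vector<Packet> moved  = std::move(tmp);  // just steals tmp's buffer, O(1)
}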