In my algorithms class, we have to submit algorithms that remove duplicates from a list of integers, aiming for the lowest possible complexity. In my algorithm, when I find a duplicate integer, I delete it by shifting every element after it down one index with a for loop, like so:
for (int i = dup_index; i < arr_size - 1; i++)
    arr[i] = arr[i + 1];
Would it be more efficient for my algorithm to use memmove? Furthermore, since it's my job to design the algorithm, and assuming memmove lowers the complexity of my algorithm, could using memmove be seen as 'cheating'?
I don't know about cheating, but
memmove basically does what your loop does, only more efficiently.
Besides, it's one of the most basic utility functions, so I don't see why you should not use it.
As for complexity, the big-O order of the algorithm will not change: each removal still shifts O(n) elements, so memmove only makes it faster by a constant factor.
memmove is typically implemented in hand-tuned assembly and tries to make full use of alignment to copy word by word instead of byte by byte.
Well okay, there might be cases where a manual copy is a couple of instructions shorter than the call to
memmove, but if you start moving data around in memory, you're doing an inherently costly operation, so shaving off a couple of CPU cycles will not make any difference in the big picture.
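For reference, here is a minimal sketch of how the shifting loop collapses into a single memmove call, reusing the names arr, arr_size, and dup_index from the question (the helper name remove_at is my own):

```c
#include <stddef.h>
#include <string.h>

/* Remove the element at dup_index by moving the tail of the array
   left by one slot. Returns the new logical size of the array. */
static size_t remove_at(int *arr, size_t arr_size, size_t dup_index)
{
    /* Number of elements that sit after the removed slot. */
    size_t tail = arr_size - dup_index - 1;
    /* Overlapping move: this is exactly what the for loop does,
       which is why memmove (not memcpy) is the right call. */
    memmove(&arr[dup_index], &arr[dup_index + 1], tail * sizeof arr[0]);
    return arr_size - 1;
}
```

Note that memcpy would be undefined behavior here because the source and destination ranges overlap; memmove is specified to handle exactly this case.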
If your design involves in-place moves of performance-critical data, you would be better off changing the underlying data structure to something that avoids copies altogether (a linked list, tree, hash table, whatever).
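Even without changing the data structure, you can avoid the repeated shifting entirely. A sketch of one such approach (my own illustration, not from the question): compact the array in a single pass with a write cursor, so every surviving element is copied at most once. Comparisons are still O(n^2) in the worst case, but the quadratic volume of moves is gone.

```c
#include <stdbool.h>
#include <stddef.h>

/* Order-preserving in-place dedup with no shifting: each element is
   checked against the already-kept prefix and written at most once.
   Returns the new logical size of the array. */
static size_t dedup(int *arr, size_t n)
{
    size_t w = 0;                       /* next slot to write */
    for (size_t r = 0; r < n; r++) {
        bool seen = false;
        for (size_t k = 0; k < w; k++) {
            if (arr[k] == arr[r]) {     /* already kept earlier */
                seen = true;
                break;
            }
        }
        if (!seen)
            arr[w++] = arr[r];          /* keep first occurrence */
    }
    return w;
}
```

Sorting first (O(n log n)) or tracking seen values in a hash table (O(n) expected) would cut the comparison cost further, at the price of losing the original order or using extra memory.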