I was comparing the runtimes of two approaches to a problem that requires returning the indices (as a list) of the two numbers in a list that add up to a target number, and I am baffled by the results. Approach 1 had an approximate runtime of 344 ms while approach 2 came in at 682 ms (when submitted at LeetCode). I was under the impression that a nested loop was far inferior to a single loop. Is this observation a result of the two numbers lying near the start of the list? Even if that were the case, how can the nested loop be more efficient?
# Approach - 1
for index1, num1 in enumerate(nums):
    num2 = target - num1
    if num2 in nums[index1 + 1:]:
        if nums.index(num2) != index1:
            return [index1, nums.index(num2)]
        else:
            return [index1, nums[index1 + 1:].index(num2) + index1 + 1]
# Approach - 2
for i in range(len(nums)):
    var = target - nums[i]
    a = nums.pop(i)
    try:
        index2 = nums.index(var) + 1
    except:
        nums.insert(i, a)
        continue
    return [i, index2]
`pop`, `insert`, slicing, and searching (whether with `in` or `index`) all take linear time on average, so both approaches have a "nested loop". But searching is by far the slowest, as that involves comparing values, which is relatively expensive. And approach 1 on average searches in n/2 values (where n = len(nums)) while approach 2 always searches in n-1 values. So it's no wonder if approach 2 takes about twice as much time.
You can easily make approach 2 also search in only n/2 values, by using `nums.index(var, i) + 1` instead of `nums.index(var) + 1`. And then it might be faster than approach 1, since pop+insert are faster than slicing. (Of course you don't even need to pop/insert, could simply use `nums.index(var, i + 1)`.)
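Put together, that pop/insert-free variant might look like this (a sketch; the function name `two_sum` is mine, not from the original code):

```python
def two_sum(nums, target):
    # Same idea as approach 2, but start each index() search at i + 1,
    # so only the tail of the list is scanned and no pop/insert is needed.
    for i in range(len(nums)):
        var = target - nums[i]
        try:
            return [i, nums.index(var, i + 1)]
        except ValueError:  # var not found in nums[i + 1:], keep going
            continue
```

For example, `two_sum([2, 7, 11, 15], 9)` returns `[0, 1]`, and duplicates like `two_sum([3, 3], 6)` work because the search starts past index `i`.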
At LeetCode I submitted your approach 1 a few times, took ~350 ms each time. Approach 2 took ~700 ms each time. And approach 2 with the added `, i` in the `index` call took ~395 ms each time. So it's still a bit slower, which I guess is due to catching exceptions being expensive.