Is there any way to optimize this code?
def find_duplicates(lst):
    duplicates = []
    for i in range(len(lst)):
        for j in range(i + 1, len(lst)):
            if lst[i] == lst[j] and lst[i] not in duplicates:
                duplicates.append(lst[i])
    return duplicates
The code above is already an improvement over an earlier version, but it still does a nested scan (quadratic in the list length), so I think it can be improved further.
You can utilise the collections.Counter class and then iterate over its counts to achieve your objective. The following code returns each value that appears more than once in the list (each duplicated value is listed only once):
from collections import Counter
def find_duplicates(lst):
    return [k for k, v in Counter(lst).items() if v > 1]
print(find_duplicates([1,2,2,3,4,2,3]))
Output:
[2, 3]
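If you want to avoid importing Counter, a single pass with two sets achieves the same linear-time behaviour and also preserves the order in which duplicates are first confirmed. This is a minimal sketch, not the only correct approach; note that, like the Counter version, it requires the list elements to be hashable (the original nested-loop code does not):

```python
def find_duplicates(lst):
    seen = set()      # every value encountered so far
    reported = set()  # duplicate values already recorded
    dupes = []
    for item in lst:
        if item in seen and item not in reported:
            dupes.append(item)
            reported.add(item)
        seen.add(item)
    return dupes

print(find_duplicates([1, 2, 2, 3, 4, 2, 3]))  # [2, 3]
```

Both this and the Counter version run in O(n) expected time, versus O(n²) for the nested loops; if your elements are unhashable (e.g. lists), you would need to fall back to the quadratic approach or convert them to a hashable form first.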