I am fairly new to Python and I have a task that asks me to compare two algorithms by the time they take and the memory they use.
I have coded both algorithms and run them. I was able to measure the time taken, but I couldn't find a way to see how much memory was used. I am also not sure whether the question is asking me to reason about BFS and DFS in general or to measure the code I have written.
The task asks for:
- a comparison of the time expended by the algorithms, and
- a comparison of the space used in memory at a time by the algorithms.
To get the time I used start_time = time.time() before each call and end = time.time() after it, then printed the difference.
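Roughly, the timing code looked like this (a sketch; items and max_weight come from my test data, and time.perf_counter() would also work):

import time

start_time = time.time()
knapsack_bfs(items, max_weight)
end = time.time()
print(f"BFS algorithm: {end - start_time} s")

start_time = time.time()
knapsack_dfs(items, max_weight)
end = time.time()
print(f"DFS algorithm: {end - start_time} s")

which printed: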
BFS algorithm: 0.0060007572174072266 s
DFS algorithm: 0.005002260208129883 s
How would I calculate the space used in memory, assuming it should be based on my code? I might just be confused, but the wording of the question makes me feel like I need to measure memory while running both algorithms so I can compare their performance.
Code:
BFS:
from collections import deque

def knapsack_bfs(items, max_weight):
    queue = deque()
    root = Node(-1, 0, 0, [])  # level, benefit, weight, list of chosen item ids
    queue.append(root)
    max_benefit = 0
    best_combination = []
    while queue:
        current = queue.popleft()  # FIFO: explore the decision tree level by level
        if current.level == len(items) - 1:
            if current.benefit > max_benefit:
                max_benefit = current.benefit
                best_combination = current.items
        else:
            next_level = current.level + 1
            next_item = items[next_level]
            include_benefit = current.benefit + next_item.benefit
            include_weight = current.weight + next_item.weight
            if include_weight <= max_weight:
                include_node = Node(next_level, include_benefit,
                                    include_weight, current.items + [next_item.id])
                if include_benefit > max_benefit:
                    max_benefit = include_benefit
                    best_combination = include_node.items
                queue.append(include_node)
            exclude_node = Node(next_level, current.benefit,
                                current.weight, current.items)
            queue.append(exclude_node)
    return max_benefit, best_combination
DFS:
def knapsack_dfs(items, max_weight):
    queue = []  # a plain list used as a LIFO stack, so nodes are explored depth-first
    root = Node(-1, 0, 0, [])
    queue.append(root)
    max_benefit = 0
    best_combination = []
    while queue:
        current = queue.pop()  # pop from the end: depth-first order
        if current.level == len(items) - 1:
            if current.benefit > max_benefit:
                max_benefit = current.benefit
                best_combination = current.items
        else:
            next_level = current.level + 1
            next_item = items[next_level]
            include_benefit = current.benefit + next_item.benefit
            include_weight = current.weight + next_item.weight
            if include_weight <= max_weight:
                include_node = Node(next_level, include_benefit,
                                    include_weight, current.items + [next_item.id])
                if include_benefit > max_benefit:
                    max_benefit = include_benefit
                    best_combination = include_node.items
                queue.append(include_node)
            exclude_node = Node(next_level, current.benefit,
                                current.weight, current.items)
            queue.append(exclude_node)
    return max_benefit, best_combination
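The Node class isn't shown above; a minimal assumed stand-in (plus some made-up test items) that is enough to run both functions looks like this:

from collections import namedtuple

# Assumed stand-ins: the real Node/Item classes aren't shown here.
Node = namedtuple("Node", ["level", "benefit", "weight", "items"])
Item = namedtuple("Item", ["id", "benefit", "weight"])

items = [Item(1, 60, 10), Item(2, 100, 20), Item(3, 120, 30)]  # made-up test data
max_weight = 50

print(knapsack_bfs(items, max_weight))  # (220, [2, 3]) with this data
print(knapsack_dfs(items, max_weight))  # (220, [2, 3]) with this data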
Edit:
Results based on the answer below:
program.py:42: size=4432 B (+840 B), count=79 (+15), average=56 B
program.py:116: size=0 B (-768 B), count=0 (-1)
program.py:79: size=0 B (-744 B), count=0 (-13)
program.py:85: size=0 B (-72 B), count=0 (-1)
program.py:57: size=0 B (-56 B), count=0 (-1)
program.py:56: size=0 B (-56 B), count=0 (-1)
program.py:74: size=0 B (-32 B), count=0 (-1)
program.py:37: size=32 B (+0 B), count=1 (+0), average=32 B
You can try doing what another answer suggested: https://stackoverflow.com/a/45679009/670693
While I couldn't run your code to try things out (the real Node definition isn't shown), you can try something like:
import tracemalloc
tracemalloc.start()
knapsack_dfs(...)
dfs_snapshot = tracemalloc.take_snapshot()
knapsack_bfs(...)
bfs_snapshot = tracemalloc.take_snapshot()
tracemalloc.stop()
bfs_snapshot = bfs_snapshot.filter_traces((tracemalloc.Filter(False, '*tracemalloc.py'),))
dfs_snapshot = dfs_snapshot.filter_traces((tracemalloc.Filter(False, '*tracemalloc.py'),))
for statdiff in bfs_snapshot.compare_to(dfs_snapshot, 'lineno'):
    print(statdiff)
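If you just want one peak-memory figure per algorithm instead of a per-line diff, tracemalloc.get_traced_memory() returns a (current, peak) pair in bytes. A sketch (the measure_peak helper is just illustrative, and you would pass your real items and max_weight):

import tracemalloc

def measure_peak(func, *args):
    # Run func under tracemalloc and report the peak traced allocation in bytes.
    tracemalloc.start()
    result = func(*args)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, peak

bfs_result, bfs_peak = measure_peak(knapsack_bfs, items, max_weight)
dfs_result, dfs_peak = measure_peak(knapsack_dfs, items, max_weight)
print(f"BFS peak: {bfs_peak} B, DFS peak: {dfs_peak} B")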