I am trying to run a function that is similar to Google's PageRank algorithm (for non-commercial purposes, of course). Here is the Python code; note that a[0] is the only thing that matters here, and a[0] contains an n x n matrix such as [[0,1,1],[1,0,1],[1,1,0]]. Also, you can find where I got this code from on Wikipedia:
import copy
import math

def GetNodeRanks(a):  # graph, names, size
    numIterations = 10
    adjacencyMatrix = copy.deepcopy(a[0])
    b = [1]*len(adjacencyMatrix)
    tmp = [0]*len(adjacencyMatrix)
    for i in range(numIterations):
        for j in range(len(adjacencyMatrix)):
            tmp[j] = 0
            for k in range(len(adjacencyMatrix)):
                tmp[j] = tmp[j] + adjacencyMatrix[j][k] * b[k]
        norm_sq = 0
        for j in range(len(adjacencyMatrix)):
            norm_sq = norm_sq + tmp[j]*tmp[j]
        norm = math.sqrt(norm_sq)
        for j in range(len(b)):
            b[j] = tmp[j] / norm
        print b
    return b
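For reference, a call with the example matrix above would look something like this; the names and size entries are placeholders here, since only a[0] is used:

adj = [[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]]
ranks = GetNodeRanks([adj, ["A", "B", "C"], 3])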
When I run this implementation (on a matrix much larger than a 3 x 3 matrix, n.b.), it does not yield enough precision to calculate the ranks in a way that allows me to compare them usefully. So I tried this instead:
import copy
from decimal import *

getcontext().prec = 5

def GetNodeRanks(a):  # graph, names, size
    numIterations = 10
    adjacencyMatrix = copy.deepcopy(a[0])
    b = [Decimal(1)]*len(adjacencyMatrix)
    tmp = [Decimal(0)]*len(adjacencyMatrix)
    for i in range(numIterations):
        for j in range(len(adjacencyMatrix)):
            tmp[j] = Decimal(0)
            for k in range(len(adjacencyMatrix)):
                tmp[j] = Decimal(tmp[j] + adjacencyMatrix[j][k] * b[k])
        norm_sq = Decimal(0)
        for j in range(len(adjacencyMatrix)):
            norm_sq = Decimal(norm_sq + tmp[j]*tmp[j])
        norm = Decimal(norm_sq).sqrt
        for j in range(len(b)):
            b[j] = Decimal(tmp[j] / norm)
        print b
    return b
Even at this unhelpfully low precision, the code was extremely slow and never finished running in the time I sat waiting for it to run. Previously, the code was quick but insufficiently precise.
Is there a sensible/easy way to make the code run quickly and precisely at the same time?
A few tips for speeding up.
Walking through your code:
from decimal import *
getcontext().prec = 5

def GetNodeRanks(a):  # graph, names, size
    # opt: pass in a[0] directly, you do not use the rest
    numIterations = 10
    adjacencyMatrix = copy.deepcopy(a[0])
    # opt: why copy.deepcopy? You do not modify adjacencyMatrix
    b = [Decimal(1)]*len(adjacencyMatrix)
    # opt: you call Decimal(1) and Decimal(0) often, and each call takes some time;
    # do it only once, like:
    # dec_zero = Decimal(0)
    # dec_one = Decimal(1)
    # prepare the other repeatedly used data structures the same way:
    # len_adjacencyMatrix = len(adjacencyMatrix)
    # adjacencyMatrix_range = range(len_adjacencyMatrix)
    # then replace the code with these pre-calculated variables yourself
    tmp = [Decimal(0)]*len(adjacencyMatrix)
    for i in range(numIterations):
        for j in range(len(adjacencyMatrix)):
            tmp[j] = Decimal(0)
            for k in range(len(adjacencyMatrix)):
                tmp[j] = Decimal(tmp[j] + adjacencyMatrix[j][k] * b[k])
        norm_sq = Decimal(0)
        for j in range(len(adjacencyMatrix)):
            norm_sq = Decimal(norm_sq + tmp[j]*tmp[j])
        norm = Decimal(norm_sq).sqrt  # is this correct? I would expect .sqrt()
        for j in range(len(b)):
            b[j] = Decimal(tmp[j] / norm)
        print b
    return b
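To get a feel for what the repeated Decimal(...) construction costs, you can time it in isolation. A rough micro-benchmark sketch (the dec_zero name is just an illustration; absolute timings will vary by machine):

import timeit

# building a fresh Decimal a million times...
t_construct = timeit.timeit('Decimal(0)',
                            setup='from decimal import Decimal',
                            number=10**6)

# ...versus just referencing one pre-built instance
t_reuse = timeit.timeit('dec_zero',
                        setup='from decimal import Decimal; dec_zero = Decimal(0)',
                        number=10**6)

print t_construct, t_reuse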
Now a few examples of how list processing can be optimized in Python.
Using sum, change:

norm_sq = Decimal(0)
for j in range(len(adjacencyMatrix)):
    norm_sq = Decimal(norm_sq + tmp[j]*tmp[j])
to:
norm_sq = sum(val*val for val in tmp)
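One detail worth noting here: sum starts from the integer 0 by default, which mixes fine with Decimal values, but if you want a Decimal result even when tmp is empty you can pass an explicit start value (a small variation, same behaviour otherwise):

norm_sq = sum((val*val for val in tmp), Decimal(0))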
A bit of list comprehension. Change:

for j in range(len(b)):
    b[j] = Decimal(tmp[j] / norm)

to:

b = [Decimal(tmp_itm / norm) for tmp_itm in tmp]
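The same style also covers the matrix-vector step itself. A sketch of the j/k loops rewritten with zip and a generator expression, assuming adjacencyMatrix is row-major as in your code:

tmp = [sum(a_jk * b_k for a_jk, b_k in zip(row, b)) for row in adjacencyMatrix]

Pairing each row with b via zip computes exactly tmp[j] = sum over k of adjacencyMatrix[j][k] * b[k], without any index bookkeeping.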
Once you get used to this coding style, you will be able to optimize the initial loops too (the sketch above shows one way), and you will probably find that some of the pre-calculated variables become obsolete.
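Pulling the tips together, one possible rewrite could look like the sketch below. This is not a tested drop-in replacement, just the suggestions above applied in one place: pass the matrix in directly, drop the deepcopy, actually call sqrt(), and let sum and comprehensions replace the index loops:

from decimal import Decimal, getcontext

getcontext().prec = 5

def GetNodeRanks(adjacencyMatrix):  # now takes the matrix itself, i.e. the old a[0]
    numIterations = 10
    b = [Decimal(1)] * len(adjacencyMatrix)
    for i in range(numIterations):
        # tmp = adjacencyMatrix * b, one row at a time
        tmp = [sum((a_jk * b_k for a_jk, b_k in zip(row, b)), Decimal(0))
               for row in adjacencyMatrix]
        # Euclidean norm of tmp; note that sqrt() is called, not just referenced
        norm = sum((val * val for val in tmp), Decimal(0)).sqrt()
        b = [val / norm for val in tmp]
        print b
    return b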