Tags: python, tensorflow, keras, nested-loops, loss-function

How to handle nested loops with tensorflow?


I am new to TensorFlow. I am working with Keras, but to create a customized loss function I am more or less forced to write the function in TensorFlow. I get stuck at the point where I have to translate the following NumPy for loop into TensorFlow syntax.

for j in range(grid):
    for k in range(modes):
        for l in range(dim):
            for m in range(dim):
                lorentz[:,j,l,m] += 1J*osc_stre[:,l,m,k]/(energies[j]-e_j[:,k])
                if l == m == k:
                    lorentz[:,j,l,m] += 1   

Here you can see the initial shapes of the arrays:

e_j = zeros([sample_nr,modes],dtype='complex')
osc_stre = zeros([sample_nr,dim,dim,modes],dtype='complex')
lorentz = zeros([sample_nr,grid,dim,dim],dtype='complex')

energies has the shape (grid,), so energies[j] is a scalar.

Is it possible to handle this problem with TensorFlow? Can anybody give me a hint on how to translate this into TensorFlow syntax? I have already tried a couple of things, like the TensorFlow while loop, but one of the big problems is that TensorFlow tensors do not support item assignment.
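
For example, this is the kind of thing that fails, and the only workaround I know of is to collect slices in a Python list and stack them (a minimal sketch with made-up shapes, assuming import tensorflow as tf):

import tensorflow as tf

t = tf.zeros([3, 2], dtype=tf.complex64)
# t[0] = 1.0   # fails: TensorFlow tensors do not support item assignment

# workaround: compute each slice separately, collect the slices in a
# Python list, then stack them into a single tensor at the end
rows = []
for j in range(3):
    rows.append(tf.cast(tf.fill([2], float(j + 1)), tf.complex64))
t = tf.stack(rows)   # shape (3, 2)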

EDIT:

I think I've come up with a solution for this simplified version of the problem:

for j in range(grid):
    for k in range(modes):
        lorentz[j] += 1J*osc_stre[k]/(energies[j]-e_j[k])
        if k == 0:
           lorentz[j] += 1

The solution:

lorentz_list = []
tf_one = tf.ones([1], tf.complex64)
tf_i = tf.cast(tf.complex(0., 1.), tf.complex64)
# energies is real, so cast it to a complex tensor before subtracting e_j
energies_float = tf.cast(energies, tf.float32)
energies_complex = tf.complex(energies_float, tf.zeros([energy_grid], tf.float32))
# tensors do not support item assignment: build one slice per grid point and stack
for j in range(energy_grid):
    lorentz_list.append(tf.add(tf_one, tf.reduce_sum(
        tf.multiply(tf_i, tf.divide(osc_stre_tot, tf.subtract(energies_complex[j], e_j))), -1)))
lorentz = tf.stack(lorentz_list)

Solution

  • Assuming these:

    • lorentz.shape == (batch, grid, dim, dim) and was zero before the loop.
    • osc_stre.shape == (batch, dim, dim, modes)
    • energies.shape == (grid,)
    • e_j.shape == (batch, modes)

    Then:

    from keras import backend as K
    import numpy as np
    import tensorflow as tf

    # broadcast everything to (batch, grid, dim, dim, modes); all inputs are
    # assumed to already be complex64 (cast energies to complex first, otherwise
    # the subtraction below mixes float and complex dtypes)
    osc_stre = K.reshape(osc_stre, (-1, 1, dim, dim, modes))
    energies = K.reshape(energies, (1, grid, 1, 1, 1))
    e_j = K.reshape(e_j, (-1, 1, 1, 1, modes))

    lorentz = 1J*osc_stre/(energies-e_j)

    # the "+1" from the loop only applies where l == m == k
    identity = np.zeros((1, 1, dim, dim, modes))
    for d in range(min(modes, dim)):
        identity[0, 0, d, d, d] = 1
    identity = K.variable(identity, dtype=tf.complex64)

    lorentz += identity
    lorentz = K.sum(lorentz, axis=-1)   # sum over the modes axis
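
    As a sanity check, here is a small pure-NumPy comparison of this broadcasting scheme against the original nested loop (a sketch with made-up sizes, independent of TensorFlow/Keras):

    import numpy as np

    sample_nr, grid, dim, modes = 2, 4, 3, 3
    rng = np.random.default_rng(0)
    e_j = rng.standard_normal((sample_nr, modes)) + 1j * rng.standard_normal((sample_nr, modes))
    osc_stre = rng.standard_normal((sample_nr, dim, dim, modes)).astype(complex)
    energies = np.linspace(1.0, 2.0, grid)

    # reference: the original nested loop
    lorentz_loop = np.zeros((sample_nr, grid, dim, dim), dtype=complex)
    for j in range(grid):
        for k in range(modes):
            for l in range(dim):
                for m in range(dim):
                    lorentz_loop[:, j, l, m] += 1J*osc_stre[:, l, m, k]/(energies[j]-e_j[:, k])
                    if l == m == k:
                        lorentz_loop[:, j, l, m] += 1

    # the same reshapes and broadcast as above, written in NumPy
    osc = osc_stre.reshape(sample_nr, 1, dim, dim, modes)
    en = energies.reshape(1, grid, 1, 1, 1)
    ej = e_j.reshape(sample_nr, 1, 1, 1, modes)
    identity = np.zeros((1, 1, dim, dim, modes))
    for d in range(min(modes, dim)):
        identity[0, 0, d, d, d] = 1
    lorentz_vec = (1J*osc/(en - ej) + identity).sum(axis=-1)

    print(np.allclose(lorentz_loop, lorentz_vec))   # should print True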