Tags: neural-network, genetic-algorithm, evolutionary-algorithm

Neural net weight ranges and weights represented as bit vectors


I'm using an Evolutionary Algorithm to change the weights of a Neural Network and I have some questions.

a) Is it common practice, for simplicity, to keep the network weights within the range of [-1, 1]?

I wish to keep the weights represented as bit vectors and to perform mutations on the weights by randomly flipping bits in the vector. The problem I then run into is keeping the resulting value within the chosen range.

b) Is there a way to represent floats as bit vectors such that all 'permutations' of the bit vector fall within a certain range?

A potential solution to b) would be to define a certain number of steps, say 1024, represent the weight as a bit vector of length 10, and convert the number in [0, 1023] to a number in [-1, 1], as in the sketch below.
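
For illustration, a minimal Python sketch of that mapping (the helper name and the list-of-bits representation are just example choices, not part of the original question):

```python
def decode_10bit(bits):
    """Map a 10-bit vector (list of 0/1) to a float in [-1, 1]."""
    value = int("".join(str(b) for b in bits), 2)  # integer in 0 .. 1023
    return -1.0 + value / 1023.0 * 2.0             # scale into [-1, 1]

# All zeros maps to -1.0, all ones to 1.0; any bit flip stays in range.
print(decode_10bit([0] * 10))  # -1.0
print(decode_10bit([1] * 10))  #  1.0
```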

Thanks.


Solution

  • Is it common practice to keep the network weights within the range of [-1, 1]?

    No; however, limiting the weights to some range may speed up the search in a genetic algorithm. You will have to experiment with different weight ranges to find an interval that suits your problem.

  • Is there a way to represent floats as bit vectors such that all 'permutations' of the bit vector fall within a certain range?

    Yes. Let's say you have a weight that takes on a value from the interval [-2, 2]. To represent the float value, you decide to use, for example, 5-bit precision. The bit vector can then encode integer values from 0 to (2^5) - 1 = 31.

    In order to calculate the float value represented by the bit vector, you compute
    weight_min + int(bitvector) / ((2^5) - 1) * weight_range

    A numeric example: let your bit vector be 01000 and the range be [-2, 2]. That implies the weight range is 4 and the minimum weight is -2. Since 01000 is 8 in decimal, filling in the formula from the previous paragraph gives:
    -2 + 8 / 31 * 4 ≈ -0.97
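
    A small Python sketch of this decoding, together with a random bit-flip mutation as described in the question (the function names, flip probability, and use of Python's random module are illustrative choices, not part of the answer):

    ```python
    import random

    def decode(bitvector, weight_min=-2.0, weight_max=2.0):
        """Convert a bit string such as '01000' to a float in [weight_min, weight_max]."""
        n_bits = len(bitvector)
        weight_range = weight_max - weight_min
        return weight_min + int(bitvector, 2) / (2**n_bits - 1) * weight_range

    def mutate(bitvector, p_flip=0.1):
        """Flip each bit independently with probability p_flip."""
        return "".join(str(1 - int(b)) if random.random() < p_flip else b
                       for b in bitvector)

    print(decode("01000"))           # -2 + 8/31 * 4 ≈ -0.97
    print(decode(mutate("01000")))   # any mutated vector still decodes into [-2, 2]
    ```

    Because every possible bit pattern decodes into the chosen interval, bit-flip mutations can never push a weight out of range.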