Tags: machine-learning, neural-network, neat, biological-neural-network

Feedforward Algorithm in NEAT (NeuroEvolution of Augmenting Topologies)


I don’t understand how the NEAT algorithm takes inputs and produces output numbers from the connection genes. I am familiar with using matrices to feed inputs forward through fixed-topology neural networks, but in NEAT each node has its own set of connections and isn’t necessarily connected to every other node. After much searching, I can’t find an explanation of how NEAT produces outputs from its inputs.

Could someone explain how it works?


Solution

  • That was also a question I struggled with while implementing my own version of the algorithm.

    You can find the answer on the NEAT Users Page (https://www.cs.ucf.edu/~kstanley/neat.html), where the author says:

    How are networks with arbitrary topologies activated?

    The activation function, bool Network::activate(), gives the specifics. The implementation is of course considerably different than for a simple layered feedforward network. Each node adds up the activation from all incoming nodes from the previous timestep. (The function also handles a special "time delayed" connection, but that is not used by the current version of NEAT in any experiments that we have published.) Another way to understand it is to realize that activation does not travel all the way from the input layer to the output layer in a single timestep. In a single timestep, activation only travels from one neuron to the next. So it takes several timesteps for activation to get from the inputs to the outputs. If you think about it, this is the way it works in a real brain, where it takes time for a signal hitting your eyes to get to the cortex because it travels over several neural connections.

    So, if one of the evolved networks is not feedforward, its outputs will change across timesteps. This is particularly useful in continuous control problems, where the environment is not static, but problematic in classification problems. The author also answers:

    How do I ensure that a network stabilizes before taking its output(s) for a classification problem?

    The cheap and dirty way to do this is just to activate n times in a row where n>1, and hope there are not too many loops or long pathways of hidden nodes.

    The proper (and quite nice) way to do it is to check every hidden node and output node from one timestep to the next, and see if nothing has changed, or at least not changed within some delta. Once this criterion is met, the output must be stable.

    Note that output may not always stabilize in some cases. Also, for continuous control problems, do not check for stabilization as the network never "settles" but rather continuously reacts to a changing environment. Generally, stabilization is used in classification problems, or in board games.
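
The per-timestep propagation described in the first quote can be sketched in Python. The dict-based connection-gene encoding and the function name below are assumptions for illustration, not the data structures of the original NEAT package:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def activate_one_step(non_input_nodes, connections, activations):
    """One NEAT-style timestep: each node sums the *previous* step's
    activation over its enabled incoming connections."""
    new_activations = dict(activations)
    for node in non_input_nodes:
        incoming = [c for c in connections if c["out"] == node and c["enabled"]]
        if incoming:
            total = sum(activations[c["in"]] * c["weight"] for c in incoming)
            new_activations[node] = sigmoid(total)
    return new_activations

# Tiny genome: inputs 1 and 2 feed hidden node 3, which feeds output 4.
connections = [
    {"in": 1, "out": 3, "weight": 0.5, "enabled": True},
    {"in": 2, "out": 3, "weight": -0.5, "enabled": True},
    {"in": 3, "out": 4, "weight": 1.0, "enabled": True},
]
activations = {1: 1.0, 2: 0.0, 3: 0.0, 4: 0.0}
# The signal needs two timesteps to travel input -> hidden -> output.
for _ in range(2):
    activations = activate_one_step([3, 4], connections, activations)
```

After a single step, node 4 has only seen node 3's initial activation; the input's effect reaches the output on the second step, which is exactly the "several timesteps" behaviour the quote describes.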
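The delta-based stabilization check from the second quote can be sketched similarly. Again, the genome encoding, function names, and the recurrent example network are assumptions for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(connections, acts, non_input_nodes):
    """One activation timestep over the connection genes."""
    new = dict(acts)
    for node in non_input_nodes:
        inc = [c for c in connections if c["out"] == node and c["enabled"]]
        if inc:
            new[node] = sigmoid(sum(acts[c["in"]] * c["weight"] for c in inc))
    return new

def activate_until_stable(connections, acts, non_input_nodes,
                          delta=1e-4, max_steps=100):
    """Activate repeatedly until every hidden/output node changes by at
    most `delta` between timesteps. Returns (activations, stabilized);
    some recurrent networks never settle, hence the step cap."""
    for _ in range(max_steps):
        new = step(connections, acts, non_input_nodes)
        if all(abs(new[n] - acts[n]) <= delta for n in non_input_nodes):
            return new, True
        acts = new
    return acts, False

# Recurrent genome: input 1 -> hidden 3 (with a self-loop) -> output 4.
connections = [
    {"in": 1, "out": 3, "weight": 0.5, "enabled": True},
    {"in": 3, "out": 3, "weight": 0.5, "enabled": True},  # recurrent loop
    {"in": 3, "out": 4, "weight": 1.0, "enabled": True},
]
acts = {1: 1.0, 3: 0.0, 4: 0.0}
final, stable = activate_until_stable(connections, acts, [3, 4])
```

For a continuous control task you would skip this loop and simply call the single-step activation once per environment tick, as the note above explains.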