I am trying to train a SOM using Encog3. There are two examples of doing this in encog-examples: an XOR SOM, where all of the data is used for training until convergence, and a Color SOM, where one of 15 colors is sampled randomly at each of 1000 iterations. My question is whether the second approach was chosen just so the example would finish with adequate results in a reasonably short time, or whether there was another reason for it. If I were to train with all 15 input colors at each iteration, would that produce better results?
That depends on what results you are looking for. This is a very common example for SOMs. Here is a longer description (not written by me) of exactly the same thing:
http://www.ai-junkie.com/ann/som/som2.html
The purpose of the example is to show how patterns emerge from the training of a SOM. Most of the color examples I've seen for SOMs do it this way (online training); it makes the output more varied from run to run.
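For reference, here is a minimal sketch of that online approach, assuming Encog 3's SOM, BasicTrainSOM and NeighborhoodRBF classes; the grid size, learning rate and decay parameters are illustrative, not necessarily the exact values the example uses:

    import java.util.Random;

    import org.encog.mathutil.rbf.RBFEnum;
    import org.encog.ml.data.MLData;
    import org.encog.ml.data.basic.BasicMLData;
    import org.encog.neural.som.SOM;
    import org.encog.neural.som.training.basic.BasicTrainSOM;
    import org.encog.neural.som.training.basic.neighborhood.NeighborhoodRBF;

    public class OnlineSomColors {
        public static void main(String[] args) {
            Random rnd = new Random();

            // 15 random RGB samples in [-1, 1], as in the Color SOM example
            double[][] colors = new double[15][3];
            for (int i = 0; i < colors.length; i++) {
                for (int j = 0; j < 3; j++) {
                    colors[i][j] = rnd.nextDouble() * 2 - 1;
                }
            }

            // 3 inputs (RGB) mapped onto a 50x50 grid of output neurons
            SOM som = new SOM(3, 50 * 50);
            som.reset();

            NeighborhoodRBF gaussian = new NeighborhoodRBF(RBFEnum.Gaussian, 50, 50);
            BasicTrainSOM train = new BasicTrainSOM(som, 0.01, null, gaussian);
            train.setAutoDecay(1000, 0.8, 0.003, 30, 5);

            // Online training: one randomly chosen color per iteration
            for (int i = 0; i < 1000; i++) {
                MLData pattern = new BasicMLData(colors[rnd.nextInt(colors.length)]);
                train.trainPattern(pattern);
                train.autoDecay(); // shrink learning rate and neighborhood radius
            }
        }
    }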
SOMs can also be trained in batch, and it is not a difficult modification to the example (see the sketch below). If you are looking for quick convergence, then yes, you get better results. However, the batch-trained example converges very quickly to something close to a single color, so you do not get the animated convergence toward several distinct colors that most of these examples are after.
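A sketch of that batch variant, reusing the som, colors and gaussian objects from the snippet above: the 15 colors are wrapped in a BasicMLDataSet and every pattern is applied on each call to iteration() (again assuming Encog 3's BasicTrainSOM API):

    // additional imports: org.encog.ml.data.MLDataSet,
    //                     org.encog.ml.data.basic.BasicMLDataSet

    // Batch training: the whole 15-color set is presented on every iteration
    MLDataSet trainingSet = new BasicMLDataSet(colors, null);
    BasicTrainSOM batch = new BasicTrainSOM(som, 0.01, trainingSet, gaussian);
    batch.setAutoDecay(1000, 0.8, 0.003, 30, 5);

    for (int i = 0; i < 1000; i++) {
        batch.iteration();   // adjust the map using every pattern in the set
        batch.autoDecay();   // decay learning rate and neighborhood radius
    }
    System.out.println("Final error: " + batch.getError());

With every pattern applied each iteration, the map settles almost immediately, which is the quick, near single-color convergence described above.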