I am dipping my toe into neural networks, starting with some basic perceptrons. In one video, the presenter explains how to build a machine that can 'learn' to distinguish two arrays. He explains the training process, but just shoves all of his inputs and weights into the sigmoid function. I did some research on the sigmoid function and was wondering why it is used in machine learning and why programmers apply it to their inputs.
This function's job is to squash numbers into the range between 0 and 1, usually for supervised classification problems. For example, in a binary classification problem where there are only two labels (as in the picture below), a single data point that lies far from the others can pull the separator line too much if you fit the raw outputs directly.
But when we pass the output through the sigmoid function, a data point far from the others won't affect the separator too much, because the sigmoid saturates: beyond a certain distance from the boundary, every point maps to a value very close to 0 or 1.
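To see the squashing and saturation concretely, here is a minimal sketch of the sigmoid in plain Python (the function name and sample inputs are just for illustration):

```python
import math

def sigmoid(x):
    """Squash any real number into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Outputs saturate quickly: large positive inputs approach 1,
# large negative inputs approach 0.
print(sigmoid(0))    # 0.5 -- a point on the boundary is maximally uncertain
print(sigmoid(4))    # ~0.982
print(sigmoid(-4))   # ~0.018
print(sigmoid(100))  # ~1.0 -- an extreme outlier barely moves the output
```

Because `sigmoid(4)` and `sigmoid(100)` are both essentially 1, an outlier sitting 100 units from the boundary contributes almost the same error signal as a point sitting 4 units away, which is why it can't drag the separator around.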
This function can also give you a probability. For example, when you get a new data point to predict, you can plug its (signed) distance from the separator line into the sigmoid and read off how likely it is that the point belongs to a given label. (Take a look at the picture to understand better.)
picture Link : https://pasteboard.co/IgLjcYN.jpg
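As a rough sketch of that prediction step, suppose we already have a trained separator line defined by weights `w` and bias `b` (the numbers below are made up for illustration, not from the picture):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical weights and bias of an already-trained separator line.
w = [1.5, -2.0]
b = 0.3

def predict_proba(features):
    """Signed distance from the separator, squashed into a probability."""
    z = sum(wi * xi for wi, xi in zip(w, features)) + b
    return sigmoid(z)

# z = 1.5*2.0 + (-2.0)*0.5 + 0.3 = 2.3, and sigmoid(2.3) is about 0.909,
# so the model is ~91% confident this point belongs to the positive label.
p = predict_proba([2.0, 0.5])
print(f"P(label = 1) = {p:.3f}")
```

Points right on the line get `z = 0` and therefore probability 0.5, which matches the intuition that the separator is exactly where the model is undecided.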