Are logistic and linear regressions special cases of a neural network?
Yes: a neural network can be configured to perform either logistic regression or linear regression.
In either case, the neural network has exactly one trainable layer (the output layer), and that layer has exactly one neuron: an operator that computes the affine transformation W * x + b and then applies an activation function. The two cases differ only in that activation function.
For logistic regression, the output layer uses a sigmoid activation function, producing a floating-point number in the open interval (0, 1) that can be interpreted as the probability of the positive class. You can make a binary decision by applying a threshold of 0.5 to that value.
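A minimal sketch of this single-neuron setup, using hypothetical weights (in practice W and b would be learned during training):

```python
import math

def logistic_neuron(w, b, x):
    """Single-neuron 'network': affine step W * x + b, then sigmoid activation."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b  # affine: W * x + b
    return 1.0 / (1.0 + math.exp(-z))             # sigmoid squashes to (0, 1)

# Hypothetical weights chosen for illustration, not trained values.
w, b = [1.5, -2.0], 0.25
p = logistic_neuron(w, b, [2.0, 1.0])   # z = 1.25, so p = sigmoid(1.25) ≈ 0.78
label = 1 if p >= 0.5 else 0            # binary decision at threshold 0.5
```

The forward pass is exactly the logistic regression model: an affine score squashed by a sigmoid, thresholded to make the class decision.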
For linear regression, there is typically no activation function at the output layer (equivalently, the identity activation), so the output is an unbounded floating-point number.
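The linear-regression case is the same neuron with the activation removed; again the weights here are hypothetical stand-ins for trained values:

```python
def linear_neuron(w, b, x):
    """Same single neuron, but with identity (no) activation: plain W * x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Hypothetical weights for illustration; a trained model would learn these.
w, b = [0.5, -1.0], 3.0
y = linear_neuron(w, b, [4.0, 2.0])  # 0.5*4 - 1.0*2 + 3.0 = 3.0 (unbounded output)
```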
In general, you can add hidden layers to your neural network (to introduce nonlinearity and increase learning capacity) and still perform binary classification or regression, so long as the output-layer activation is configured as described above.
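A sketch of that extension: one hidden ReLU layer for nonlinearity, while the sigmoid output layer keeps the network a binary classifier. All parameter values below are made up for illustration.

```python
import math

def relu(z):
    return max(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp_binary_classifier(x, W1, b1, w2, b2):
    """Hidden ReLU layer adds nonlinearity; sigmoid output keeps it a binary classifier."""
    hidden = [relu(sum(wij * xj for wij, xj in zip(row, x)) + bi)
              for row, bi in zip(W1, b1)]
    z = sum(wi * hi for wi, hi in zip(w2, hidden)) + b2
    return sigmoid(z)

# Hypothetical parameters: 2 inputs -> 2 hidden units -> 1 output.
W1 = [[1.0, -1.0], [0.5, 0.5]]
b1 = [0.0, -0.5]
w2 = [2.0, -1.0]
b2 = 0.1
p = mlp_binary_classifier([1.0, 2.0], W1, b1, w2, b2)  # still a probability in (0, 1)
```

Swapping the final sigmoid for no activation would turn the same architecture into a nonlinear regression model, mirroring the single-neuron cases above.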