I am learning about perceptrons and neural networks. I know that perceptrons can only classify data that is linearly separable. Does this mean that they can only classify data in a two-dimensional space? Any insights are appreciated.
No, it can classify linearly separable data in any number of dimensions.
If (x_1, x_2, …, x_n) is the 'position' of a datapoint in n-dimensional space, then it is classified according to whether x_1 * w_1 + … + x_n * w_n + b > 0, where (w_1, …, w_n) is the weight vector and b is the bias.
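A minimal sketch of that decision rule in NumPy (the weights, bias, and datapoint here are made up for illustration):

```python
import numpy as np

# Hypothetical weight vector (w_1, ..., w_n), bias b, and a 4-D datapoint
w = np.array([0.5, -1.0, 2.0, 0.25])
b = -0.3
x = np.array([1.0, 0.5, 0.2, 2.0])

# Perceptron decision rule: positive class iff w . x + b > 0
prediction = int(np.dot(w, x) + b > 0)
print(prediction)  # → 1, since w . x + b = 0.9 - 0.3 = 0.6 > 0
```

Nothing here depends on the dimension being 4; the same line of code works for any n.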
However, the perceptron effectively works with a 1-dimensional input space: it projects each datapoint onto the direction of the weight vector (w_1, …, w_n), 'ignores' all information perpendicular to that direction, and compares the projection to the threshold set by the bias b.
So no, it can classify n-dimensional data, but yes, in that sense it 'is' a 1-dimensional classifier.
(And of course it can be expanded to a multiclass classifier, but that changes the question.)
I hope this helps.