Imagine you have two Gaussian probability distributions in two dimensions. The first is centered at (0, 1) and the second at (0, -1). (For simplicity, assume they have the same variance.) Can the clusters of data points sampled from these two Gaussians be considered linearly separable?
Intuitively, it's clear that the boundary separating the two distributions is linear, namely the abscissa in our case. However, the formal requirement for linear separability is that the convex hulls of the two clusters do not overlap. This cannot be guaranteed for Gaussian-generated clusters, since their underlying probability distributions have support on all of R^2 (albeit with negligible probability far from the means).
So, are Gaussian-generated clusters linearly separable? How can one reconcile the convex-hull requirement with the fact that a straight line is the only conceivable "boundary"? Or does the boundary effectively cease to be linear once unequal variances come into the picture?
Gaussian-generated clusters might or might not be linearly separable. Separability depends on the outcome (the sampled points), not on the process that generated them.
Linear separability can be defined as the existence of a hyperplane (a line, in the two-dimensional case) such that one set of points lies entirely on one side of it and the other set lies entirely on the other side.
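To make the definition concrete, here is a minimal sketch of an exact test (assuming NumPy and SciPy are available; the function name `linearly_separable` is just illustrative): two finite point sets are strictly linearly separable iff the linear-programming feasibility problem below, in the unknowns (w, b), has a solution.

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(A, B):
    """Return True iff the point sets A and B (n x d arrays) are strictly
    linearly separable. By rescaling (w, b), strict separation is
    equivalent to: w.x + b >= 1 on A and w.x + b <= -1 on B."""
    d = A.shape[1]
    # Unknowns z = (w_1, ..., w_d, b); the objective is irrelevant,
    # we only care about feasibility.
    c = np.zeros(d + 1)
    # For a in A: -(a.w + b) <= -1.  For x in B: x.w + b <= -1.
    A_ub = np.vstack([np.hstack([-A, -np.ones((len(A), 1))]),
                      np.hstack([ B,  np.ones((len(B), 1))])])
    b_ub = -np.ones(len(A) + len(B))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.status == 0  # 0 = feasible (separable), 2 = infeasible
```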
Now take your specific Gaussian distributions. It is possible that they generate two linearly separable sets (whether split along the abscissa or not). However, if the variance is non-zero, then with probability 1 the result will eventually fail to be linearly separable as the processes generate more and more points.
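To illustrate this, a small simulation (reusing the hypothetical `linearly_separable` helper sketched above, with your means (0, 1) and (0, -1) and unit variance as assumed parameters) shows the fraction of separable draws dropping toward zero as the sample size grows:

```python
rng = np.random.default_rng(0)
trials = 200
for n in (5, 20, 100, 500):
    # Count how often n points per Gaussian happen to be separable.
    sep = sum(
        linearly_separable(rng.normal((0, 1), 1.0, size=(n, 2)),
                           rng.normal((0, -1), 1.0, size=(n, 2)))
        for _ in range(trials)
    )
    print(f"n={n:4d}: separable in {sep}/{trials} trials")
```

For small n the clusters are often separable by chance; for large n, essentially never.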
So, again, it is a question of the outcome, not of the process.