I am trying to use features from a Deep Neural Network (DNN) to train a Least Squares SVM (LS-SVM). The standard procedure for solving the LS-SVM is to invert the kernel matrix. However, the kernel matrix built from the DNN features is not full rank. Does anyone know how to transform this sparse matrix into a full-rank matrix without losing much information? Is PCA a good candidate solution to reduce the input dimension and make it dense?
You seem to be confusing things. The kernel matrix is a matrix of pairwise dot products; its entries are not features. If your feature matrix F is sparse and of size N x H, then your kernel matrix (using a linear kernel on top of this feature space) is simply:
K = F F'
which is N x N and dense, so there is no problem applying any kind of SVM.
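A minimal NumPy sketch of this point (the matrix sizes and the regularization constant gamma are illustrative, not taken from the question): even when K = F F' is rank-deficient because H < N, the ridge-regularized system that LS-SVM actually solves, K + I/gamma, is positive definite and hence invertible.

```python
import numpy as np

# Hypothetical DNN feature matrix F (N samples x H features).
# With N=5 and H=3, rank(F) <= 3 < 5, so K below cannot be full rank.
rng = np.random.default_rng(0)
F = rng.standard_normal((5, 3))

# Linear kernel on top of the feature space: K = F F', an N x N dense matrix.
K = F @ F.T

print(np.linalg.matrix_rank(K))  # 3: K is rank-deficient, as in the question

# LS-SVM systems add a ridge term, K + I/gamma, which is positive definite
# and therefore invertible even when K itself is singular.
gamma = 10.0
K_reg = K + np.eye(K.shape[0]) / gamma
print(np.linalg.matrix_rank(K_reg))  # 5: full rank, safe to invert
```

In other words, the rank deficiency of K is expected whenever the feature dimension is smaller than the number of samples, and the LS-SVM formulation already handles it through regularization; no PCA preprocessing is needed for invertibility.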