I have a dilemma: I'm using one-hot encoding and I need to do feature selection (for categorical and numerical features). I have some features that aren't really important, but I want to use some algorithm to do it, not manually. My question is twofold -
If you have many features, and potentially many of these are irrelevant to the model, feature selection will enable you to discard them and limit your dataset to the most relevant features.
Below are a few key aspects to consider in these cases:
This is often a crucial step when you're working with large datasets. Blindly one-hot encoding all categorical features, for instance, might lead to a massive dataframe which perhaps cannot even be stored in memory, let alone be used for an ML model. In such cases, you'll probably need to reduce the number of features to encode, or look into other categorical encoders such as Bayesian encoders (see the last section of the answer).
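To illustrate the blow-up, here's a minimal sketch with made-up data (column names and sizes are hypothetical) showing how one-hot encoding a single high-cardinality column explodes the number of columns:

```python
import numpy as np
import pandas as pd

# Hypothetical data: one high-cardinality categorical column
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "city": rng.choice([f"city_{i}" for i in range(5000)], size=100_000),
    "price": rng.normal(size=100_000),
})

# One-hot encoding that single column creates ~5000 new columns
encoded = pd.get_dummies(df, columns=["city"])
print(encoded.shape)  # roughly (100000, 5001)
```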
One negative aspect of not doing feature selection, very eloquently put here, is that when you have many highly correlated features, the feature importances you get for them may not be indicative of their actual relevance.
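A quick toy demonstration of this effect (not from the linked answer, just an illustration): duplicate a feature and look at how a random forest splits the importance between the two copies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
x2 = x1 + rng.normal(scale=0.01, size=1000)  # near-duplicate of x1
noise = rng.normal(size=1000)
X = np.column_stack([x1, x2, noise])
y = (x1 > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(model.feature_importances_)
# The importance of the underlying signal gets split between x1 and x2,
# so neither of them looks as relevant as the signal actually is
```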
Answering the second part of your question: if the features you have could be relevant and you've done some feature engineering, you can encode them, and if you end up with many features you can then perform feature selection to reduce the dimensionality of the resulting dataset. There are many feature selection techniques; you can find a list of the ones available in scikit-learn in Feature selection.
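As a minimal sketch of how this could look in scikit-learn, you can one-hot encode the categoricals and select the top features inside a single pipeline (the column names and the value of `k` below are placeholders for your own data):

```python
from sklearn.compose import ColumnTransformer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Placeholder column names; replace with your own
categorical_cols = ["colour", "country"]
numerical_cols = ["age", "income"]

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
    ("num", "passthrough", numerical_cols),
])

pipeline = Pipeline([
    ("preprocess", preprocess),
    # keep the 20 highest-scoring features (k must not exceed the
    # total number of encoded features)
    ("select", SelectKBest(mutual_info_classif, k=20)),
    ("model", LogisticRegression(max_iter=1000)),
])

# pipeline.fit(X_train, y_train)
```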
Based on some of the comments...
Firstly, since you mention using the LabelEncoder in the comments, bear in mind that this encoder is only intended for the label, not for the features! See LabelEncoder for categorical features?
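A small sketch of the difference on toy data: LabelEncoder goes on the target, while a feature encoder such as OneHotEncoder goes on X.

```python
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# LabelEncoder is meant for the target vector y, not the feature matrix
y = ["spam", "ham", "spam"]
y_encoded = LabelEncoder().fit_transform(y)  # array([1, 0, 1])

# For categorical *features*, use a feature encoder such as OneHotEncoder
X = [["red"], ["blue"], ["red"]]
X_encoded = OneHotEncoder().fit_transform(X)  # sparse one-hot matrix
```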
For the categorical features, if they have a high cardinality, you may be better off looking into Bayesian encoders. See this related question: How to encode a categorical feature with high cardinality?
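As a rough sketch, assuming the category_encoders package is installed, a target (Bayesian) encoder keeps a high-cardinality column as a single numeric column instead of thousands of one-hot columns:

```python
import pandas as pd
import category_encoders as ce  # pip install category_encoders

# Toy high-cardinality feature and binary target
X = pd.DataFrame({"zip_code": ["10001", "94105", "10001", "60601"]})
y = [1, 0, 1, 0]

# TargetEncoder replaces each category with a smoothed mean of the target,
# so the column stays a single numeric feature regardless of cardinality
encoder = ce.TargetEncoder(cols=["zip_code"])
X_encoded = encoder.fit_transform(X, y)
print(X_encoded)
```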