Having read about random undersampling, random oversampling, and SMOTE, I am trying to understand what methodology is used by the default implementation in the sklearn package for Logistic Regression and Random Forest. I have checked the documentation here, which says:
> The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`
but I am not able to understand whether it undersamples the majority class or oversamples the minority class to create a balanced set.
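The formula quoted from the docs can be checked directly on a toy label vector; scikit-learn also exposes it through the `compute_class_weight` utility (a minimal sketch, using a made-up 4-vs-2 class split):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Toy imbalanced labels: 4 samples of class 0, 2 samples of class 1
y = np.array([0, 0, 0, 0, 1, 1])

# The formula quoted from the docs: n_samples / (n_classes * np.bincount(y))
manual = len(y) / (len(np.unique(y)) * np.bincount(y))

# scikit-learn's own utility for the 'balanced' mode
auto = compute_class_weight(class_weight="balanced", classes=np.unique(y), y=y)

print(manual)  # class 0 -> 0.75, class 1 -> 1.5
print(auto)    # same values
```

Note that both give the rarer class the larger weight; no samples are added or removed.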
The two approaches are very different.
SMOTE alters the data itself: it balances the dataset by oversampling, meaning it generates new synthetic samples that look similar to the existing minority-class samples (by interpolating between them). So a new, larger dataset is created.
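The interpolation idea can be sketched in a few lines. This is a toy illustration, not the actual SMOTE implementation from the imbalanced-learn library (which interpolates toward k-nearest neighbours rather than random pairs); the function name and data are made up:

```python
import numpy as np

def smote_sketch(X_min, n_new, seed=0):
    """Toy SMOTE-style oversampling: create n_new synthetic minority samples
    by linear interpolation between random pairs of existing minority samples."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X_min), size=n_new)   # base samples
    j = rng.integers(0, len(X_min), size=n_new)   # their "neighbours"
    gap = rng.random((n_new, 1))                  # interpolation factor in [0, 1)
    return X_min[i] + gap * (X_min[j] - X_min[i])

# Two minority samples, oversampled with 4 extra synthetic points
X_min = np.array([[0.0, 0.0], [1.0, 1.0]])
X_new = smote_sketch(X_min, n_new=4)
print(X_new.shape)  # (4, 2) -- new points lie on segments between the originals
```

The synthetic points are then appended to the original data, so the classifier trains on a balanced, enlarged dataset.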
In LR, `class_weight='balanced'` does not balance the dataset and does not create any new data. It simply penalizes misclassifications of the minority class more heavily in the loss function, so the model is pushed to pay more attention to that class. That is why the parameter is called `class_weight`.