There is a standard undersampling technique:
# Randomly undersample the majority class
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rf = RandomForestClassifier(random_state=42)
rus = RandomUnderSampler(random_state=42)
X_train_rus, y_train_rus = rus.fit_resample(X_train, y_train)
rus_model = rf.fit(X_train_rus, y_train_rus)
rus_prediction = rus_model.predict(X_test)
# Check the model performance
print(classification_report(y_test, rus_prediction))
It would make sense to make the undersampling non-random and remove records only from the majority class using a custom removal criterion. For example, remove data points whose predicted probability of belonging to the majority class is between 0.51 and 0.75. The idea is to improve predictability of the minority class without sacrificing too much of the predictability of the majority class.
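A minimal sketch of that criterion, assuming a preliminary model fitted on the training set supplies the class probabilities; the variable names, the synthetic dataset, and the 0.51-0.75 band are illustrative, not a fixed recipe:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative imbalanced dataset; class 0 is the majority
X_train, y_train = make_classification(
    n_samples=2000, weights=[0.9, 0.1], random_state=42
)
majority_label = 0

# Fit a preliminary model to score each training point
probe = RandomForestClassifier(random_state=42).fit(X_train, y_train)
p_majority = probe.predict_proba(X_train)[:, majority_label]

# Drop only majority-class records whose majority-class probability
# falls in the "uncertain" band (0.51, 0.75); keep everything else
drop = (y_train == majority_label) & (p_majority > 0.51) & (p_majority < 0.75)
X_train_cus, y_train_cus = X_train[~drop], y_train[~drop]

print(f"removed {drop.sum()} of {(y_train == majority_label).sum()} majority records")
```

One caveat: probabilities scored on the same data the probe was fitted on are optimistic; out-of-fold scores (e.g. `cross_val_predict(..., method="predict_proba")`) would give a more honest removal criterion.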
We could call it an improvement on the near-miss method, removing the randomness from the selection. What do you think?