Variable Importance Plot #228
Part (c) asks us to use the bagging approach, but `BaggingRegressor` doesn't have a `feature_importances_` attribute. To work around that, I took the mean of the feature importances of the individual fitted trees. I tried to look into the scikit-learn documentation; in the loop `for f in range(X.shape[1]):`, to put the labels in the results we need to replace `indices[f]` with `feature[f]`. I am still working on this issue and would appreciate your feedback. Thanks.
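The averaging workaround described above can be sketched like this (a minimal example with synthetic data, not the poster's exact code): since `BaggingRegressor` exposes the fitted trees via its `estimators_` attribute, we can average their per-tree `feature_importances_`.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression data standing in for the problem-set data.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)

bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0)
bag.fit(X, y)

# BaggingRegressor has no feature_importances_ attribute, so average the
# importances of the individual fitted trees in estimators_.
importances = np.mean(
    [tree.feature_importances_ for tree in bag.estimators_], axis=0
)
```

Because each tree's importances sum to 1, their mean also sums to 1, so the result is directly comparable to what `RandomForestRegressor.feature_importances_` would give.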
Hello friends!
I know some of you had questions about 'variable importance' measures you were asked to obtain in PS6. Variable importance is basically a measure of how 'informative' a certain variable is. See http://blog.datadive.net/selecting-good-features-part-iii-random-forests/ for an intuitive explanation.
Here's how you would find it using scikit-learn:
Consider the Random Forest part of Dr. Evans's Trees.ipynb
We already have:
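(The original code block from the notebook is missing here; a minimal stand-in for a fitted forest, with hypothetical data and names, might look like this.)

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Hypothetical stand-in for the data and fitted model from Trees.ipynb.
X, y = make_regression(n_samples=200, n_features=6, random_state=0)
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X, y)
```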
First, we find variable importance measures
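(The original snippet is missing; this sketch, using synthetic data, shows the standard scikit-learn pattern of reading `feature_importances_` off a fitted forest and ranking the features with `np.argsort`.)

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=6, random_state=0)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]  # features ranked most to least important

# Print the ranking; with real data you would replace indices[f] with the
# corresponding feature name, as discussed above.
for f in range(X.shape[1]):
    print(f"{f + 1}. feature {indices[f]} ({importances[indices[f]]:.4f})")
```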
Then, we plot them!
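(The plotting code is also missing; a self-contained sketch with matplotlib, assuming the same synthetic setup, would be along these lines.)

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=6, random_state=0)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]

# Bar plot of importances, sorted from most to least important.
fig, ax = plt.subplots()
ax.bar(range(X.shape[1]), importances[indices])
ax.set_xticks(range(X.shape[1]))
ax.set_xticklabels(indices)  # with real data, use feature names here instead
ax.set_xlabel("Feature")
ax.set_ylabel("Importance")
ax.set_title("Variable importances")
fig.savefig("importances.png")
```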