Introduce functionality for chunking and breaking IID experiments #20
Conversation
… different train functions
Looks mostly good to me. I left a few minor comments which should be easy to fix.
I did not go through the plotting scripts in scripts/analysis in detail; a lot of them can probably be deduplicated when we address issue #12.
```diff
 random_state: int | np.random.RandomState = 0,
-checkpoint_path: str | pathlib.Path = pathlib.Path("./"),
+checkpoint_path: str | pathlib.Path = pathlib.Path("../"),
 checkpoint_uid: str = "",
 random_state_model: int | None = None,
```
Is `random_state_model` necessary in this function (see details in comment above)?
See comment above.
Co-authored-by: fluegelk <[email protected]>
…I-Energy/special-couscous into feature/breaking_iid
This PR introduces functionality for the chunking and breaking IID experiments. In particular, the evaluation has been extended to calculate and save local and global confusion matrices in order to enable calculation of arbitrary metrics for the breaking IID experiments.
The following changes have been made:

- The dataset generated by `generate_and_distribute_synthetic_dataset` without local or global imbalances now equals the completely balanced dataset generated with `make_classification_dataset` when using the same random state. This ensures comparability of the strong scaling experiment series with and without chunking, as the same datasets are created when passing the same random state.
- The extended evaluation is used in both `train_parallel_on_synthetic_data` and `train_parallel_on_balanced_synthetic_data`; it was completely missing in the former case. In addition, the argument parser was lacking some of the keyword arguments of `sklearn`'s `make_classification` and `train_test_split` used under the hood.
- The `train` module was split into `train_serial` and `train_parallel`.

The plotting scripts are still kind of messy with many code redundancies. I will fix this in the future; for now, I would like to prioritize the things required to run the experiments. If the PR is too messy, please just tell me 🙈.
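To illustrate the confusion-matrix-based evaluation described above, here is a minimal, self-contained sketch (the helper names are hypothetical, not the actual `specialcouscous` API): each rank computes a local confusion matrix, the local matrices are summed element-wise into a global one, and arbitrary metrics can then be derived from the global matrix after the fact.

```python
import numpy as np
from sklearn.metrics import confusion_matrix


def global_confusion_matrix(local_matrices):
    """Aggregate per-rank local confusion matrices by element-wise summation."""
    return np.sum(local_matrices, axis=0)


def accuracy_from_confusion(cm):
    """Derive accuracy from a confusion matrix; any metric works similarly."""
    return np.trace(cm) / cm.sum()


# Two hypothetical "ranks", each evaluating its local test samples
# over the same fixed set of three classes.
labels = [0, 1, 2]
local_a = confusion_matrix([0, 1, 2, 2], [0, 1, 1, 2], labels=labels)
local_b = confusion_matrix([0, 0, 1, 2], [0, 1, 1, 2], labels=labels)

global_cm = global_confusion_matrix([local_a, local_b])
print(accuracy_from_confusion(global_cm))  # → 0.75
```

In a distributed setting, the element-wise summation would typically happen via an allreduce over the ranks; passing an explicit `labels` list ensures all local matrices share the same shape and class ordering so they can be summed safely.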
Notes to self:
`sklearn`'s `RandomForestClassifier` internally uses weighted voting in its `predict()` method, i.e., the predicted class of an input sample is a vote by the trees in the forest, weighted by their probability estimates. The predicted class thus is the one with the highest mean probability estimate across the trees. As the `DistributedRandomForest` class in `specialcouscous` only implements plain voting, I also implemented plain voting for the calculation of the local confusion matrices instead of using `predict()`, in order to ensure consistency and comparability.
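The difference between the two voting schemes can be sketched as follows. This is a standalone illustration, not the `specialcouscous` implementation: weighted (soft) voting takes the argmax of the mean probability estimate across trees, which is what `RandomForestClassifier.predict()` does internally, while plain (hard) voting takes a majority vote over the trees' individual class predictions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Weighted (soft) voting, as used internally by predict():
# argmax of the mean probability estimate across all trees.
soft = np.argmax(forest.predict_proba(X), axis=1)

# Plain (hard) voting: each tree casts one vote for its predicted class,
# and the majority class wins (ties resolved towards the lower class index).
tree_preds = np.stack([tree.predict(X) for tree in forest.estimators_])
hard = np.apply_along_axis(
    lambda votes: np.bincount(votes.astype(int)).argmax(), 0, tree_preds
)
```

The two schemes can disagree on samples where a minority of trees is highly confident; computing the local confusion matrices from the hard votes keeps them consistent with a distributed forest that only aggregates class votes, not probabilities.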