Added predict_ensemble to all model classes #1
Conversation
…refactor script yet
@@ -16,7 +16,7 @@
 from argparse import ArgumentParser

 from ptype.callbacks import MetricsCallback
-from ptype.data import load_ptype_data_day, preprocess_data
+from ptype.data import load_ptype_uq, preprocess_data
Not sure if we want to make ptype a dependency of miles-guess. Can you copy the functions over to this file instead of importing them from ptype?
I had `load_ptype_uq` here but moved it to ptype recently :)
One thing I could do is try to import ptype, catch the ImportError, and then pip install ptype? The training scripts still depend on the models in other ways (such as how we split the data, preprocess, etc.).
We can declare optional dependencies in the pyproject.toml file, so I would suggest throwing an exception on the failed import with a helpful error message telling the user to install ptype or to run the pip install with optional dependencies included.
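A minimal sketch of that pattern, assuming ptype stays an optional extra (the extra name `miles-guess[ptype]` is illustrative, not an existing one):

```python
try:
    from ptype.data import load_ptype_uq, preprocess_data
except ImportError as e:
    # Re-raise with an actionable message instead of a bare ImportError.
    raise ImportError(
        "ptype is an optional dependency of this script; install it with "
        "`pip install ptype`, or reinstall miles-guess with its optional "
        "dependencies included (e.g. `pip install 'miles-guess[ptype]'`)."
    ) from e
```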
I made some comments regarding the categorical evidential model which I've been using with p-type.
class CategoricalDNN(object):
    """
    A Dense Neural Network Model that can support arbitrary numbers of hidden layers.

    Attributes:
Some suggestions on the Categorical Evidential model based on some test usage:

- Can remove the `classifier` argument from the init, as it is not used.
- The interaction between `callbacks` and `annealing_coefficient` is a bit wonky. Currently, with a coefficient of 34, for example, you need both `annealing_coefficient=34` and `callbacks=[ReportEpoch(34)]`, which is awkward. Perhaps you could add the callback behind the scenes but still allow other callbacks to be appended (see the sketch after this list).
- Currently, to use the evidential model, you need to set `loss="dirichlet"` and `output_activation="linear"`. It would be nice if we could instead label the loss "evidential" and handle the output layer in the background.
- The `class_weights` argument has no effect (even setting all the weights to zero produces the same output). I know that line is commented out here, but I have tested this, and I do not understand why it doesn't work. Also, in some of the other model classes, you use the weights with the `loss_weights=` argument in the model compile stage; I don't believe this is the same as `class_weights` in the fit stage. I will make a separate comment.
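A minimal sketch of what the second suggestion could look like. This is an assumption about the class internals: `self.model`, `self.annealing_coefficient`, and the `fit` signature are guesses, and `ReportEpoch` is the callback named above.

```python
def fit(self, x, y, **kwargs):
    # Build the callback list internally so callers only need to pass
    # annealing_coefficient; user-supplied callbacks are still honored.
    callbacks = list(kwargs.pop("callbacks", []))
    if self.loss == "dirichlet":  # evidential configuration
        callbacks.append(ReportEpoch(self.annealing_coefficient))
    return self.model.fit(x, y, callbacks=callbacks, **kwargs)
```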
I am going to merge this PR soon and open a new one focused on the Categorical model! Hopefully it will involve far fewer files to review than this one.
            optimizer=self.optimizer_obj,
            loss=self.loss,
            loss_weights=self.loss_weights,
            metrics=metrics,
The `loss_weights` argument here is not the same as `class_weights` in the `.fit()` method, which I think is your intention.
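For concreteness, a minimal standard-Keras sketch of the difference; the model and data here are toy placeholders, and note that Keras's fit-time argument is spelled `class_weight`:

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(4, activation="softmax")])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              loss_weights=[1.0])  # scales whole loss terms (multi-output use)

x = np.random.rand(32, 8).astype("float32")
y = keras.utils.to_categorical(np.random.randint(0, 4, size=32), num_classes=4)
# class_weight reweights the loss per class, which is the imbalance fix.
model.fit(x, y, class_weight={0: 1.0, 1: 2.0, 2: 0.5, 3: 1.5},
          epochs=1, verbose=0)
```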
I will test again to see whether this actually works (despite Keras saying it does); if not, it will be removed. But I will add class_weights as an option for when the model is a standard classifier and not evidential (we could enable the experimental class weights I put into the dirichlet loss, but I cannot confirm that is really correct ...)
            y_prob = model_instance.predict(x, batch_size=batch_size)
            predictions[i + 1] = y_prob

        return predictions

    def compute_uncertainties(self, y_pred, num_classes=4):
        return calc_prob_uncertainty(y_pred, num_classes=num_classes)
In the other model classes, I think `calc_prob_uncertainties()` is a method within the class, whereas here it is a function outside the class. It would be better if they were consistent.
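For reference, a sketch of what folding it in as a method might look like; the Dirichlet math below follows the standard evidential formulation and is only an assumption about what `calc_prob_uncertainty` computes:

```python
import numpy as np

class CategoricalDNN:
    def compute_uncertainties(self, y_pred, num_classes=4):
        # Assumed evidential post-processing: turn raw network outputs
        # into Dirichlet parameters, then report expected class
        # probabilities plus a per-sample uncertainty.
        evidence = np.maximum(y_pred, 0.0)            # non-negative evidence
        alpha = evidence + 1.0                        # Dirichlet concentration
        strength = alpha.sum(axis=-1, keepdims=True)  # total evidence + K
        probs = alpha / strength                      # expected probabilities
        uncertainty = num_classes / strength          # large when evidence is low
        return probs, uncertainty
```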
We are going to merge in the current changes and address Charlie's categorical comments and remaining refactoring issues in another pull request.
Initialize the `this_epoch_num` variable in the `ReportEpoch` and `DirichletEvidentialLoss` classes