Add a general torch CompositionModel #280
Conversation
I worked around one issue in the docs (metatensor/metatensor#728), but there seems to be another one now.
I wonder if that's related to the release of torch v2.4.1.
Hmm, actually, this can also be caused by PYTORCH_JIT=0. I'll find another workaround.
Very cool. Thanks for trying to finish this.
Once the CI is fixed, I can give some more comments.
if selected_atoms is not None:
    composition_energies[output_key] = metatensor.torch.slice(
        composition_energies[output_key], "samples", selected_atoms
    )
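Conceptually, this slice keeps only the rows of a per-atom energy table whose sample labels appear in `selected_atoms`. A plain-Python sketch of that operation, with (system, atom) tuples standing in for metatensor `Labels` (the data layout and the `slice_samples` name are illustrative, not the metatensor API):

```python
def slice_samples(samples, values, selected_atoms):
    """Keep entries whose (system, atom) label is listed in selected_atoms.

    samples: list of (system, atom) labels; values: parallel list of energies.
    """
    keep = set(selected_atoms)
    kept = [(s, v) for s, v in zip(samples, values) if s in keep]
    new_samples = [s for s, _ in kept]
    new_values = [v for _, v in kept]
    return new_samples, new_values


# drop the middle atom of system 0, keeping atoms 0 and 2
samples, values = slice_samples(
    [(0, 0), (0, 1), (0, 2)], [-0.5, -0.5, -75.0], [(0, 0), (0, 2)]
)
# samples == [(0, 0), (0, 2)], values == [-0.5, -75.0]
```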
I think we can also move this inside the CompositionModel itself?
There is one solution in Guillaume's LJ example. I recreated the system in code:
# select only real atoms and discard ghosts
if selected_atoms is not None:
    current_system_mask = selected_atoms.column("system") == system_i
    current_atoms = selected_atoms.column("atom")
    current_atoms = current_atoms[current_system_mask].to(torch.long)
    system_final = System(
        types=system.types[current_atoms],
        positions=system.positions[current_atoms],
        cell=system.cell,
    )
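The masking logic above can be sketched without torch: select the atom indices of the current system from the selected-atoms list, then index the per-atom arrays with the survivors. Plain lists stand in for tensors here; `select_real_atoms` and its signature are illustrative, not the actual metatrain code:

```python
def select_real_atoms(selected_atoms, system_i, types, positions):
    """Keep only the atoms of system `system_i` listed in `selected_atoms`.

    selected_atoms: list of (system, atom) index pairs, standing in for
    the Labels object in the snippet above.
    """
    # mask rows belonging to the current system, keep their atom indices
    current_atoms = [atom for sys, atom in selected_atoms if sys == system_i]
    # index types/positions with the surviving atom indices
    return (
        [types[i] for i in current_atoms],
        [positions[i] for i in current_atoms],
    )


# system 0 keeps atoms 0 and 1; the ghost atom (index 2) is dropped
types, positions = select_real_atoms(
    [(0, 0), (0, 1), (1, 0)], 0, [1, 8, 8], [[0.0], [1.0], [2.0]]
)
# types == [1, 8], positions == [[0.0], [1.0]]
```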
Thanks, I did it in a slightly different way to avoid label conflicts, but it's now there.
Yes, nice implementation.
from ....utils.data.dataset import Dataset, get_atomic_types


def calculate_composition_weights(
I think this is not needed anymore?
Unfortunately, this needs to be kept for the alchemical model (note that it changed directories). That model works in a way that doesn't allow me to change things here without also changing the alchemical model code itself.
Okay!
if not self.training:
    # at evaluation, we also add the composition contributions
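The pattern here mirrors `torch.nn.Module.training`: composition offsets are skipped while training and added back at evaluation. A minimal dependency-free sketch of that split (the class and attribute names are illustrative, not the metatrain API):

```python
class CompositionStub:
    """Toy model that adds per-species energy offsets only in eval mode."""

    def __init__(self, composition_weights):
        # energy offset per atomic type, e.g. {1: H, 8: O}
        self.composition_weights = composition_weights
        self.training = True  # mimics torch.nn.Module.training

    def forward(self, types, interaction_energy):
        energy = interaction_energy
        if not self.training:
            # at evaluation, we also add the composition contributions
            energy += sum(self.composition_weights[t] for t in types)
        return energy


model = CompositionStub({1: -0.5, 8: -75.0})
e_train = model.forward([1, 1, 8], 0.1)  # training mode: no offsets added
model.training = False
e_eval = model.forward([1, 1, 8], 0.1)   # eval mode: offsets added
```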
NICE!
Closes #269
Creates an API-compatible torch.nn.Module for fitting and predicting composition weights, based on the calculate_composition_weights() function. Currently, this is the bare implementation; before we include it in the other architectures, I would like to have some feedback.
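Conceptually, fitting composition weights means solving a least-squares problem: one energy offset per atomic type, regressed on the per-structure species counts. A from-scratch illustration of that idea, not the actual `calculate_composition_weights()` implementation:

```python
def fit_composition_weights(compositions, energies):
    """Solve min ||C w - E||^2 via the normal equations (C^T C) w = C^T E.

    compositions: per-structure species counts, e.g. [[2, 1], [1, 2], ...]
    energies:     total energies, one per structure
    """
    n = len(compositions[0])
    # build A = C^T C and b = C^T E
    A = [[sum(c[i] * c[j] for c in compositions) for j in range(n)]
         for i in range(n)]
    b = [sum(c[i] * e for c, e in zip(compositions, energies))
         for i in range(n)]
    # naive Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back-substitution
    w = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(A[row][c] * w[c] for c in range(row + 1, n))
        w[row] = (b[row] - s) / A[row][row]
    return w


# energies are exactly linear in composition, so the fit recovers
# the per-species offsets [-0.5, -75.0]
weights = fit_composition_weights(
    [[2, 1], [1, 2], [3, 0]], [-76.0, -150.5, -1.5]
)
```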
Contributor (creator of pull-request) checklist
- alchemical_model
- GAP
- SOAP-BPNN
📚 Documentation preview 📚: https://metatrain--280.org.readthedocs.build/en/280/