Add training callback to send predictions to WandB table #521
Conversation
Today's progress update: https://www.loom.com/share/1d1cb34d846440e2a5258fd3593a9e80
```python
prompt_encoding = tokenizer(
    prompt_texts, padding=True, return_tensors="pt"
).to(self.cfg.device)
predictions = trainer.model.generate(
```
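The diff excerpt cuts off mid-call; purely as a hedged sketch (the `max_new_tokens` value and the decode step are assumptions, not the PR's exact code), the generation and decoding might continue along these lines:

```python
# Sketch only: argument values are assumptions, not copied from the PR.
predictions = trainer.model.generate(
    **prompt_encoding, max_new_tokens=128
)
# Decode the generated token ids back into text for the wandb table.
prediction_texts = tokenizer.batch_decode(
    predictions, skip_special_tokens=True
)
```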
`trainer.prediction_step(...)` might be easier to use
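(For context, a minimal sketch of that suggestion, assuming a tokenized batch `inputs` is in scope; note that `prediction_step` returns teacher-forced logits from a single forward pass rather than autoregressive generations, which may explain the odd outputs described below.)

```python
# Hugging Face Trainer.prediction_step runs one forward pass and
# returns (loss, logits, labels) when prediction_loss_only=False.
loss, logits, labels = trainer.prediction_step(
    trainer.model, inputs, prediction_loss_only=False
)
# Argmax over the vocabulary yields the next-token prediction at each
# position (teacher-forced), not a free-running continuation.
pred_ids = logits.argmax(dim=-1)
pred_texts = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
```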
It was. However, I had that previously and it appeared to output strange predictions. At one point I actually had both to compare, and only `model.generate` was useful.

I'm struggling to find the WandB report. I'll try again and see how it goes.
Added both now
Also, the pre-commit checks are failing; just run `pre-commit run --all-files` and commit the fixes.
Thanks!

PS. Whenever you merge, please squash the commits. Normally I'd have nice atomic/semantically sensible commits, although this PR had a ton of quick trial and error and saving WIP on remote GPU servers 😁
So how do I enable this, and where do I set the prompts I want it to run inference on?
The new options (copied from the README):

```yaml
eval_table_size: # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0
eval_table_max_new_tokens: # Total number of tokens generated for predictions sent to wandb. Default is 128
```

For example,

```yaml
eval_table_size: 5
eval_table_max_new_tokens: 64
```

will send roughly 5 predictions to the WandB table at each evaluation, generating up to 64 new tokens for each.

The examples are currently extracted from the eval dataset, which is automatically set aside via:

```yaml
# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc. 0 for no eval.
val_set_size: 0.04
```

In the future, a separate dataset entirely could be used, providing more control. Hope this helps! Let us know how it goes testing it, thanks!
I see, ok. I am hoping to have a separate dataset of prompts, mainly because AFAIK axolotl does not allow setting a separate eval dataset; it just comes randomly from a % of the training dataset, and I have a very specific set of prompts not in the training data that I need to eval with. I also don't use normal evals at all, because I have a very specific dataset that needs every entry trained on, and I don't trust eval loss anyway, so I'd rather not section off some % of my dataset for evals.
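(That isn't supported out of the box yet, but as a stopgap a fixed prompt list could be logged with a custom callback. A minimal sketch, assuming a HF `Trainer` run with WandB already initialized; the class name, prompt list, and table key are placeholders, not part of axolotl.)

```python
import wandb
from transformers import TrainerCallback

# Placeholder prompt list; replace with your own held-out prompts.
CUSTOM_PROMPTS = ["<your held-out prompt 1>", "<your held-out prompt 2>"]

class CustomPromptTableCallback(TrainerCallback):
    """Generate from a fixed prompt list after each eval and log a wandb.Table."""

    def on_evaluate(self, args, state, control, model=None, tokenizer=None, **kwargs):
        # Batch-encode the prompts; the tokenizer needs a pad token set.
        encoding = tokenizer(
            CUSTOM_PROMPTS, padding=True, return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(**encoding, max_new_tokens=64)
        texts = tokenizer.batch_decode(outputs, skip_special_tokens=True)
        # Build and log one table row per prompt at the current step.
        table = wandb.Table(columns=["step", "prompt", "prediction"])
        for prompt, text in zip(CUSTOM_PROMPTS, texts):
            table.add_data(state.global_step, prompt, text)
        wandb.log({"custom_predictions": table}, step=state.global_step)
```

It would be registered via `Trainer(..., callbacks=[CustomPromptTableCallback()])`.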
Add training callback to send predictions to WandB table (…cloud#521)

* WIP Add training callback to send predictions to WandB table
* WIP improve wandb table reporting callback
* WIP improve wandb table reporting callback (cont)
* Add VSCode launching for debugging
* Add tiny llama example
* WIP attempt to improve post-eval prediction generation for table
* WIP attempt to improve post-eval prediction generation for table - part 2
* WIP batch generation
* WIP attempt to handle sample_packing using position_ids for wandb prediction table
* WIP add code for debugging
* Fix sample_packing support for wandb prediction table
* Clean up code for PR review
* Add eval_table_size, eval_table_max_new_tokens configs & clean up code
* Clean up PR, delete VSCode config, add tiny-llama example
* Add eval_table_size, eval_table_max_new_tokens documentation. Fix linting/formatting
How to use: #521 (comment)
Closes #490
What's New?
See examples:
https://www.loom.com/share/acaa23516b524aa29328b87b90f82599?sid=e2796761-d398-46a8-96a9-7f965b77437c
How to configure
1️⃣ Enable WandB
2️⃣ Every eval will push updates to WandB; set `eval_steps` in config as desired.
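For reference, a minimal config sketch combining these options (the project name and values are placeholders, not from the PR):

```yaml
wandb_project: my-axolotl-runs   # placeholder; enables WandB logging
eval_steps: 50                   # how often evaluation (and the table push) runs
val_set_size: 0.04               # fraction of the dataset held out for eval
eval_table_size: 5               # approx. number of predictions logged per eval
eval_table_max_new_tokens: 64    # max new tokens generated per logged prediction
```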