[Multi-task Learning] Add multi-task trainer #849
Merged
classicsong merged 90 commits into awslabs:multi-task from classicsong:multi-task-trainer on May 30, 2024
Conversation
classicsong added the ready label (able to trigger the CI) and removed the draft label (only to be used by dev team - skips CI for small changes) on May 27, 2024
zhjwy9343 reviewed on May 29, 2024
zhjwy9343 reviewed on May 29, 2024
zhjwy9343 reviewed on May 30, 2024
zhjwy9343 approved these changes on May 30, 2024
LGTM
classicsong added a commit that referenced this pull request on May 31, 2024
*Issue #, if available:* #789

*Description of changes:*

Add support for multi-task learning. Users can define multiple tasks in the same training loop. A task can be a node classification, node regression, edge classification, edge regression, or link prediction task. Each node classification or node regression task must be defined on a single node type with one label field, but users can define multiple node classification or regression tasks on the same node type. Likewise, each edge classification or edge regression task must be defined on a single edge type with one label field, but users can define multiple edge classification or regression tasks on the same edge type. For link prediction, users can define prediction targets on multiple edge types.

### Graph construction
Update GraphStorm input config parsing to support multi-task learning, allowing users to specify multiple training tasks for a training job through the yaml file. By providing the `multi_task_learning` configurations in the yaml file, users can define multiple training tasks. The following config defines two training tasks, one for node classification and one for edge classification.

```
---
version: 1.0
gsf:
  basic:
    ...
  ...
  multi_task_learning:
    - node_classification:
        target_ntype: "movie"
        label_field: "label"
        mask_fields:
          - "train_mask_field_nc"
          - "val_mask_field_nc"
          - "test_mask_field_nc"
        task_weight: 1.0
    - edge_classification:
        target_etype:
          - "user,rating,movie"
        label_field: "rate"
        mask_fields:
          - "train_mask_field_ec"
          - "val_mask_field_ec"
          - "test_mask_field_ec"
        task_weight: 0.5 # weight of the task
```

Task-specific hyperparameters in multi-task learning are the same as those in single-task learning, except that two new configs are required: `mask_fields` and `task_weight`. `mask_fields` provides the training, validation, and test masks for the task, and `task_weight` gives its loss weight.

### DataLoader for multi-task learning
Add GSgnnMultiTaskDataLoader to support multi-task learning. When initializing a GSgnnMultiTaskDataLoader, users need to provide two inputs: 1) a list of config.TaskInfo objects recording the information of each task and 2) a list of dataloaders, one per training task. In each training iteration, GSgnnMultiTaskDataLoader calls each task dataloader to generate a mini-batch and returns the list of mini-batches to the trainer. The length of the dataloader (the number of batches per epoch) is determined by the largest task in the GSgnnMultiTaskDataLoader. #834

### Evaluator for multi-task learning
GSgnnMultiTaskEvaluator accepts a set of evaluators, in the form of a dict ({task_id: Evaluator, ...}), as input to initialize the multi-task evaluator. When doing evaluation, it accepts three arguments: val_results, test_results, and total_iters. val_results and test_results are dicts in the format {task_id_0: result, task_id_1: result, ...}. GSgnnMultiTaskEvaluator calls the task-specific evaluator of each task to compute its evaluation scores. #837
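As a rough illustration of how the dataloader and evaluator described above fit together, the hedged sketch below assembles a GSgnnMultiTaskDataLoader and a GSgnnMultiTaskEvaluator for the two tasks in the example yaml. The import paths, constructor signatures, TaskInfo fields, task ids, and the per-task dataloaders/evaluators (nc_dataloader, ec_dataloader, nc_evaluator, ec_evaluator) are assumptions made for illustration and may differ from the merged code.

```
# Hedged sketch only: import paths, constructor arguments, and TaskInfo fields
# below are assumptions for illustration and may not match the merged code.
from graphstorm.config import TaskInfo                        # assumed location of config.TaskInfo
from graphstorm.dataloading import GSgnnMultiTaskDataLoader   # assumed import path
from graphstorm.eval import GSgnnMultiTaskEvaluator           # assumed import path

# One TaskInfo per task defined under multi_task_learning in the yaml.
# task_type, task_id, and task_config are hypothetical field names.
nc_task = TaskInfo(task_type="node_classification",
                   task_id="nc-movie-label",
                   task_config=nc_config)      # nc_config: parsed per-task config (hypothetical)
ec_task = TaskInfo(task_type="edge_classification",
                   task_id="ec-user-rating-movie",
                   task_config=ec_config)      # ec_config: parsed per-task config (hypothetical)

# One task-specific dataloader per task, built the same way as in single-task
# training; nc_dataloader and ec_dataloader stand in for those dataloaders.
multi_task_loader = GSgnnMultiTaskDataLoader(
    [nc_task, ec_task],                 # input 1: list of config.TaskInfo objects
    [nc_dataloader, ec_dataloader],     # input 2: list of dataloaders, one per task
)

# The multi-task evaluator is keyed by task id; each value is the evaluator
# that would be used for that task in single-task training.
evaluator = GSgnnMultiTaskEvaluator({
    "nc-movie-label": nc_evaluator,
    "ec-user-rating-movie": ec_evaluator,
})
```

In this sketch, the trainer would draw one list of mini-batches per iteration from multi_task_loader and pass per-task validation/test results to evaluator as {task_id: result} dicts, matching the description above.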
### Refactor graphstorm.model for multi-task learning
As the multi-task trainer invokes edge_mini_batch_predict, lp_mini_batch_predict, and node_mini_batch_predict when conducting evaluation or testing, refactor the code so that these functions work with different decoders. #843

### Add GSgnnMultiTaskSharedEncoderModel
GSgnnMultiTaskSharedEncoderModel allows multiple tasks to share the same GNN encoder while keeping a separate decoder for each task. #855

### Add Multi-task entrypoint
#849

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

---------

Co-authored-by: Xiang Song <[email protected]>