[Doc] Add return_proba argument explanation (#536)
*Issue #, if available:*

*Description of changes:*
This PR fixes the documentation of `return_proba`:
- Add the `return_proba` argument;
- Add a note on `eval_metric`.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

---------

Co-authored-by: Ubuntu <[email protected]>
zhjwy9343 and Ubuntu authored Oct 10, 2023
1 parent 8163a08 commit 5d4fd6a
Showing 1 changed file with 6 additions and 2 deletions.
8 changes: 6 additions & 2 deletions docs/source/configuration/configuration-run.rst
@@ -26,7 +26,7 @@ GraphStorm's `graphstorm.run.launch <https://github.com/awslabs/graphstorm/blob/
- NCCL_DEBUG=INFO
- **lm-encoder-only**: Indicate that the model is using a language model + decoder only. No GNN is involved, only graph structure.

-.. note:: Below configurations can be set either in a YAML configuraiton file or be added as arguments of launch command.
+.. note:: Below configurations can be set either in a YAML configuration file or be added as arguments of the launch command.

Environment Configurations
-------------------------------------
@@ -275,7 +275,7 @@ General Configurations
- Yaml: ``task_type: node_classification``
- Argument: ``--task-type node_classification``
- Default value: This parameter must be provided by user.
- **eval_metric**: Evaluation metric used during evaluation. The input can be a string specifying the evaluation metric to report or a list of strings specifying a list of evaluation metrics to report. The first evaluation metric is treated as the major metric and is used to choose the best trained model. The supported evaluation metrics of classification tasks include ``accuracy``, ``precision_recall``, ``roc_auc``, ``f1_score``, ``per_class_f1_score``. The supported evaluation metrics of regression tasks include ``rmse``, ``mse`` and ``mae``. The supported evaluation metrics of link prediction tasks include ``mrr``.
- Yaml: ``eval_metric:``
| ``- accuracy``
| ``- precision_recall``
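
A minimal YAML sketch of the list form above (the particular metrics are arbitrary examples; only the ordering rule comes from the documentation): the first entry is the major metric used to select the best trained model::

    eval_metric:
      - accuracy            # major metric: drives best-model selection
      - precision_recall    # reported as an additional metric
      - roc_auc             # reported as an additional metric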
@@ -307,6 +307,10 @@ Classification and Regression Task
- Yaml: ``imbalance_class_weights: 0.1,0.2,0.3``
- Argument: ``--imbalance-class-weights 0.1,0.2,0.3``
- Default value: ``None``
+- **return_proba**: For classification tasks, this configuration determines whether to return probability estimates for each class or only the most probable class. Set ``true`` to return probability estimates and ``false`` to return the most probable class.
+- Yaml: ``return_proba: true``
+- Argument: ``--return-proba true``
+- Default value: ``true``
- **save_prediction_path**: Path to save prediction results. This is used in node/edge classification/regression inference.
- Yaml: ``save_prediction_path: /data/infer-output/predictions/``
- Argument: ``--save-prediction-path /data/infer-output/predictions/``
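
To make the new ``return_proba`` option concrete, here is a minimal inference-config sketch assembled only from fields documented above (the combination and file layout are an illustration, not a complete working config)::

    # Node classification inference settings (sketch)
    task_type: node_classification
    return_proba: true                                     # write per-class probability estimates
    save_prediction_path: /data/infer-output/predictions/  # where predictions are saved

With ``return_proba: false``, the saved predictions would instead contain a single most probable class per target node rather than a probability vector.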