Update graphstorm.initialize. (#781)
*Issue #, if available:*
#754 

*Description of changes:*
Change the default arguments of `graphstorm.initialize()`: `ip_config` now defaults to `None` and `backend` defaults to `'gloo'`, so standalone mode needs no arguments.
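A minimal sketch of what this change means for callers. The stub below mirrors the new signature from the diff but is a hypothetical stand-in, not the actual GraphStorm implementation (`initialize` here just reports the resolved settings instead of setting up a distributed context):

```python
# Hypothetical stand-in mirroring the new default arguments; the real
# graphstorm.initialize() sets up the distributed training/inference context.
def initialize(ip_config=None, backend='gloo', use_wholegraph=False):
    # With ip_config=None the call behaves as standalone mode;
    # passing an ip_config file selects distributed mode.
    mode = 'standalone' if ip_config is None else 'distributed'
    return {'mode': mode, 'ip_config': ip_config,
            'backend': backend, 'use_wholegraph': use_wholegraph}

# Standalone mode: no arguments needed after this change.
print(initialize())
# Distributed mode: pass an ip_config file (and optionally a backend).
print(initialize(ip_config="/tmp/ip_list.txt", backend="gloo"))
```

The design point of the PR is that the common single-machine case now works with a bare `gs.initialize()` call.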


By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

Co-authored-by: Xiang Song <[email protected]>
classicsong and Xiang Song authored Mar 22, 2024
1 parent 317584e commit 37135af
Showing 1 changed file with 17 additions and 2 deletions.
19 changes: 17 additions & 2 deletions python/graphstorm/gsf.py
@@ -65,17 +65,32 @@
                                     LinkPredictWeightedDistMultDecoder)
 from .tracker import get_task_tracker_class
 
-def initialize(ip_config, backend, use_wholegraph=False):
+def initialize(ip_config=None, backend='gloo', use_wholegraph=False):
     """ Initialize distributed training and inference context.
+    .. code::
+
+        # Standalone mode
+        import graphstorm as gs
+        gs.initialize()
+
+    .. code::
+
+        # distributed mode
+        import graphstorm as gs
+        gs.initialize(ip_config="/tmp/ip_list.txt", backend="gloo")
+
     Parameters
     ----------
     ip_config: str
-        File path of ip_config file, e.g., `/tmp/ip_list.txt`.
+        File path of ip_config file, e.g., `/tmp/ip_list.txt`
+        Default: None
     backend: str
         Torch distributed backend, e.g., ``gloo`` or ``nccl``.
+        Default: 'gloo'
     use_wholegraph: bool
         Whether to use wholegraph for feature transfer.
+        Default: False
     """
     # We need to use socket for communication in DGL 0.8. The tensorpipe backend has a bug.
     # This problem will be fixed in the future.
