-2. Follow the instructions to build an environment
+2. Follow the instructions to build and activate an environment
-3. Activate the environment
+4. Install the latest `nx-cugraph` by following the [Installation Guide](installation.md)
-4. Install the latest `nx-cugraph` by following the [guide](installation.md)
-
-5. Follow the instructions written in the README here: `cugraph/benchmarks/nx-cugraph/pytest-based/`
+5. Follow the instructions written in the README [here](https://github.com/rapidsai/cugraph/blob/HEAD/benchmarks/nx-cugraph/pytest-based)
diff --git a/docs/cugraph/source/nx_cugraph/faqs.md b/docs/cugraph/source/nx_cugraph/faqs.md
deleted file mode 100644
index dee943d1908..00000000000
--- a/docs/cugraph/source/nx_cugraph/faqs.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# FAQ
-
- > **1. Is `nx-cugraph` able to run across multiple GPUs?**
-
-nx-cugraph currently does not support multi-GPU. Multi-GPU support may be added to a future release of nx-cugraph, but consider [cugraph](https://docs.rapids.ai/api/cugraph/stable) for multi-GPU accelerated graph analytics in Python today.
diff --git a/docs/cugraph/source/nx_cugraph/how-it-works.md b/docs/cugraph/source/nx_cugraph/how-it-works.md
index f9dc5af67ac..5696688d1b5 100644
--- a/docs/cugraph/source/nx_cugraph/how-it-works.md
+++ b/docs/cugraph/source/nx_cugraph/how-it-works.md
@@ -4,35 +4,29 @@ NetworkX has the ability to **dispatch function calls to separately-installed th
NetworkX backends let users experience improved performance and/or additional functionality without changing their NetworkX Python code. Examples include backends that provide algorithm acceleration using GPUs, parallel processing, graph database integration, and more.
-While NetworkX is a pure-Python implementation with minimal to no dependencies, backends may be written in other languages and require specialized hardware and/or OS support, additional software dependencies, or even separate services. Installation instructions vary based on the backend, and additional information can be found from the individual backend project pages listed in the NetworkX Backend Gallery.
-
+While NetworkX is a pure-Python implementation, backends may be written to use other libraries and even specialized hardware. `nx-cugraph` is a NetworkX backend that uses RAPIDS cuGraph and NVIDIA GPUs to significantly improve NetworkX performance.
+
![nxcg-execution-flow](../_static/nxcg-execution-diagram.jpg)
## Enabling nx-cugraph
-NetworkX will use nx-cugraph as the graph analytics backend if any of the
-following are used:
+Using `networkx>=3.4` is recommended for optimal zero-code-change performance, but `nx-cugraph` also works with `networkx>=3.0`.
-### `NETWORKX_BACKEND_PRIORITY` environment variable.
+NetworkX will use `nx-cugraph` as the backend if any of the following are used:
-The `NETWORKX_BACKEND_PRIORITY` environment variable can be used to have NetworkX automatically dispatch to specified backends. This variable can be set to a single backend name, or a comma-separated list of backends ordered using the priority which NetworkX should try. If a NetworkX function is called that nx-cugraph supports, NetworkX will redirect the function call to nx-cugraph automatically, or fall back to the next backend in the list if provided, or run using the default NetworkX implementation. See [NetworkX Backends and Configs](https://networkx.org/documentation/stable/reference/backends.html).
+### `NX_CUGRAPH_AUTOCONFIG` environment variable.
-For example, this setting will have NetworkX use nx-cugraph for any function called by the script supported by nx-cugraph, and the default NetworkX implementation for all others.
-```
-bash> NETWORKX_BACKEND_PRIORITY=cugraph python my_networkx_script.py
-```
+The `NX_CUGRAPH_AUTOCONFIG` environment variable can be used to configure NetworkX for full zero-code-change acceleration using `nx-cugraph`. When a NetworkX function supported by `nx-cugraph` is called, NetworkX automatically redirects the call to `nx-cugraph`; otherwise it falls back to another enabled backend, or to the default NetworkX implementation. See the [NetworkX documentation on backends](https://networkx.org/documentation/stable/reference/backends.html) for configuring NetworkX manually.
-This example will have NetworkX use nx-cugraph for functions it supports, then try other_backend if nx-cugraph does not support them, and finally the default NetworkX implementation if not supported by either backend:
```
-bash> NETWORKX_BACKEND_PRIORITY="cugraph,other_backend" python my_networkx_script.py
+bash> NX_CUGRAPH_AUTOCONFIG=True python my_networkx_script.py
```
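+
+The same opt-in can also be made from Python, provided the variable is set before `networkx` is imported. A minimal sketch (illustrative only; setting the variable in the shell, as above, is the primary usage, and any small graph works here):
+
+```python
+import os
+
+# Must be set before importing networkx so the backend can configure itself.
+os.environ["NX_CUGRAPH_AUTOCONFIG"] = "True"
+
+import networkx as nx
+
+G = nx.karate_club_graph()
+nx.betweenness_centrality(G)  # dispatched to nx-cugraph when supported
+```
+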
### `backend=` keyword argument
To explicitly specify a particular backend for an API, use the `backend=`
keyword argument. This argument takes precedence over the
-`NETWORKX_BACKEND_PRIORITY` environment variable. This requires anyone
+`NX_CUGRAPH_AUTOCONFIG` environment variable. This requires anyone
running code that uses the `backend=` keyword argument to have the specified
backend installed.
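+
+For example, a minimal sketch (any NetworkX graph works; `k` is shown only to keep large runs short):
+
+```python
+import networkx as nx
+
+G = nx.karate_club_graph()
+
+# Explicitly run this call on the nx-cugraph backend.
+nx.betweenness_centrality(G, k=10, backend="cugraph")
+```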
@@ -49,9 +43,9 @@ requires the user to write code for a specific backend, and therefore requires
the backend to be installed, but has the advantage of ensuring a particular
behavior without the potential for runtime conversions.
-To use type-based dispatching with nx-cugraph, the user must import the backend
+To use type-based dispatching with `nx-cugraph`, the user must import the backend
directly in their code to access the utilities provided to create a Graph
-instance specifically for the nx-cugraph backend.
+instance specifically for the `nx-cugraph` backend.
Example:
```python
@@ -59,7 +53,10 @@ import networkx as nx
import nx_cugraph as nxcg
G = nx.Graph()
-...
+
+# populate the graph
+# ...
+
nxcg_G = nxcg.from_networkx(G) # conversion happens once here
nx.betweenness_centrality(nxcg_G, k=1000) # nxcg Graph type causes cugraph backend
# to be used, no conversion necessary
@@ -84,31 +81,33 @@ G = nx.from_pandas_edgelist(df, source="src", target="dst")
Run the command:
```
user@machine:/# ipython bc_demo.ipy
+
+CPU times: user 7min 36s, sys: 5.22 s, total: 7min 41s
+Wall time: 7min 41s
```
You will observe a run time of approximately 7 minutes...more or less depending on your CPU.
Run the command again, this time specifying cugraph as the NetworkX backend.
+```bash
+user@machine:/# NX_CUGRAPH_AUTOCONFIG=True ipython bc_demo.ipy
+
+CPU times: user 4.14 s, sys: 1.13 s, total: 5.27 s
+Wall time: 5.32 s
```
-user@machine:/# NETWORKX_BACKEND_PRIORITY=cugraph ipython bc_demo.ipy
-```
-This run will be much faster, typically around 20 seconds depending on your GPU.
-```
-user@machine:/# NETWORKX_BACKEND_PRIORITY=cugraph ipython bc_demo.ipy
-```
-There is also an option to cache the graph conversion to GPU. This can dramatically improve performance when running multiple algorithms on the same graph. Caching is enabled by default for NetworkX versions 3.4 and later, but if using an older version, set "NETWORKX_CACHE_CONVERTED_GRAPHS=True"
-```
-NETWORKX_BACKEND_PRIORITY=cugraph NETWORKX_CACHE_CONVERTED_GRAPHS=True ipython bc_demo.ipy
-```
+This run will be much faster, typically around 5 seconds depending on your GPU.
-When running Python interactively, the cugraph backend can be specified as an argument in the algorithm call.
+
-For example:
-```
-nx.betweenness_centrality(cit_patents_graph, k=k, backend="cugraph")
-```
+*Note: the examples above were run using the following specs:*
+
+- *NetworkX 3.4*
+- *nx-cugraph 24.10*
+- *CPU: Intel(R) Xeon(R) Gold 6128 CPU @ 3.40GHz, 45GB RAM*
+- *GPU: NVIDIA Quadro RTX 8000, 48GB RAM*
-The latest list of algorithms supported by nx-cugraph can be found [here](https://github.com/rapidsai/cugraph/blob/HEAD/python/nx-cugraph/README.md#algorithms) or in the next section.
+
---
+
+The latest list of algorithms supported by `nx-cugraph` can be found on [GitHub](https://github.com/rapidsai/cugraph/blob/HEAD/python/nx-cugraph/README.md#algorithms), or in the [Supported Algorithms section](supported-algorithms.md).
diff --git a/docs/cugraph/source/nx_cugraph/index.rst b/docs/cugraph/source/nx_cugraph/index.rst
index 110300c1836..730958a5b73 100644
--- a/docs/cugraph/source/nx_cugraph/index.rst
+++ b/docs/cugraph/source/nx_cugraph/index.rst
@@ -1,9 +1,13 @@
nx-cugraph
-----------
-nx-cugraph is a `NetworkX backend `_ that provides **GPU acceleration** to many popular NetworkX algorithms.
+``nx-cugraph`` is a NetworkX backend that provides **GPU acceleration** to many popular NetworkX algorithms.
-By simply `installing and enabling nx-cugraph `_, users can see significant speedup on workflows where performance is hindered by the default NetworkX implementation. With ``nx-cugraph``, users can have GPU-based, large-scale performance **without** changing their familiar and easy-to-use NetworkX code.
+By simply `installing and enabling nx-cugraph `_, users can see significant speedup on workflows where performance is hindered by the default NetworkX implementation.
+
+Users can have GPU-based, large-scale performance **without** changing their familiar and easy-to-use NetworkX code.
+
+.. centered:: Timed result from running the following code snippet (called ``demo.ipy``, showing NetworkX with vs. without ``nx-cugraph``)
.. code-block:: python
@@ -16,6 +20,21 @@ By simply `installing and enabling nx-cugraph `_ to get up-and-running with ``nx-c
:caption: Contents:
how-it-works
- supported-algorithms
installation
+ supported-algorithms
benchmarks
- faqs
diff --git a/docs/cugraph/source/nx_cugraph/installation.md b/docs/cugraph/source/nx_cugraph/installation.md
index 8d221f16fec..a816801d001 100644
--- a/docs/cugraph/source/nx_cugraph/installation.md
+++ b/docs/cugraph/source/nx_cugraph/installation.md
@@ -1,4 +1,4 @@
-# Getting Started
+# Installing nx-cugraph
This guide describes how to install ``nx-cugraph`` and use it in your workflows.
@@ -10,11 +10,11 @@ This guide describes how to install ``nx-cugraph`` and use it in your workflows.
- **Volta architecture or later NVIDIA GPU, with [compute capability](https://developer.nvidia.com/cuda-gpus) 7.0+**
- **[CUDA](https://docs.nvidia.com/cuda/index.html) 11.2, 11.4, 11.5, 11.8, 12.0, 12.2, or 12.5**
- **Python >= 3.10**
- - **[NetworkX](https://networkx.org/documentation/stable/install.html#) >= 3.0 (version 3.2 or higher recommended)**
+ - **[NetworkX](https://networkx.org/documentation/stable/install.html#) >= 3.0 (version 3.4 or higher recommended)**
More details about system requirements can be found in the [RAPIDS System Requirements Documentation](https://docs.rapids.ai/install#system-req).
-## Installing nx-cugraph
+## Installing Packages
Read the [RAPIDS Quick Start Guide](https://docs.rapids.ai/install) to learn more about installing all RAPIDS libraries.
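+
+Once `nx-cugraph` is installed (by any of the methods described in this guide), a quick sanity check can confirm that both NetworkX and the backend are importable. A minimal sketch (assumes `nx_cugraph` exposes a standard `__version__` attribute, as RAPIDS packages typically do):
+
+```python
+import networkx as nx
+import nx_cugraph
+
+print(nx.__version__)          # should be >= 3.0, ideally >= 3.4
+print(nx_cugraph.__version__)  # confirms the backend package is importable
+```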
diff --git a/docs/cugraph/source/nx_cugraph/supported-algorithms.rst b/docs/cugraph/source/nx_cugraph/supported-algorithms.rst
index b21ef7bb668..8f57c02b240 100644
--- a/docs/cugraph/source/nx_cugraph/supported-algorithms.rst
+++ b/docs/cugraph/source/nx_cugraph/supported-algorithms.rst
@@ -2,7 +2,7 @@ Supported Algorithms
=====================
The nx-cugraph backend to NetworkX connects
-`pylibcugraph <../../readme_pages/pylibcugraph.md>`_ (cuGraph's low-level Python
+`pylibcugraph `_ (cuGraph's low-level Python
interface to its CUDA-based graph analytics library) and
`CuPy `_ (a GPU-accelerated array library) to NetworkX's
familiar and easy-to-use API.
@@ -209,6 +209,40 @@ Algorithms
| is_tree |
+---------------------+
+
+Utilities
+---------
+
++-------------------------+
+| **Classes**             |
++=========================+
+| is_negatively_weighted  |
++-------------------------+
+
++----------------------+
+| **Convert**          |
++======================+
+| from_dict_of_lists   |
++----------------------+
+| to_dict_of_lists     |
++----------------------+
+
++--------------------------+
+| **Convert Matrix**       |
++==========================+
+| from_pandas_edgelist     |
++--------------------------+
+| from_scipy_sparse_array  |
++--------------------------+
+
++-----------------------------------+
+| **Relabel**                       |
++===================================+
+| convert_node_labels_to_integers   |
++-----------------------------------+
+| relabel_nodes                     |
++-----------------------------------+
+
Generators
------------
@@ -316,39 +350,6 @@ Generators
| les_miserables_graph |
+-------------------------------+
-Other
--------
-
-+-------------------------+
-| **Classes** |
-+=========================+
-| is_negatively_weighted |
-+-------------------------+
-
-+----------------------+
-| **Convert** |
-+======================+
-| from_dict_of_lists |
-+----------------------+
-| to_dict_of_lists |
-+----------------------+
-
-+--------------------------+
-| **Convert Matrix** |
-+==========================+
-| from_pandas_edgelist |
-+--------------------------+
-| from_scipy_sparse_array |
-+--------------------------+
-
-+-----------------------------------+
-| **Relabel** |
-+===================================+
-| convert_node_labels_to_integers |
-+-----------------------------------+
-| relabel_nodes |
-+-----------------------------------+
-
To request nx-cugraph backend support for a NetworkX API that is not listed
above, visit the `cuGraph GitHub repo `_.
diff --git a/docs/cugraph/source/top_toc.rst b/docs/cugraph/source/top_toc.rst
deleted file mode 100644
index 8e31e70ca78..00000000000
--- a/docs/cugraph/source/top_toc.rst
+++ /dev/null
@@ -1,13 +0,0 @@
-.. toctree::
- :maxdepth: 2
- :caption: cuGraph documentation Contents:
- :name: top_toc
-
- basics/index
- nx_cugraph/index
- installation/index
- tutorials/index
- graph_support/index
- wholegraph/index
- references/index
- api_docs/index
diff --git a/docs/cugraph/source/wholegraph/installation/container.md b/docs/cugraph/source/wholegraph/installation/container.md
index 3a2c627c56a..6aac53cf88f 100644
--- a/docs/cugraph/source/wholegraph/installation/container.md
+++ b/docs/cugraph/source/wholegraph/installation/container.md
@@ -24,6 +24,7 @@ RUN pip3 install Cython setuputils3 scikit-build nanobind pytest-forked pytest
To run GNN applications, you may also need cuGraphOps, DGL and/or PyG libraries to run the GNN layers.
You may refer to [DGL](https://www.dgl.ai/pages/start.html) or [PyG](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html)
For example, to install DGL, you may need to add:
+
```dockerfile
-RUN pip3 install dgl -f https://data.dgl.ai/wheels/cu118/repo.html
+RUN pip3 install dgl -f https://data.dgl.ai/wheels/torch-2.3/cu118/repo.html
```
diff --git a/python/cugraph-dgl/README.md b/python/cugraph-dgl/README.md
index ac4cb2f6253..013d4fe5e2e 100644
--- a/python/cugraph-dgl/README.md
+++ b/python/cugraph-dgl/README.md
@@ -8,9 +8,12 @@
Install and update cugraph-dgl and the required dependencies using the command:
-```
-conda install mamba -n base -c conda-forge
-mamba install cugraph-dgl -c rapidsai-nightly -c rapidsai -c pytorch -c conda-forge -c nvidia -c dglteam
+```shell
+# CUDA 11
+conda install -c rapidsai -c pytorch -c conda-forge -c nvidia -c dglteam/label/th23_cu118 cugraph-dgl
+
+# CUDA 12
+conda install -c rapidsai -c pytorch -c conda-forge -c nvidia -c dglteam/label/th23_cu121 cugraph-dgl
```
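+
+After installation, a minimal import check can confirm the environment is usable. A sketch (assumes a CUDA-capable GPU is visible to PyTorch; `cugraph_dgl` is the package's import name):
+
+```python
+import torch
+import dgl
+import cugraph_dgl
+
+print(torch.cuda.is_available())  # expect True on a correctly configured machine
+```
+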
## Build from Source
diff --git a/python/cugraph-dgl/conda/cugraph_dgl_dev_cuda-118.yaml b/python/cugraph-dgl/conda/cugraph_dgl_dev_cuda-118.yaml
index 42cbcab5008..174012b8f8c 100644
--- a/python/cugraph-dgl/conda/cugraph_dgl_dev_cuda-118.yaml
+++ b/python/cugraph-dgl/conda/cugraph_dgl_dev_cuda-118.yaml
@@ -4,13 +4,12 @@ channels:
- rapidsai
- rapidsai-nightly
- dask/label/dev
-- pyg
-- dglteam/label/cu118
+- dglteam/label/th23_cu118
- conda-forge
- nvidia
dependencies:
- cugraph==24.12.*,>=0.0.0a0
-- dgl>=1.1.0.cu*
+- dgl>=2.4.0.cu*
- pandas
- pre-commit
- pylibcugraphops==24.12.*,>=0.0.0a0
diff --git a/python/cugraph-pyg/conda/cugraph_pyg_dev_cuda-118.yaml b/python/cugraph-pyg/conda/cugraph_pyg_dev_cuda-118.yaml
index 39b1ab21edb..4778ff0eaf6 100644
--- a/python/cugraph-pyg/conda/cugraph_pyg_dev_cuda-118.yaml
+++ b/python/cugraph-pyg/conda/cugraph_pyg_dev_cuda-118.yaml
@@ -4,15 +4,13 @@ channels:
- rapidsai
- rapidsai-nightly
- dask/label/dev
-- pyg
-- dglteam/label/cu118
+- dglteam/label/th23_cu118
- conda-forge
- nvidia
dependencies:
- cugraph==24.12.*,>=0.0.0a0
- pandas
- pre-commit
-- pyg>=2.5,<2.6
- pylibcugraphops==24.12.*,>=0.0.0a0
- pytest
- pytest-benchmark
@@ -20,6 +18,7 @@ dependencies:
- pytest-xdist
- pytorch-cuda==11.8
- pytorch>=2.3,<2.4.0a0
+- pytorch_geometric>=2.5,<2.6
- scipy
- tensordict>=0.1.2
name: cugraph_pyg_dev_cuda-118
diff --git a/python/nx-cugraph/_nx_cugraph/__init__.py b/python/nx-cugraph/_nx_cugraph/__init__.py
index a5e45979fe2..fc0bea47180 100644
--- a/python/nx-cugraph/_nx_cugraph/__init__.py
+++ b/python/nx-cugraph/_nx_cugraph/__init__.py
@@ -301,6 +301,45 @@ def get_info():
.lower()
== "true",
}
+
+    # Enable zero-code change usage with a simple environment variable
+    # by setting or updating other NETWORKX environment variables.
+    if os.environ.get("NX_CUGRAPH_AUTOCONFIG", "").strip().lower() == "true":
+        from itertools import chain
+
+        def update_env_var(varname):
+            """Add "cugraph" to a list of backend names environment variable."""
+            if varname not in os.environ:
+                os.environ[varname] = "cugraph"
+                return
+            string = os.environ[varname]
+            vals = [
+                stripped for x in string.strip().split(",") if (stripped := x.strip())
+            ]
+            if "cugraph" not in vals:
+                # Should we append or prepend? Let's be first!
+                os.environ[varname] = ",".join(chain(["cugraph"], vals))
+
+        # Automatically convert NetworkX Graphs to nx-cugraph for algorithms
+        if (varname := "NETWORKX_BACKEND_PRIORITY_ALGOS") in os.environ:
+            # "*_ALGOS" is given priority in NetworkX >=3.4
+            update_env_var(varname)
+            # But update this too to "just work" if users mix env vars and nx versions
+            os.environ["NETWORKX_BACKEND_PRIORITY"] = os.environ[varname]
+        else:
+            update_env_var("NETWORKX_BACKEND_PRIORITY")
+        # And for older NetworkX versions
+        update_env_var("NETWORKX_AUTOMATIC_BACKENDS")  # For NetworkX 3.2
+        update_env_var("NETWORKX_GRAPH_CONVERT")  # For NetworkX 3.0 and 3.1
+        # Automatically create nx-cugraph Graph from graph generators
+        update_env_var("NETWORKX_BACKEND_PRIORITY_GENERATORS")
+        # Run default NetworkX implementation (in >=3.4) if not implemented by nx-cugraph
+        if (varname := "NETWORKX_FALLBACK_TO_NX") not in os.environ:
+            os.environ[varname] = "true"
+        # Cache graph conversions (default is False in NetworkX 3.2)
+        if (varname := "NETWORKX_CACHE_CONVERTED_GRAPHS") not in os.environ:
+            os.environ[varname] = "true"
+
return d
diff --git a/readme_pages/pylibcugraph.md b/readme_pages/pylibcugraph.md
index 3bb552141e9..fcb5a624931 100644
--- a/readme_pages/pylibcugraph.md
+++ b/readme_pages/pylibcugraph.md
@@ -4,7 +4,7 @@
-CuGraph pylibcugraph
+cuGraph pylibcugraph
Part of [RAPIDS](https://rapids.ai) cuGraph, pylibcugraph is a wrapper around the cuGraph C API. It is aimed more at integrators instead of algorithm writers or end users like Data Scientists. Most of the cuGraph python API uses pylibcugraph to efficiently run algorithms by removing much of the overhead of the python-centric implementation, relying more on cython instead. Pylibcugraph is intended for applications that require a tighter integration with cuGraph at the Python layer with fewer dependencies.