make cugraph-ops optional for cugraph-gnn packages #99

Draft: wants to merge 2 commits into branch-25.02

4 changes: 1 addition & 3 deletions conda/environments/all_cuda-118_arch-x86_64.yaml
@@ -33,15 +33,13 @@ dependencies:
- pandas
- pre-commit
- pydantic
- pylibcugraphops==25.2.*,>=0.0.0a0
- pylibraft==25.2.*,>=0.0.0a0
- pytest
- pytest-benchmark
- pytest-cov
- pytest-forked
- pytest-xdist
- pytorch-cuda=11.8
- pytorch>=2.3
- pytorch-gpu>=2.3=*cuda118*
- pytorch_geometric>=2.5,<2.6
- raft-dask==25.2.*,>=0.0.0a0
- rapids-build-backend>=0.3.0,<0.4.0.dev0

4 changes: 1 addition & 3 deletions conda/environments/all_cuda-121_arch-x86_64.yaml
@@ -39,15 +39,13 @@ dependencies:
- pandas
- pre-commit
- pydantic
- pylibcugraphops==25.2.*,>=0.0.0a0
- pylibraft==25.2.*,>=0.0.0a0
- pytest
- pytest-benchmark
- pytest-cov
- pytest-forked
- pytest-xdist
- pytorch-cuda=12.1
- pytorch>=2.3
- pytorch-gpu>=2.3=*cuda120*
- pytorch_geometric>=2.5,<2.6
- raft-dask==25.2.*,>=0.0.0a0
- rapids-build-backend>=0.3.0,<0.4.0.dev0

4 changes: 1 addition & 3 deletions conda/environments/all_cuda-124_arch-x86_64.yaml
@@ -39,15 +39,13 @@ dependencies:
- pandas
- pre-commit
- pydantic
- pylibcugraphops==25.2.*,>=0.0.0a0
- pylibraft==25.2.*,>=0.0.0a0
- pytest
- pytest-benchmark
- pytest-cov
- pytest-forked
- pytest-xdist
- pytorch-cuda=12.4
- pytorch>=2.3
- pytorch-gpu>=2.3=*cuda120*
- pytorch_geometric>=2.5,<2.6
- raft-dask==25.2.*,>=0.0.0a0
- rapids-build-backend>=0.3.0,<0.4.0.dev0
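
The pytorch-gpu>=2.3=*cuda118* / *cuda120* entries above are conda match specs of the form name>=version=build, where the trailing build-string glob restricts the solver to conda-forge builds compiled against that CUDA toolkit. A minimal sketch for checking which build actually lands after creating one of these environments (the printed values are examples only):

import torch

# Confirm the resolved PyTorch build and the CUDA toolkit it was compiled for.
print(torch.__version__)          # expected >= 2.3
print(torch.version.cuda)         # e.g. "11.8" for a *cuda118* build
print(torch.cuda.is_available())  # True only with a working GPU driver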

48 changes: 7 additions & 41 deletions dependencies.yaml
@@ -20,7 +20,6 @@ files:
- depends_on_dask_cudf
- depends_on_pylibraft
- depends_on_raft_dask
- depends_on_pylibcugraphops
- depends_on_cupy
- depends_on_pytorch
- depends_on_dgl
@@ -45,7 +44,6 @@ files:
- cuda_version
- docs
- py_version
- depends_on_pylibcugraphops
test_cpp:
output: none
includes:
@@ -116,7 +114,6 @@ files:
table: project
includes:
- depends_on_cugraph
- depends_on_pylibcugraphops
- python_run_cugraph_dgl
py_test_cugraph_dgl:
output: pyproject
@@ -142,7 +139,6 @@ files:
table: project
includes:
- depends_on_cugraph
- depends_on_pylibcugraphops
- depends_on_pyg
- python_run_cugraph_pyg
py_test_cugraph_pyg:
@@ -166,7 +162,6 @@ files:
includes:
- checks
- depends_on_cugraph
- depends_on_pylibcugraphops
- depends_on_dgl
- depends_on_pytorch
- cugraph_dgl_dev
@@ -180,7 +175,6 @@ files:
- checks
- depends_on_cugraph
- depends_on_pyg
- depends_on_pylibcugraphops
- depends_on_pytorch
- cugraph_pyg_dev
- test_python_common
@@ -406,7 +400,6 @@ dependencies:
common:
- output_types: [conda]
packages:
- pytorch>=2.3
- torchdata
- pydantic
specific:
@@ -431,18 +424,16 @@ dependencies:
- *tensordict
- {matrix: null, packages: [*pytorch_pip, *tensordict]}
- output_types: [conda]
# PyTorch will stop publishing conda packages after 2.5.
# Consider switching to conda-forge::pytorch-gpu.
# Note that the CUDA version may differ from the official PyTorch wheels.
matrices:
- matrix: {cuda: "12.1"}
packages:
- pytorch-cuda=12.1
- matrix: {cuda: "12.4"}
- matrix: {cuda: "12.*"}
packages:
- pytorch-cuda=12.4
- matrix: {cuda: "11.8"}
- pytorch-gpu>=2.3=*cuda120*
@bdice (Contributor) commented on Dec 19, 2024:
This is already using conda-forge, I think? pytorch-gpu is a conda-forge package, not a pytorch channel package. Also, the latest conda-forge builds are built with CUDA 12.6. CUDA 12.0 is no longer used to build.

A contributor replied:
For compatibility reasons we may want to stick to older builds of pytorch-gpu (built with cuda120) for now. We will hopefully be able to relax this in the future.

- matrix: {cuda: "11.*"}
packages:
- pytorch-cuda=11.8
# pytorch only supports certain CUDA versions... skip
# adding pytorch-cuda pinning if any other CUDA version is requested
- pytorch-gpu>=2.3=*cuda118*
- matrix:
packages:

@@ -667,31 +658,6 @@ dependencies:
- pylibcugraph-cu11==25.2.*,>=0.0.0a0
- {matrix: null, packages: [*pylibcugraph_unsuffixed]}

depends_on_pylibcugraphops:
common:
- output_types: conda
packages:
- &pylibcugraphops_unsuffixed pylibcugraphops==25.2.*,>=0.0.0a0
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
- --extra-index-url=https://pypi.nvidia.com
- --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix:
cuda: "12.*"
cuda_suffixed: "true"
packages:
- pylibcugraphops-cu12==25.2.*,>=0.0.0a0
- matrix:
cuda: "11.*"
cuda_suffixed: "true"
packages:
- pylibcugraphops-cu11==25.2.*,>=0.0.0a0
- {matrix: null, packages: [*pylibcugraphops_unsuffixed]}

depends_on_cupy:
common:
- output_types: conda
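
With the depends_on_pylibcugraphops section removed, nothing in these packages pulls cugraph-ops in automatically anymore; users who still want the ops-backed kernels install pylibcugraphops themselves (the conda package or the -cu11/-cu12 wheels listed above). A minimal sketch of a runtime probe, with the message text purely illustrative:

import importlib.util

# Detect the now-optional cugraph-ops Python bindings without importing them.
HAVE_CUGRAPH_OPS = importlib.util.find_spec("pylibcugraphops") is not None

if not HAVE_CUGRAPH_OPS:
    print("pylibcugraphops not found; ops-backed nn layers will be unavailable")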

4 changes: 1 addition & 3 deletions python/cugraph-dgl/conda/cugraph_dgl_dev_cuda-118.yaml
@@ -12,13 +12,11 @@ dependencies:
- dglteam/label/th23_cu118::dgl>=2.4.0.th23.cu*
- pre-commit
- pydantic
- pylibcugraphops==25.2.*,>=0.0.0a0
- pytest
- pytest-benchmark
- pytest-cov
- pytest-xdist
- pytorch-cuda=11.8
- pytorch>=2.3
- pytorch-gpu>=2.3=*cuda118*
- tensordict>=0.1.2
- torchdata
name: cugraph_dgl_dev_cuda-118

1 change: 0 additions & 1 deletion python/cugraph-dgl/pyproject.toml
@@ -27,7 +27,6 @@ dependencies = [
"cugraph==25.2.*,>=0.0.0a0",
"numba>=0.57",
"numpy>=1.23,<3.0a0",
"pylibcugraphops==25.2.*,>=0.0.0a0",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.

[project.optional-dependencies]

4 changes: 1 addition & 3 deletions python/cugraph-pyg/conda/cugraph_pyg_dev_cuda-118.yaml
@@ -10,13 +10,11 @@ dependencies:
- cugraph==25.2.*,>=0.0.0a0
- pre-commit
- pydantic
- pylibcugraphops==25.2.*,>=0.0.0a0
- pytest
- pytest-benchmark
- pytest-cov
- pytest-xdist
- pytorch-cuda=11.8
- pytorch>=2.3
- pytorch-gpu>=2.3=*cuda118*
- pytorch_geometric>=2.5,<2.6
- tensordict>=0.1.2
- torchdata

40 changes: 26 additions & 14 deletions python/cugraph-pyg/cugraph_pyg/nn/conv/__init__.py
@@ -11,18 +11,30 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from .gat_conv import GATConv
from .gatv2_conv import GATv2Conv
from .hetero_gat_conv import HeteroGATConv
from .rgcn_conv import RGCNConv
from .sage_conv import SAGEConv
from .transformer_conv import TransformerConv
import warnings

__all__ = [
"GATConv",
"GATv2Conv",
"HeteroGATConv",
"RGCNConv",
"SAGEConv",
"TransformerConv",
]
HAVE_CUGRAPH_OPS = False
try:
    import pylibcugraphops
    HAVE_CUGRAPH_OPS = True
except ImportError:
    pass
except Exception as e:
    warnings.warn(f"Unexpected error while importing pylibcugraphops: {e}")

if HAVE_CUGRAPH_OPS:
    from .gat_conv import GATConv
    from .gatv2_conv import GATv2Conv
    from .hetero_gat_conv import HeteroGATConv
    from .rgcn_conv import RGCNConv
    from .sage_conv import SAGEConv
    from .transformer_conv import TransformerConv

__all__ = [
"GATConv",
"GATv2Conv",
"HeteroGATConv",
"RGCNConv",
"SAGEConv",
"TransformerConv",
]
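
Because the conv imports above are now guarded, downstream code can no longer assume these classes exist. A hedged sketch of one way a caller might branch on the new HAVE_CUGRAPH_OPS flag; the torch_geometric fallback is an assumption of this sketch, not something the PR prescribes:

from cugraph_pyg.nn.conv import HAVE_CUGRAPH_OPS

if HAVE_CUGRAPH_OPS:
    # Accelerated layer backed by cugraph-ops kernels.
    from cugraph_pyg.nn.conv import SAGEConv
else:
    # Plain PyG layer as a stand-in when cugraph-ops is absent.
    from torch_geometric.nn import SAGEConv

layer = SAGEConv(256, 64, aggr="mean").cuda()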

5 changes: 5 additions & 0 deletions python/cugraph-pyg/cugraph_pyg/tests/conftest.py
@@ -43,6 +43,11 @@
gpubenchmark = pytest_benchmark.plugin.benchmark


def pytest_ignore_collect(collection_path, config):
    """Return True to prevent considering this path for collection."""
    if "nn" in collection_path.name:
        return True
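
pytest calls this hook for every candidate file and directory, and a True return prunes that path (and, for a directory, everything beneath it) from collection. A tiny illustration of what the name check matches, using made-up paths:

from pathlib import Path

# The tests/nn directory itself matches, so its modules are never visited;
# unrelated test modules are still collected.
for p in [Path("cugraph_pyg/tests/nn"), Path("cugraph_pyg/tests/test_loader.py")]:
    print(p.name, "nn" in p.name)  # -> nn True, test_loader.py False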

@pytest.fixture(scope="module")
def dask_client():
dask_scheduler_file = os.environ.get("SCHEDULER_FILE")

@@ -23,7 +23,6 @@
from cugraph_pyg.loader import DaskNeighborLoader
from cugraph_pyg.loader import BulkSampleLoader
from cugraph_pyg.data import DaskGraphStore
from cugraph_pyg.nn import SAGEConv as CuGraphSAGEConv

from cugraph.gnn import FeatureStore
from cugraph.utilities.utils import import_optional, MissingModule
@@ -403,15 +402,15 @@ def test_cugraph_loader_e2e_csc(framework: str):
)

if framework == "pyg":
convs = [
torch_geometric.nn.SAGEConv(256, 64, aggr="mean").cuda(),
torch_geometric.nn.SAGEConv(64, 1, aggr="mean").cuda(),
]
SAGEConv = torch_geometric.nn.SAGEConv
else:
convs = [
CuGraphSAGEConv(256, 64, aggr="mean").cuda(),
CuGraphSAGEConv(64, 1, aggr="mean").cuda(),
]
pytest.skip("Skipping tests that require cugraph-ops")
# SAGEConv = cugraph_pyg.nn.SAGEConv

convs = [
SAGEConv(256, 64, aggr="mean").cuda(),
SAGEConv(64, 1, aggr="mean").cuda(),
]

trim = trim_to_layer.TrimToLayer()
relu = torch.nn.functional.relu

1 change: 0 additions & 1 deletion python/cugraph-pyg/pyproject.toml
@@ -34,7 +34,6 @@ dependencies = [
"numba>=0.57",
"numpy>=1.23,<3.0a0",
"pandas",
"pylibcugraphops==25.2.*,>=0.0.0a0",
"torch-geometric>=2.5,<2.6",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
