Adding FAISS cpu to raft-ann-bench #1814

Merged 44 commits on Oct 10, 2023
Commits (44):
bded674
Adding FAISS cpu to raft-ann-bench
cjnolet Sep 11, 2023
f0e3c8f
Adding faiss cpu indexes and build
cjnolet Sep 12, 2023
f66fd21
Docs updates
cjnolet Sep 12, 2023
20b793a
Merge branch 'branch-23.10' into enh-ann-bench-faiss-cpu
cjnolet Sep 12, 2023
ad255fd
Merge branch 'branch-23.10' into enh-ann-bench-faiss-cpu
cjnolet Sep 12, 2023
6d7f390
Resetting build all gpu arch to 0
cjnolet Sep 12, 2023
491d090
Merge branch 'enh-ann-bench-faiss-cpu' of github.com:cjnolet/raft int…
cjnolet Sep 12, 2023
c9569a5
Doc updates
cjnolet Sep 12, 2023
28bee2b
More updates
cjnolet Sep 12, 2023
1e7ba4f
Cleaning up includes
cjnolet Sep 12, 2023
563b386
Explicitly adding spdlog and fmt
cjnolet Sep 13, 2023
9585f20
Using selectors for faiss
cjnolet Sep 13, 2023
87e3be0
Adding ability to link against faiss avx lib (only if arch supports it)
cjnolet Sep 13, 2023
74e6a5d
Removing some legacy get_faiss cmake bits
cjnolet Sep 13, 2023
fcd029f
Updating faiss cpu to override search params
cjnolet Sep 13, 2023
a56227e
Trying again.
cjnolet Sep 14, 2023
3fcd1e9
Making libfaiss installs either or
cjnolet Sep 14, 2023
1ec75ba
Using consistent naming for faiss algos
cjnolet Sep 25, 2023
a5585fa
Merge remote-tracking branch 'origin/branch-23.10' into enh-ann-bench…
cjnolet Sep 25, 2023
7d21375
Updating faiss version
cjnolet Sep 25, 2023
001c224
Printing raft_faiss_targets
cjnolet Sep 25, 2023
c430bb8
Using faiss from pytorch
cjnolet Sep 25, 2023
30428fd
Building faiss statically each time. Will slow down CI but alleviate …
cjnolet Sep 28, 2023
b5606c1
Merge branch 'branch-23.10' into enh-ann-bench-faiss-cpu
cjnolet Sep 28, 2023
db2d210
Updates
cjnolet Sep 28, 2023
cb2eef8
Reverting
cjnolet Sep 28, 2023
375c38e
Using https for faiss github repo
cjnolet Sep 28, 2023
c4fb53c
Trying again
cjnolet Oct 2, 2023
f38031a
Merge branch 'branch-23.10' into enh-ann-bench-faiss-cpu
cjnolet Oct 2, 2023
8bb273c
Using corey's fork for now
cjnolet Oct 2, 2023
d539316
More updates
cjnolet Oct 2, 2023
fce179b
Checking cudatoolkit library dir
cjnolet Oct 3, 2023
f54a757
Terminating string
cjnolet Oct 3, 2023
385b4f4
Teach faiss about conda [hacky]
robertmaynard Oct 4, 2023
95c12db
Adding thread pool to overlap faiss queries
cjnolet Oct 4, 2023
7b67e89
Merge branch 'branch-23.12' into enh-ann-bench-faiss-cpu
cjnolet Oct 5, 2023
419d994
Merge branch 'branch-23.12' into enh-ann-bench-faiss-cpu
cjnolet Oct 5, 2023
1e7b5c8
Seeing if this fixes the devcontainers
cjnolet Oct 6, 2023
36d4dd3
Merge branch 'branch-23.12' into enh-ann-bench-faiss-cpu
cjnolet Oct 6, 2023
667b95c
Fixing dependencies.yml
cjnolet Oct 6, 2023
daffaf4
Adding openblas to nn_bench deps
cjnolet Oct 7, 2023
8638410
FIX add cpu targets to CUDA 12 faiss exception
dantegd Oct 10, 2023
bdc8d9a
FIX ivf_flat and pq cmake variable underscores
dantegd Oct 10, 2023
9d56f32
Fix conflicts with branch-23.12
dantegd Oct 10, 2023
3 changes: 1 addition & 2 deletions conda/environments/bench_ann_cuda-118_arch-x86_64.yaml
@@ -17,7 +17,6 @@ dependencies:
- cudatoolkit
- cxx-compiler
- cython>=3.0.0
- faiss-proc=*=cuda
- gcc_linux-64=11.*
- glog>=0.6.0
- h5py>=3.8.0
@@ -30,12 +29,12 @@ dependencies:
- libcusolver=11.4.1.48
- libcusparse-dev=11.7.5.86
- libcusparse=11.7.5.86
- libfaiss>=1.7.1
- matplotlib
- nccl>=2.9.9
- ninja
- nlohmann_json>=3.11.2
- nvcc_linux-64=11.8
- openblas
- pandas
- pyyaml
- rmm==23.12.*
2 changes: 1 addition & 1 deletion conda/recipes/libraft/build_libraft.sh
@@ -1,4 +1,4 @@
#!/usr/bin/env bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

./build.sh libraft -v --allgpuarch --compile-lib --build-metrics=compile_lib --incl-cache-stats --no-nvtx
./build.sh libraft --allgpuarch --compile-lib --build-metrics=compile_lib --incl-cache-stats --no-nvtx
2 changes: 1 addition & 1 deletion conda/recipes/libraft/build_libraft_headers.sh
@@ -1,4 +1,4 @@
#!/usr/bin/env bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

./build.sh libraft -v --allgpuarch --no-nvtx
./build.sh libraft --allgpuarch --no-nvtx
2 changes: 1 addition & 1 deletion conda/recipes/libraft/build_libraft_template.sh
@@ -2,4 +2,4 @@
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

# Just building template so we verify it uses libraft.so and fail if it doesn't build
./build.sh template -v
./build.sh template
2 changes: 1 addition & 1 deletion conda/recipes/libraft/build_libraft_tests.sh
@@ -1,5 +1,5 @@
#!/usr/bin/env bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.

./build.sh tests bench-prims -v --allgpuarch --no-nvtx --build-metrics=tests_bench_prims --incl-cache-stats
./build.sh tests bench-prims --allgpuarch --no-nvtx --build-metrics=tests_bench_prims --incl-cache-stats
cmake --install cpp/build --component testing
3 changes: 1 addition & 2 deletions conda/recipes/raft-ann-bench-cpu/meta.yaml
@@ -60,8 +60,7 @@ requirements:
- pyyaml
- pandas
- benchmark

about:
home: https://rapids.ai/
license: Apache-2.0
summary: libraft ann bench
summary: RAFT ANN CPU benchmarks
2 changes: 1 addition & 1 deletion conda/recipes/raft-ann-bench/build.sh
@@ -1,5 +1,5 @@
#!/usr/bin/env bash
# Copyright (c) 2023, NVIDIA CORPORATION.

./build.sh bench-ann -v --allgpuarch --no-nvtx --build-metrics=bench_ann --incl-cache-stats
./build.sh bench-ann --allgpuarch --no-nvtx --build-metrics=bench_ann --incl-cache-stats
cmake --install cpp/build --component ann_bench
3 changes: 0 additions & 3 deletions conda/recipes/raft-ann-bench/conda_build_config.yaml
@@ -25,9 +25,6 @@ gtest_version:
glog_version:
- ">=0.6.0"

faiss_version:
- ">=1.7.1"

h5py_version:
- ">=3.8.0"

10 changes: 0 additions & 10 deletions conda/recipes/raft-ann-bench/meta.yaml
@@ -70,11 +70,6 @@ requirements:
{% endif %}
- glog {{ glog_version }}
- nlohmann_json {{ nlohmann_json_version }}
# Temporarily ignore faiss benchmarks on CUDA 12 because packages do not exist yet
{% if cuda_major == "11" %}
- faiss-proc=*=cuda
- libfaiss {{ faiss_version }}
{% endif %}
- h5py {{ h5py_version }}
- benchmark
- matplotlib
@@ -92,11 +87,6 @@ requirements:
- cudatoolkit
{% endif %}
- glog {{ glog_version }}
# Temporarily ignore faiss benchmarks on CUDA 12 because packages do not exist yet
{% if cuda_major == "11" %}
- faiss-proc=*=cuda
- libfaiss {{ faiss_version }}
{% endif %}
- h5py {{ h5py_version }}
- benchmark
- glog {{ glog_version }}
112 changes: 92 additions & 20 deletions cpp/bench/ann/CMakeLists.txt
@@ -15,9 +15,18 @@
# ##################################################################################################
# * benchmark options ------------------------------------------------------------------------------

option(RAFT_ANN_BENCH_USE_FAISS_BFKNN "Include faiss' brute-force knn algorithm in benchmark" ON)
option(RAFT_ANN_BENCH_USE_FAISS_IVF_FLAT "Include faiss' ivf flat algorithm in benchmark" ON)
option(RAFT_ANN_BENCH_USE_FAISS_IVF_PQ "Include faiss' ivf pq algorithm in benchmark" ON)
option(RAFT_ANN_BENCH_USE_FAISS_GPU_FLAT "Include faiss' brute-force knn algorithm in benchmark" ON)
option(RAFT_ANN_BENCH_USE_FAISS_GPU_IVF_FLAT "Include faiss' ivf flat algorithm in benchmark" ON)
option(RAFT_ANN_BENCH_USE_FAISS_GPU_IVF_PQ "Include faiss' ivf pq algorithm in benchmark" ON)
option(RAFT_ANN_BENCH_USE_FAISS_CPU_FLAT "Include faiss' cpu brute-force algorithm in benchmark" ON)

option(RAFT_ANN_BENCH_USE_FAISS_CPU_IVF_FLAT "Include faiss' cpu ivf flat algorithm in benchmark"
ON
)
option(RAFT_ANN_BENCH_USE_FAISS_CPU_IVF_PQ "Include faiss' cpu ivf pq algorithm in benchmark" ON)
option(RAFT_ANN_BENCH_USE_RAFT_IVF_FLAT "Include raft's ivf flat algorithm in benchmark" ON)
option(RAFT_ANN_BENCH_USE_RAFT_IVF_PQ "Include raft's ivf pq algorithm in benchmark" ON)
option(RAFT_ANN_BENCH_USE_RAFT_CAGRA "Include raft's CAGRA in benchmark" ON)
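The new CPU options above are ordinary CMake cache options, so they can be pre-seeded at configure time. Below is a minimal sketch, not part of this PR, of a hypothetical initial-cache file (say cpu_faiss_bench.cmake, passed with cmake -C cpu_faiss_bench.cmake) that builds only the CPU faiss benchmarks:

# Hypothetical initial-cache file; only the variable names come from the options above.
set(BUILD_CPU_ONLY ON CACHE BOOL "Build without CUDA" FORCE)
set(RAFT_ANN_BENCH_USE_FAISS_CPU_FLAT ON CACHE BOOL "" FORCE)
set(RAFT_ANN_BENCH_USE_FAISS_CPU_IVF_FLAT ON CACHE BOOL "" FORCE)
set(RAFT_ANN_BENCH_USE_FAISS_CPU_IVF_PQ ON CACHE BOOL "" FORCE)

With BUILD_CPU_ONLY set, the branch further down in this file turns the GPU and RAFT algorithms off automatically, so only the FAISS_CPU_* options need to be enabled explicitly.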
@@ -33,9 +42,15 @@ option(RAFT_ANN_BENCH_SINGLE_EXE
find_package(Threads REQUIRED)

if(BUILD_CPU_ONLY)
set(RAFT_ANN_BENCH_USE_FAISS_BFKNN OFF)
set(RAFT_ANN_BENCH_USE_FAISS_IVF_FLAT OFF)
set(RAFT_ANN_BENCH_USE_FAISS_IVF_PQ OFF)

# Include necessary logging dependencies
include(cmake/thirdparty/get_fmt.cmake)
include(cmake/thirdparty/get_spdlog.cmake)

set(RAFT_FAISS_ENABLE_GPU OFF)
set(RAFT_ANN_BENCH_USE_FAISS_GPU_FLAT OFF)
set(RAFT_ANN_BENCH_USE_FAISS_GPU_IVF_FLAT OFF)
set(RAFT_ANN_BENCH_USE_FAISS_GPU_IVF_PQ OFF)
set(RAFT_ANN_BENCH_USE_RAFT_IVF_FLAT OFF)
set(RAFT_ANN_BENCH_USE_RAFT_IVF_PQ OFF)
set(RAFT_ANN_BENCH_USE_RAFT_CAGRA OFF)
@@ -44,22 +59,33 @@ else()
# Disable faiss benchmarks on CUDA 12 since faiss is not yet CUDA 12-enabled.
# https://github.com/rapidsai/raft/issues/1627
if(${CMAKE_CUDA_COMPILER_VERSION} VERSION_GREATER_EQUAL 12.0.0)
set(RAFT_ANN_BENCH_USE_FAISS_BFKNN OFF)
set(RAFT_ANN_BENCH_USE_FAISS_IVF_FLAT OFF)
set(RAFT_ANN_BENCH_USE_FAISS_IVF_PQ OFF)
set(RAFT_FAISS_ENABLE_GPU OFF)
set(RAFT_ANN_BENCH_USE_FAISS_GPU_FLAT OFF)
set(RAFT_ANN_BENCH_USE_FAISS_GPU_IVF_FLAT OFF)
set(RAFT_ANN_BENCH_USE_FAISS_GPU_IVF_PQ OFF)
set(RAFT_ANN_BENCH_USE_FAISS_CPU_FLAT OFF)
set(RAFT_ANN_BENCH_USE_FAISS_CPU_IVF_PQ OFF)
set(RAFT_ANN_BENCH_USE_FAISS_CPU_IVF_FLAT OFF)
else()
set(RAFT_FAISS_ENABLE_GPU ON)
endif()
endif()

set(RAFT_ANN_BENCH_USE_FAISS OFF)
if(RAFT_ANN_BENCH_USE_FAISS_BFKNN
OR RAFT_ANN_BENCH_USE_FAISS_IVFPQ
OR RAFT_ANN_BENCH_USE_FAISS_IFFLAT
if(RAFT_ANN_BENCH_USE_FAISS_GPU_FLAT
OR RAFT_ANN_BENCH_USE_FAISS_GPU_IVF_PQ
OR RAFT_ANN_BENCH_USE_FAISS_GPU_IVF_FLAT
OR RAFT_ANN_BENCH_USE_FAISS_CPU_FLAT
OR RAFT_ANN_BENCH_USE_FAISS_CPU_IVF_PQ
OR RAFT_ANN_BENCH_USE_FAISS_CPU_IVF_FLAT
)
set(RAFT_ANN_BENCH_USE_FAISS ON)
set(RAFT_USE_FAISS_STATIC ON)
endif()

set(RAFT_ANN_BENCH_USE_RAFT OFF)
if(RAFT_ANN_BENCH_USE_RAFT_IVF_PQ
OR RAFT_ANN_BENCH_USE_RAFT_BRUTE_FORCE
OR RAFT_ANN_BENCH_USE_RAFT_IVF_FLAT
OR RAFT_ANN_BENCH_USE_RAFT_CAGRA
)
@@ -80,6 +106,12 @@ if(RAFT_ANN_BENCH_USE_GGNN)
endif()

if(RAFT_ANN_BENCH_USE_FAISS)
# We need to ensure that faiss has all the conda
# information. So we currently use the very ugly
# hammer of `link_libraries` to ensure that all
# targets in this directory and the faiss directory
# will have the conda includes/link dirs
link_libraries($<TARGET_NAME_IF_EXISTS:conda_env>)
  include(cmake/thirdparty/get_faiss.cmake)

@@ -116,14 +148,15 @@ function(ConfigureAnnBench)
${BENCH_NAME}
PRIVATE raft::raft
nlohmann_json::nlohmann_json
$<$<BOOL:${GPU_BUILD}>:$<$<BOOL:${RAFT_ANN_BENCH_USE_MULTIGPU}>:NCCL::NCCL>>
${ConfigureAnnBench_LINKS}
Threads::Threads
$<$<BOOL:${GPU_BUILD}>:${RAFT_CTK_MATH_DEPENDENCIES}>
$<TARGET_NAME_IF_EXISTS:OpenMP::OpenMP_CXX>
$<TARGET_NAME_IF_EXISTS:conda_env>
-static-libgcc
-static-libstdc++
$<$<BOOL:${BUILD_CPU_ONLY}>:fmt::fmt-header-only>
$<$<BOOL:${BUILD_CPU_ONLY}>:spdlog::spdlog_header_only>
)

set_target_properties(
@@ -201,6 +234,12 @@ if(RAFT_ANN_BENCH_USE_RAFT_IVF_FLAT)
)
endif()

if(RAFT_ANN_BENCH_USE_RAFT_BRUTE_FORCE)
ConfigureAnnBench(
NAME RAFT_BRUTE_FORCE PATH bench/ann/src/raft/raft_benchmark.cu LINKS raft::compiled
)
endif()

if(RAFT_ANN_BENCH_USE_RAFT_CAGRA)
ConfigureAnnBench(
NAME
@@ -213,20 +252,52 @@ if(RAFT_ANN_BENCH_USE_RAFT_CAGRA)
)
endif()

if(RAFT_ANN_BENCH_USE_FAISS_IVF_FLAT)
set(RAFT_FAISS_TARGETS faiss::faiss)
if(TARGET faiss::faiss_avx2)
set(RAFT_FAISS_TARGETS faiss::faiss_avx2)
endif()

message("RAFT_FAISS_TARGETS: ${RAFT_FAISS_TARGETS}")
message("CUDAToolkit_LIBRARY_DIR: ${CUDAToolkit_LIBRARY_DIR}")
if(RAFT_ANN_BENCH_USE_FAISS_CPU_FLAT)
ConfigureAnnBench(
NAME FAISS_CPU_FLAT PATH bench/ann/src/faiss/faiss_cpu_benchmark.cpp LINKS
${RAFT_FAISS_TARGETS}
)
endif()

if(RAFT_ANN_BENCH_USE_FAISS_CPU_IVF_FLAT)
ConfigureAnnBench(
NAME FAISS_CPU_IVF_FLAT PATH bench/ann/src/faiss/faiss_cpu_benchmark.cpp LINKS
${RAFT_FAISS_TARGETS}
)
endif()

if(RAFT_ANN_BENCH_USE_FAISS_CPU_IVF_PQ)
ConfigureAnnBench(
NAME FAISS_IVF_FLAT PATH bench/ann/src/faiss/faiss_benchmark.cu LINKS faiss::faiss
NAME FAISS_CPU_IVF_PQ PATH bench/ann/src/faiss/faiss_cpu_benchmark.cpp LINKS
${RAFT_FAISS_TARGETS}
)
endif()

if(RAFT_ANN_BENCH_USE_FAISS_IVF_PQ)
if(RAFT_ANN_BENCH_USE_FAISS_GPU_IVF_FLAT)
ConfigureAnnBench(
NAME FAISS_IVF_PQ PATH bench/ann/src/faiss/faiss_benchmark.cu LINKS faiss::faiss
NAME FAISS_GPU_IVF_FLAT PATH bench/ann/src/faiss/faiss_gpu_benchmark.cu LINKS
${RAFT_FAISS_TARGETS}
)
endif()

if(RAFT_ANN_BENCH_USE_FAISS_BFKNN)
ConfigureAnnBench(NAME FAISS_BFKNN PATH bench/ann/src/faiss/faiss_benchmark.cu LINKS faiss::faiss)
if(RAFT_ANN_BENCH_USE_FAISS_GPU_IVF_PQ)
ConfigureAnnBench(
NAME FAISS_GPU_IVF_PQ PATH bench/ann/src/faiss/faiss_gpu_benchmark.cu LINKS
${RAFT_FAISS_TARGETS}
)
endif()

if(RAFT_ANN_BENCH_USE_FAISS_GPU_FLAT)
ConfigureAnnBench(
NAME FAISS_GPU_FLAT PATH bench/ann/src/faiss/faiss_gpu_benchmark.cu LINKS ${RAFT_FAISS_TARGETS}
)
endif()

if(RAFT_ANN_BENCH_USE_GGNN)
@@ -277,7 +348,8 @@ if(RAFT_ANN_BENCH_SINGLE_EXE)
target_compile_definitions(
ANN_BENCH
PRIVATE
$<$<BOOL:${CUDAToolkit_FOUND}>:ANN_BENCH_LINK_CUDART="libcudart.so.${CUDAToolkit_VERSION_MAJOR}.${CUDAToolkit_VERSION_MINOR}.${CUDAToolkit_VERSION_PATCH}">
$<$<BOOL:${CUDAToolkit_FOUND}>:ANN_BENCH_LINK_CUDART="libcudart.so.${CUDAToolkit_VERSION_MAJOR}.${CUDAToolkit_VERSION_MINOR}.${CUDAToolkit_VERSION_PATCH}
">
$<$<BOOL:${NVTX3_HEADERS_FOUND}>:ANN_BENCH_NVTX3_HEADERS_FOUND>
)
