
Reduce memory usage of as_categorical_column #14138

Merged

Conversation

wence-
Contributor

@wence- wence- commented Sep 20, 2023

Description

The main culprit was the way the codes returned from _label_encoding were being ordered: we generated an int64 column for the order, gathered it through the left gather map, and then argsorted, before using that ordering as a gather map for the codes.

We note that gather(y, with=argsort(x)) is equivalent to sort_by_key(y, with=x), so we use that instead (avoiding an unnecessary gather). Furthermore, gather([0..n), with=x) is just x, so we can drop another gather as well.
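The two identities can be sanity-checked with a small NumPy sketch. NumPy stands in for the libcudf primitives here: gather(y, m) is fancy indexing y[m], and a stable Python key-sort stands in for sort_by_key.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
x = rng.integers(0, 50, size=n)     # sort keys (with ties, to exercise stability)
y = rng.integers(0, 1_000, size=n)  # values to reorder

# Identity 1: gather(y, with=argsort(x)) == sort_by_key(y, with=x).
via_gather = y[np.argsort(x, kind="stable")]
via_key_sort = np.array([v for _, v in sorted(zip(x, y), key=lambda p: p[0])])
assert np.array_equal(via_gather, via_key_sort)

# Identity 2: gather([0..n), with=x) == x for any valid gather map x.
gather_map = rng.integers(0, n, size=n)
assert np.array_equal(np.arange(n)[gather_map], gather_map)
```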

This reduces the peak memory footprint of categorifying a random column of 500_000_000 int32 values where there are 100 unique values from 24.75 GiB to 11.67 GiB.
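As a rough illustration of where the savings come from, here is a NumPy sketch of the old and new orderings. Treating left_map as a permutation is an assumption for illustration only; the real code operates on libcudf columns.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
codes = rng.integers(0, 100, size=n)
left_map = rng.permutation(n)  # hypothetical left gather map

# Old path (per the description): materialize an int64 order column,
# gather it through the left gather map, argsort, then gather the codes.
order = np.arange(n, dtype="int64")
order = order[left_map]
old_codes = codes[np.argsort(order, kind="stable")]

# New path: since gather([0..n), with=x) == x, the gathered order column
# is just left_map, so the int64 temporary is never materialized, and
# gather(codes, with=argsort(left_map)) is sort_by_key(codes, with=left_map).
new_codes = codes[np.argsort(left_map, kind="stable")]

assert np.array_equal(old_codes, new_codes)
```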

Test code

import cudf
import cupy as cp

K = 100
N = 500_000_000
rng = cp.random.RandomState()  # public alias of the private cp.random._generator.RandomState
column = cudf.core.column.as_column(rng.choice(cp.arange(K, dtype="int32"), size=(N,), replace=True))
column = column.astype("category", ordered=False)

Before

[Screenshot from 2023-09-20: memory profile before the change, peak 24.75 GiB]

After

[Screenshot from 2023-09-20: memory profile after the change, peak 11.67 GiB]

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@wence- wence- requested a review from a team as a code owner September 20, 2023 14:00
@github-actions github-actions bot added the Python Affects Python cuDF API. label Sep 20, 2023
@wence- wence- added Performance Performance related issue improvement Improvement / enhancement to an existing function non-breaking Non-breaking change labels Sep 20, 2023
Contributor

@bdice bdice left a comment


Great! Note that this is an example of the performance antipattern discussed in #13557.

@wence-
Contributor Author

wence- commented Sep 20, 2023

/merge

@rapids-bot rapids-bot bot merged commit e87d2fc into rapidsai:branch-23.10 Sep 20, 2023
58 checks passed
@wence- wence- deleted the wence/fix/categorical-mem-usage branch September 20, 2023 20:18
@harrism
Member

harrism commented Sep 20, 2023

Is performance affected?

@wence-
Contributor Author

wence- commented Sep 21, 2023

Is performance affected?

Yes, and positively. I ran:

import time
import cupy as cp
import cudf
import rmm

rmm.reinitialize(pool_allocator=True)

rng = cp.random.RandomState(seed=108)  # public alias of the private cp.random._generator.RandomState
for K in [2**4, 2**10, 2**12, 2**14, 2**16]:
    for N in [1_000_000, 10_000_000, 100_000_000, 250_000_000]:
        col = cudf.core.column.as_column(rng.choice(cp.arange(K, dtype="uint32"), size=N, replace=True))
        start = time.time()
        for _ in range((reps := 1_000_000_000 // N)):
            y = col.astype("category", ordered=False)
            del y
        end = time.time()
        print(f"K={K} N={N}: {(end - start) / reps:.3f}s per conversion")
        del col

[Plot: runtime comparison of old vs. new code across K and N]

Across column sizes and numbers of unique values, the new code is between 25% and 30% faster.

@harrism
Member

harrism commented Sep 21, 2023

Excellent!
