Added OPTICS clustering algorithm #295
I wouldn't set the default value here, rather in fit/2. I think it is better to check if opts[:max_eps] is nil and then set it to Nx.Constants.infinity(Nx.type(x)).
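A minimal sketch of the suggested nil check, assuming x is the input tensor and opts the keyword options available inside fit/2:

```elixir
# Hedged sketch of the suggestion above: resolve :max_eps inside fit/2
# rather than giving it a default in the option schema. `x` and `opts`
# are assumed from the surrounding code.
max_eps =
  if opts[:max_eps] == nil do
    # infinity in the same type as the input tensor
    Nx.Constants.infinity(Nx.type(x))
  else
    opts[:max_eps]
  end
```

Note that Nx.Constants.infinity/1 only accepts float types, which is exactly the edge case raised in the reply below.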
It can be problematic: in the edge case where x is a tensor of type {:s, 64}, the following code will fail with an error, so it seems that it will only work if x is a float-type tensor.

There is a similar problem with the hardcoded float type for reachability:

reachability = Nx.broadcast(Nx.Constants.max_finite({:f, 32}), {n_samples})

I can't set its type to Nx.type(x) like this:

reachability = Nx.broadcast(Nx.Constants.max_finite(Nx.type(x)), {n_samples})

because it will break whenever Nx.type(x) returns something other than a float type (any of {:bf, 16}, {:f, 16}, {:f, 32}, {:f, 64}). The reachability tensor should always be a float type.
You can use Scholar.Shared.to_float_type/1. You should convert x to a floating-point tensor using Scholar.Shared.to_float/1 anyway.
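A short sketch of how the two helpers could fit together; the names x and n_samples are assumed from the surrounding code:

```elixir
# Hedged sketch combining both suggestions: normalize x to a float
# tensor first, then derive constants from its (now guaranteed float)
# type, so neither the input nor the constant is hardcoded to :f32.
x = Scholar.Shared.to_float(x)
reachability = Nx.broadcast(Nx.Constants.max_finite(Nx.type(x)), {n_samples})
```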
So please make this one a defn and then call a deftransform that will split the options and invoke the algorithm. The important point is that you need to pass through at least one defn before you do any work; then everything from there onwards is jitted. :)
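A minimal sketch of the suggested split, with a hypothetical module name and option set (not the actual OPTICS code):

```elixir
defmodule OpticsSketch do
  import Nx.Defn

  # deftransform runs as plain Elixir: validate and split the
  # options here, before any tensor work happens.
  deftransform fit(x, opts \\ []) do
    opts = Keyword.validate!(opts, min_samples: 5)
    fit_n(x, opts)
  end

  # Everything from the first defn onwards is traced and jitted;
  # keyword options become compile-time values inside defn.
  defnp fit_n(x, opts \\ []) do
    # placeholder for the actual algorithm
    Nx.multiply(x, opts[:min_samples])
  end
end
```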
Ok, I will do that
Any specific reason to use Nx.Constants.max_finite/1 instead of Nx.Constants.infinity/1? Also, I don't think it's a good idea to hardcode the types, in this case :f32. Better use Nx.type(x).
There is a reason. In line 168:
point = Nx.argmin(Nx.select(unprocessed_mask, reachability, Nx.Constants.infinity()))
I want to ensure that a point already processed won’t be selected again as the next point to process. At the same time, the reachability tensor should initially hold the highest possible values (preferably infinity) because I’m searching for the smallest element in each step of the while loop.
That said, I'm not particularly proud of this approach and would appreciate any suggestions for improvement.
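To illustrate the interplay described above, a small standalone sketch with made-up values:

```elixir
# Processed points (mask = 0) are swapped for infinity, so Nx.argmin
# can never pick them again; unprocessed points keep their reachability
# values. Initializing reachability with max_finite rather than
# infinity keeps unprocessed points strictly below the masking value.
unprocessed_mask = Nx.tensor([0, 1, 1])
reachability = Nx.tensor([0.5, 1.2, 0.7])
inf = Nx.Constants.infinity(Nx.type(reachability))
point = Nx.argmin(Nx.select(unprocessed_mask, reachability, inf))
# point is index 2: the smallest reachability among unprocessed points,
# even though index 0 holds the globally smallest value
```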
I think this can be vectorized.
Can you provide a more specific hint on how to vectorize? Or maybe you can do a separate pass later and vectorize it? Given they are getting acquainted with the codebase + Nx, it may be a bit too complex to pull off. :)
To be honest, I didn't think much about it, nor am I insisting on vectorization for this pull request. I just mentioned it as something to think about. I should have written that as well. 😅
That said, the fact that the condition in the loop is order_idx < num_samples and order_idx is incremented in every iteration kinda indicates that this could be vectorized. I need to have a closer look.
You're absolutely right about the struct: it should be included to maintain consistency.
Also, I get a headache just thinking about vectorizing this entire section of code. Implementing it in its current state was already pretty complex.
Very well, you can leave vectorization to someone else :)
You are using Scholar.Neighbors.BruteKNN here even though the k-NN algorithm can be passed as an option to fit/2. Is this intended?
It's not intended; I missed that. I will fix it in the next commit.