
Added OPTICS clustering algorithm #295

Merged
merged 8 commits into from
Sep 12, 2024

Conversation

norm4nn
Contributor

@norm4nn norm4nn commented Aug 9, 2024

Added:

  • OPTICS clustering algorithm
  • Its description
  • doctests

Currently, the only available clustering method is dbscan, but I plan to add the xi method soon.
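For intuition, here is what DBSCAN-style cluster extraction from an OPTICS run looks like, as a small Python sketch loosely following scikit-learn's cluster_optics_dbscan logic (the function name and signature are hypothetical, and this is not the PR's Elixir code):

```python
def extract_dbscan(ordering, reachability, core_distances, eps):
    """Derive DBSCAN-style cluster labels from OPTICS output.

    Points are visited in OPTICS order. A point starts a new cluster when
    its reachability exceeds eps but its core distance is still within eps;
    a point whose reachability and core distance both exceed eps is noise (-1).
    """
    labels = [0] * len(ordering)
    cluster = -1
    for idx in ordering:
        if reachability[idx] > eps:
            if core_distances[idx] <= eps:
                cluster += 1          # density jump at a core point: new cluster
                labels[idx] = cluster
            else:
                labels[idx] = -1      # not density-reachable at eps: noise
        else:
            labels[idx] = cluster     # continues the current cluster
    return labels
```

The point of OPTICS is that this extraction is cheap: once reachabilities and the ordering are computed, labels for any eps below max_eps can be produced without rerunning the clustering.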

@msluszniak
Contributor

Thank you for the PR! I dropped some small comments for now. I'll try to make a more careful pass soon ;)


module when is_atom(module) ->
module
end
Contributor

Please remember to run mix format :)

Member

@krstopro krstopro left a comment

Some comments from my side. There are several more potential problems:

  • There is no struct within the module. This seems a bit strange.
  • Metric might need to be passed as an option as well.
  • Don't forget to write unit tests.

None of them seem serious and other than that it looks fine. I might have a more detailed look later this week.

processed = Nx.broadcast(0, {n_samples})

{_order_idx, core_distances, reachability, predecessor, _processed, ordering, _x, _max_eps} =
while {order_idx = 0, core_distances, reachability, predecessor, processed, ordering, x,
Member

I think this can be vectorized.

Contributor

Can you provide a more specific hint on how to vectorize? Or maybe you can do a separate pass later and vectorize it yourself? Given that they are still getting acquainted with the codebase + Nx, it may be a bit too complex to pull off. :)

Member

To be honest, I didn't think much about it, nor am I insisting on vectorization for this pull request. I just mentioned it as something to think of. Should have written that as well. 😅

That said, the fact that the condition in the loop is order_idx < num_samples and order_idx is incremented in every iteration kinda indicates that this could be vectorized. I need to have a closer look.
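Some pieces of OPTICS do vectorize naturally. For instance, the core distances (each point's distance to its min_samples-th nearest neighbor) can be computed for all points at once; a NumPy sketch of that idea (names are made up, and this is not Scholar code):

```python
import numpy as np

def core_distances(x, min_samples):
    # Full pairwise Euclidean distance matrix; each point's distance to
    # itself (0) counts as its first nearest neighbor.
    d = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1))
    # Sort each row and take the min_samples-th smallest entry.
    return np.sort(d, axis=1)[:, min_samples - 1]
```

The sequential part of OPTICS is the ordering loop itself, where each iteration's choice depends on reachabilities updated in the previous one; that is the part that resists vectorization.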

Contributor Author

You're absolutely right about the struct; it should be included to maintain consistency.

Also, I get a headache just thinking about vectorizing this entire section of code. Implementing it in its current state was already pretty complex.

Member

@krstopro krstopro Aug 15, 2024


Very well, you can leave vectorization to someone else :)

max_eps = opts[:max_eps]
n_features = Nx.axis_size(x, 1)
n_samples = Nx.axis_size(x, 0)
nbrs = Scholar.Neighbors.BruteKNN.fit(x, num_neighbors: n_samples)
Member

You are using Scholar.Neighbors.BruteKNN here even though the k-NN algorithm can be passed as an option to fit/2. Is this intended?

Contributor Author

It's not intended; I missed that. I will fix it in the next commit.

defnp compute_optics_graph(x, distances, opts \\ []) do
max_eps = opts[:max_eps]
n_samples = Nx.axis_size(x, 0)
reachability = Nx.broadcast(Nx.Constants.max_finite({:f, 32}), {n_samples})
Member

Any specific reason to use Nx.Constants.max_finite/1 instead of Nx.Constants.infinity/1? Also, I don't think it's a good idea to hardcode the types, in this case :f32. Better use Nx.type(x).

Contributor Author

@norm4nn norm4nn Aug 15, 2024


There is a reason. In line 168:
point = Nx.argmin(Nx.select(unprocessed_mask, reachability, Nx.Constants.infinity()))
I want to ensure that a point already processed won’t be selected again as the next point to process. At the same time, the reachability tensor should initially hold the highest possible values (preferably infinity) because I’m searching for the smallest element in each step of the while loop.

That said, I'm not particularly proud of this approach and would appreciate any suggestions for improvement.
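The trade-off being described can be reproduced in a few lines (a NumPy sketch of the same masking trick, not the PR's code): processed points are masked to +inf before the argmin, so reachability must start at the largest finite value rather than +inf, or untouched unprocessed points would tie with the mask.

```python
import numpy as np

MAX_FINITE = np.finfo(np.float32).max

# reachability starts at the largest finite float, not +inf
reachability = np.full(4, MAX_FINITE, dtype=np.float32)
unprocessed = np.array([False, True, True, False])  # points 0 and 3 processed

# Processed points are masked to +inf, so any unprocessed point -- even one
# still at its initial value -- strictly beats them under argmin.
point = int(np.argmin(np.where(unprocessed, reachability, np.inf)))

# Had reachability been initialized to +inf instead, every entry here would
# be +inf and argmin would return index 0, a point already processed.
```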

"""
],
max_eps: [
default: Nx.Constants.infinity(),
Member

I wouldn't set the default value here, but rather in fit/2. I think it is better to check whether opts[:max_eps] is nil and, if so, set it to Nx.Constants.infinity(Nx.type(x)).

Contributor Author

It can be problematic: in the edge case where x is a tensor of type {:s, 64}, the following code will fail:

x = Nx.tensor([[1, 2], [2, 5], [3, 6], [8, 7], [8, 8], [7, 3]])
Nx.Constants.infinity(Nx.type(x))

with error

** (FunctionClauseError) no function clause matching in Nx.Type.infinity_binary/1    
    
    The following arguments were given to Nx.Type.infinity_binary/1:
    
        # 1
        {:s, 64}
    
    Attempted function clauses (showing 4 out of 4):
    
        def infinity_binary({:bf, 16})
        def infinity_binary({:f, 16})
        def infinity_binary({:f, 32})
        def infinity_binary({:f, 64})
    
    (nx 0.7.3) lib/nx/type.ex:120: Nx.Type.infinity_binary/1
    (nx 0.7.3) lib/nx/constants.ex:96: Nx.Constants.infinity/2
    #cell:y6u577gyyh6jwkmd:2: (file)

so it seems that it will only work if x is a float-type tensor.

A similar problem arises with the hardcoded float type of reachability:
reachability = Nx.broadcast(Nx.Constants.max_finite({:f, 32}), {n_samples})
I can't set its type to Nx.type(x) like this:
reachability = Nx.broadcast(Nx.Constants.max_finite(Nx.type(x)), {n_samples})
because it will break whenever Nx.type(x) returns something other than a float type (any of {:bf, 16}, {:f, 16}, {:f, 32}, {:f, 64}). The reachability tensor should always be a float type.

Member

@krstopro krstopro Aug 16, 2024


You can use Scholar.Shared.to_float_type/1. You should convert x to a floating-point tensor using Scholar.Shared.to_float/1 anyway.
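In NumPy terms, the suggestion amounts to something like this (a sketch; to_float here is a hypothetical stand-in loosely mirroring Scholar.Shared.to_float/1, not its actual implementation):

```python
import numpy as np

def to_float(x):
    # Keep floating inputs as-is; promote integer inputs to a float dtype.
    if np.issubdtype(x.dtype, np.floating):
        return x
    return x.astype(np.float32)

x = np.array([[1, 2], [2, 5], [3, 6]])   # integer input, like {:s, 64}
xf = to_float(x)
# Infinity is representable now that the data is floating-point.
inf = np.array(np.inf, dtype=xf.dtype)
```

Promoting the input once up front also lets every derived tensor (reachability, core distances) share one float type instead of hardcoding {:f, 32}.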

"""
],
eps: [
default: Nx.Constants.infinity(),
Member

Same here.

>
"""

deftransform fit(x, opts \\ []) do
Contributor

So please make this one a defn, called from a deftransform that splits the options and invokes the algorithm. The important point is that you need to pass through at least one defn before you do any work; everything from there onwards is jitted. :)

Contributor Author

Ok, I will do that

Member

@krstopro krstopro left a comment

One more comment.

I will try to have a detailed look one of these days.

"""
end

algorithm_opts = Keyword.put(algorithm_opts, :num_neighbors, min_samples)
Member

I don't think this line is needed anymore.

Contributor Author

It turns out that it is needed: in the same function, a few lines below, I call .fit() on algorithm_module, which requires :num_neighbors. However, the value I set here doesn't have any meaning, so it can be any positive integer, such as 1. It wouldn't be required if I could just pass algorithm_module to the module struct, but that doesn't seem to be possible.

@josevalim josevalim closed this Sep 12, 2024
@josevalim josevalim reopened this Sep 12, 2024
@josevalim josevalim merged commit cea4657 into elixir-nx:main Sep 12, 2024
2 of 4 checks passed
@josevalim
Contributor

💚 💙 💜 💛 ❤️

4 participants