
Added OPTICS clustering algorithm #295

Merged: 8 commits, Sep 12, 2024
259 changes: 259 additions & 0 deletions lib/scholar/cluster/optics.ex
@@ -0,0 +1,259 @@
defmodule Scholar.Cluster.OPTICS do
@moduledoc """
OPTICS (Ordering Points To Identify the Clustering Structure) is an algorithm
for finding density-based clusters in spatial data.

It is closely related to DBSCAN: it finds core samples of high density and
expands clusters from them. Unlike DBSCAN, it keeps the cluster hierarchy for a
variable neighborhood radius. Clusters are then extracted using a DBSCAN-like
method.
"""
import Nx.Defn
require Nx

opts = [
min_samples: [
default: 5,
type: :pos_integer,
doc: """
The number of samples in a neighborhood for a point to be considered as a core point.
"""
],
max_eps: [
default: Nx.Constants.infinity(),
Member:
I wouldn't set the default value here, but rather in fit/2. I think it is better to check whether opts[:max_eps] is nil and then set it to Nx.Constants.infinity(Nx.type(x)).

Contributor Author:
That can be problematic: in the edge case where x is a tensor of type {:s, 64}, the following code will fail:

x = Nx.tensor([[1, 2], [2, 5], [3, 6], [8, 7], [8, 8], [7, 3]])
Nx.Constants.infinity(Nx.type(x))

with error

** (FunctionClauseError) no function clause matching in Nx.Type.infinity_binary/1    
    
    The following arguments were given to Nx.Type.infinity_binary/1:
    
        # 1
        {:s, 64}
    
    Attempted function clauses (showing 4 out of 4):
    
        def infinity_binary({:bf, 16})
        def infinity_binary({:f, 16})
        def infinity_binary({:f, 32})
        def infinity_binary({:f, 64})
    
    (nx 0.7.3) lib/nx/type.ex:120: Nx.Type.infinity_binary/1
    (nx 0.7.3) lib/nx/constants.ex:96: Nx.Constants.infinity/2
    #cell:y6u577gyyh6jwkmd:2: (file)

so it seems it will only work if x is a float-type tensor.

There is a similar problem with the hardcoded float type for reachability:
reachability = Nx.broadcast(Nx.Constants.max_finite({:f, 32}), {n_samples})
I can't set its type to Nx.type(x) like this:
reachability = Nx.broadcast(Nx.Constants.max_finite(Nx.type(x)), {n_samples})
because it will break whenever Nx.type(x) returns anything other than a float type ({:bf, 16}, {:f, 16}, {:f, 32}, or {:f, 64}). The reachability tensor should always have a float type.

Member (@krstopro), Aug 16, 2024:
You can use Scholar.Shared.to_float_type/1. You should convert x to a floating-point tensor using Scholar.Shared.to_float/1 anyway.
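A minimal sketch of that suggestion (hypothetical code, assuming `Scholar.Shared.to_float/1` and `Scholar.Shared.to_float_type/1` behave as the reviewer describes) could look like:

```elixir
# Hypothetical sketch: default :max_eps inside fit/2 instead of in the schema.
# Scholar.Shared.to_float/1 upcasts integer tensors to a float type, so
# Nx.Constants.infinity/1 always receives a valid float type here.
x = Scholar.Shared.to_float(x)

opts =
  Keyword.put_new_lazy(opts, :max_eps, fn ->
    Nx.Constants.infinity(Scholar.Shared.to_float_type(x))
  end)
```

This sidesteps the {:s, 64} failure above, since the type passed to Nx.Constants.infinity/1 is always a float type by construction.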

type: {:custom, Scholar.Options, :beta, []},
doc: """
The maximum distance between two samples for one to be considered as in the neighborhood of the other.
Default value of Nx.Constants.infinity() will identify clusters across all scales.
"""
],
eps: [
default: Nx.Constants.infinity(),
Member:
Same here.

type: {:custom, Scholar.Options, :beta, []},
doc: """
The maximum distance between two samples for one to be considered as in the neighborhood of the other.
By default it assumes the same value as max_eps.
"""
],
algorithm: [
default: :brute,
type: :atom,
doc: """
Algorithm used to compute the k-nearest neighbors. Possible values:

* `:brute` - Brute-force search. See `Scholar.Neighbors.BruteKNN` for more details.

* `:kd_tree` - k-d tree. See `Scholar.Neighbors.KDTree` for more details.

* `:random_projection_forest` - Random projection forest. See `Scholar.Neighbors.RandomProjectionForest` for more details.

* Module implementing `fit(data, opts)` and `predict(model, query)`. `predict/2` must return a tuple containing the indices
of the k-nearest neighbors of the query points as well as the distances between the query points and their k-nearest neighbors.
The module must also accept a `:num_neighbors` option.
"""
]
]

@opts_schema NimbleOptions.new!(opts)

@doc """
Performs OPTICS clustering on `x`, which is a tensor of shape `{n_samples, n_features}`.

## Options

#{NimbleOptions.docs(@opts_schema)}

## Return Values

The function returns a labels tensor of shape `{n_samples}`, containing the
cluster label for each point in the dataset given to `fit/2`.
Noisy samples are labeled -1.

## Examples

iex> x = Nx.tensor([[1, 2], [2, 5], [3, 6], [8, 7], [8, 8], [7, 3]])
iex> Scholar.Cluster.OPTICS.fit(x, min_samples: 2)
#Nx.Tensor<
s64[6]
[-1, -1, -1, -1, -1, -1]
>
iex> Scholar.Cluster.OPTICS.fit(x, eps: 4.5, min_samples: 2)
#Nx.Tensor<
s64[6]
[0, 0, 0, 1, 1, 1]
>
iex> Scholar.Cluster.OPTICS.fit(x, eps: 2, min_samples: 2)
#Nx.Tensor<
s64[6]
[-1, 0, 0, 1, 1, -1]
>
iex> Scholar.Cluster.OPTICS.fit(x, eps: 1, min_samples: 2)
#Nx.Tensor<
s64[6]
[-1, -1, -1, 0, 0, -1]
>
iex> Scholar.Cluster.OPTICS.fit(x, eps: 4.5, min_samples: 3)
#Nx.Tensor<
s64[6]
[0, 0, 0, 1, 1, -1]
>
iex> Scholar.Cluster.OPTICS.fit(x, max_eps: 2, min_samples: 1, algorithm: :kd_tree, metric: {:minkowski, 1})
#Nx.Tensor<
s64[6]
[0, 1, 1, 2, 2, 3]
>
"""

deftransform fit(x, opts \\ []) do
Contributor:
So please make this one a defn and then call a deftransform that will split the options and invoke the algorithm. The important point is that you need to pass through at least one defn before you do any work, then everything from there onwards is jitted. :)
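A rough sketch of that structure (the function split and names here are hypothetical; module resolution is elided and BruteKNN is used for brevity, mirroring the PR's own calls):

```elixir
# The public entry point is a defn, so everything downstream is jitted.
defn fit(x, opts \\ []) do
  fit_n(x, opts)
end

# A deftransform does the compile-time work: splitting and validating the
# options, then calling back into the jitted implementation.
deftransformp fit_n(x, opts) do
  {opts, algorithm_opts} = Keyword.split(opts, [:min_samples, :max_eps, :eps, :algorithm])
  opts = NimbleOptions.validate!(opts, @opts_schema)
  algorithm_opts = Keyword.put(algorithm_opts, :num_neighbors, opts[:min_samples])

  model = Scholar.Neighbors.BruteKNN.fit(x, algorithm_opts)
  {_neighbors, distances} = Scholar.Neighbors.BruteKNN.predict(model, x)
  fit_p(x, distances, opts)
end
```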

Contributor Author:
Ok, I will do that

if Nx.rank(x) != 2 do
raise ArgumentError,
"""
expected x to have shape {num_samples, num_features}, \
got tensor with shape: #{inspect(Nx.shape(x))}
"""
end
{opts, algorithm_opts} = Keyword.split(opts, [:min_samples, :max_eps, :eps, :algorithm])
opts = NimbleOptions.validate!(opts, @opts_schema)
algorithm_opts = Keyword.put(algorithm_opts, :num_neighbors, opts[:min_samples])

algorithm_module =
case opts[:algorithm] do
:brute ->
Scholar.Neighbors.BruteKNN

:kd_tree ->
Scholar.Neighbors.KDTree

:random_projection_forest ->
Scholar.Neighbors.RandomProjectionForest

module when is_atom(module) ->
module
end
Contributor:
Please remember to run mix format :)

model = algorithm_module.fit(x, algorithm_opts)
{_neighbors, distances} = algorithm_module.predict(model, x)
josevalim marked this conversation as resolved.
fit_p(x, distances, opts)
end

defnp fit_p(x, core_distances, opts \\ []) do
{core_distances, reachability, _predecessor, ordering} = compute_optics_graph(x, core_distances, opts)

eps =
if opts[:eps] == Nx.Constants.infinity() do
opts[:max_eps]
else
opts[:eps]
end

cluster_optics_dbscan(reachability, core_distances, ordering, eps: eps)
end

defnp compute_optics_graph(x, distances, opts \\ []) do
max_eps = opts[:max_eps]
n_samples = Nx.axis_size(x, 0)
reachability = Nx.broadcast(Nx.Constants.max_finite({:f, 32}), {n_samples})
Member:
Any specific reason to use Nx.Constants.max_finite/1 instead of Nx.Constants.infinity/1? Also, I don't think it's a good idea to hardcode the types, in this case :f32. Better use Nx.type(x).

Contributor Author (@norm4nn), Aug 15, 2024:
There is a reason. In line 168:
point = Nx.argmin(Nx.select(unprocessed_mask, reachability, Nx.Constants.infinity()))
I want to ensure that a point already processed won’t be selected again as the next point to process. At the same time, the reachability tensor should initially hold the highest possible values (preferably infinity) because I’m searching for the smallest element in each step of the while loop.

That said, I'm not particularly proud of this approach and would appreciate any suggestions for improvement.
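For context, a small standalone Nx sketch (not part of the PR) of why initializing reachability to infinity would break this selection step:

```elixir
# With reachability initialized to infinity, masking processed points with
# infinity makes them indistinguishable from unreached points:
reachability = Nx.broadcast(Nx.Constants.infinity({:f, 32}), {3})
unprocessed_mask = Nx.tensor([0, 1, 1], type: :u8)

masked = Nx.select(unprocessed_mask, reachability, Nx.Constants.infinity({:f, 32}))
# Every entry is :inf, so Nx.argmin/1 returns index 0, an already-processed point.
Nx.argmin(masked)
```

Initializing with Nx.Constants.max_finite/1 instead keeps unprocessed points strictly below the infinity mask, so argmin always picks an unprocessed one.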

predecessor = Nx.broadcast(-1, {n_samples})
core_distances = Nx.slice_along_axis(distances, opts[:min_samples] - 1, 1, axis: 1)
core_distances =
Nx.select(core_distances > max_eps, Nx.Constants.infinity(), core_distances)

ordering = Nx.broadcast(0, {n_samples})
processed = Nx.broadcast(0, {n_samples})

{_order_idx, core_distances, reachability, predecessor, _processed, ordering, _x, _max_eps} =
while {order_idx = 0, core_distances, reachability, predecessor, processed, ordering, x,
Member:
I think this can be vectorized.

Contributor:

Can you provide a more specific hint on how to vectorize? Or maybe you can do a separate pass later and vectorize it? Given they are getting acquainted with the codebase + Nx, it may be a bit too complex to pull off. :)

Member:

To be honest, I didn't think much about it, nor am I insisting on vectorization for this pull request. I just mentioned it as something to think about. Should have written that as well. 😅

That said, the fact that the condition in the loop is order_idx < num_samples and order_idx is incremented in every iteration kinda indicates that this could be vectorized. I need to have a closer look.

Contributor Author:

You're absolutely right about the struct: it should be included to maintain consistency.

Also, I get a headache just thinking about vectorizing this entire section of code. Implementing it in its current state was already pretty complex.

Member (@krstopro), Aug 15, 2024:

Very well, you can leave vectorization to someone else :)

max_eps},
order_idx < n_samples do
unprocessed_mask = processed == 0
point = Nx.argmin(Nx.select(unprocessed_mask, reachability, Nx.Constants.infinity()))
processed = Nx.put_slice(processed, [point], Nx.new_axis(1, 0))
ordering = Nx.put_slice(ordering, [order_idx], Nx.new_axis(point, 0))

{reachability, predecessor} =
set_reach_dist(core_distances, reachability, predecessor, point, processed, x,
max_eps: max_eps
)

{order_idx + 1, core_distances, reachability, predecessor, processed, ordering, x,
max_eps}
end

reachability =
Nx.select(
reachability == Nx.Constants.max_finite({:f, 32}),
Nx.Constants.infinity(),
reachability
)

{core_distances, reachability, predecessor, ordering}
end

defnp set_reach_dist(
core_distances,
reachability,
predecessor,
point_index,
processed,
x,
opts \\ []
) do
max_eps = opts[:max_eps]
n_features = Nx.axis_size(x, 1)
n_samples = Nx.axis_size(x, 0)
nbrs = Scholar.Neighbors.BruteKNN.fit(x, num_neighbors: n_samples)
Member:
You are using Scholar.Neighbors.BruteKNN here even though the k-NN algorithm can be passed as an option to fit/2. Is this intended?

Contributor Author:

It's not intended; I missed that. I will fix it in the next commit.

t = Nx.take(x, point_index, axis: 0)
p = Nx.broadcast(t, {1, n_features})
{neighbors, distances} = Scholar.Neighbors.BruteKNN.predict(nbrs, p)

neighbors = Nx.flatten(neighbors)
distances = Nx.flatten(distances)
indices_ngbrs = Nx.argsort(neighbors)
neighbors = Nx.take(neighbors, indices_ngbrs)
distances = Nx.take(distances, indices_ngbrs)
are_neighbors_processed = Nx.take(processed, neighbors)

filtered_neighbors =
Nx.select(
are_neighbors_processed or distances > max_eps,
-1 * neighbors,
neighbors
)

dists = Nx.flatten(Scholar.Metrics.Distance.pairwise_minkowski(p, x))
core_distance = Nx.take(core_distances, point_index)
rdists = Nx.max(dists, core_distance)
improved = rdists < reachability
improved = Nx.select(improved, filtered_neighbors, -1)

improved =
Nx.select(
improved == -1 and filtered_neighbors > 0,
Nx.multiply(filtered_neighbors, -1),
filtered_neighbors
)

rdists = Nx.select(improved >= 0, rdists, 0)
reversed_improved = Nx.max(Nx.multiply(improved, -1), 0)
krstopro marked this conversation as resolved.

reachability =
Nx.select(improved <= 0, Nx.take(reachability, reversed_improved), rdists)

predecessor =
Nx.select(improved <= 0, Nx.take(predecessor, reversed_improved), point_index)

{reachability, predecessor}
end

defnp cluster_optics_dbscan(reachability, core_distances, ordering, opts \\ []) do
eps = opts[:eps]
far_reach = Nx.flatten(reachability > eps)
near_core = Nx.flatten(core_distances <= eps)
far_and_not_near = Nx.multiply(far_reach, 1 - near_core)
far_reach = Nx.take(far_reach, ordering)
near_core = Nx.take(near_core, ordering)
far_and_near = far_reach * near_core
labels = Nx.as_type(Nx.cumulative_sum(far_and_near), :s8) - 1
labels = Nx.take(labels, Nx.argsort(ordering))
Nx.select(far_and_not_near, -1, labels)
end
end