Hi, thanks for your awesome work! I tried to implement PANTHER on around 3000 WSIs. I successfully generated a prototype and embedding for c=10 and c=16, and ran the demo code for visualization. However, when I calculate the "mus", they're almost all equal for every WSI (see below). Any thoughts on why this is happening?
I also noticed that the imbalance value during training is consistently equal to k (16), which I believe is incorrect.
Thank you!
I'm not sure why this could be happening. Have you checked each prototype to see whether they are sufficiently different from one another?
Another option is to normalize all embeddings to unit L2 norm and then rerun the procedure to see if that resolves the issue.
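A minimal sketch of that suggestion, assuming the patch embeddings are rows of a NumPy array (the shapes, helper name, and random data below are illustrative, not from the PANTHER codebase):

```python
import numpy as np

def l2_normalize(embeddings, eps=1e-12):
    """Scale each row (one patch embedding) to unit L2 norm."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.maximum(norms, eps)

# Hypothetical stand-in for extracted patch features (e.g., UNI outputs).
rng = np.random.default_rng(0)
feats = l2_normalize(rng.normal(size=(500, 1024)).astype(np.float32))

# Distinctness check: pairwise distances between prototypes should not be ~0.
protos = feats[:16]  # placeholder for the learned prototypes
dists = np.linalg.norm(protos[:, None] - protos[None, :], axis=-1)
print(dists[np.triu_indices(16, k=1)].min())
```

If the minimum off-diagonal distance is near zero, the prototypes have collapsed and the downstream "mus" will look identical across slides.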
Hi, thanks for the response. I'm using UNI. I tried redownloading the model from Hugging Face and regenerating the embeddings from scratch, but ran into the same issue. Here's a screenshot of the output for more context.
I believe this has something to do with the faiss implementation of k-means. When I switch the mode parameter from faiss to kmeans to use the sklearn CPU version, the visualizations and the example patches from each cluster make sense morphologically. However, since sklearn doesn't report the same per-iteration metrics, I'm not sure whether those metric values are expected. For now, I've resorted to using sklearn until this gets sorted out.
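For reference, the sklearn fallback amounts to something like the sketch below (the array shapes, random data, and parameter values are illustrative; PANTHER's own mode switch handles this internally):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical bag of patch embeddings pooled across slides.
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 64)).astype(np.float32)

k = 16
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
prototypes = km.cluster_centers_  # initial "mus", shape (k, 64)
labels = km.labels_               # cluster assignment per patch

# Sanity check: if clustering worked, prototypes should differ pairwise.
pairwise = np.linalg.norm(prototypes[:, None] - prototypes[None, :], axis=-1)
print(pairwise[np.triu_indices(k, k=1)].min())
```

Comparing this minimum pairwise distance between the sklearn and faiss runs is one way to confirm whether the faiss path is producing degenerate centroids.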