If I see a model that can do this reliably, I will try to implement it. If someone has a suggestion please let me know.
This was my previous response to a related request, but it may be outdated by now:
It is not currently possible with these nodes.
The model used by the audio separation node is Hybrid Demucs [Défossez, 2021], trained on both the training and test sets of MUSDB-HQ [Rafii et al., 2019]. The model separates audio into the categories of Bass, Drums, Other, and Vocals, which is the standard set of stems (see Models Comparison).
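For reference, here is a minimal sketch of how the underlying Demucs model can be run directly from its command line, outside of these nodes. This assumes the `demucs` package is installed (`pip install demucs`); `song.mp3` is a hypothetical input file.

```shell
# Separate a track into the four standard stems (bass, drums, other, vocals).
# Output is written under separated/<model_name>/song/, one .wav per stem.
demucs song.mp3

# Two-stem mode: keep only vocals, merging everything else into "no_vocals".
demucs --two-stems=vocals song.mp3
```

Note that even in two-stem mode the categories are fixed by the model's training; there is no flag to target an arbitrary instrument such as guitar or piano.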
The models that aim to separate arbitrary sounds (rather than a fixed set of categories) are less consistent and come with many trade-offs. For that reason, I don't think any of them are worth implementing yet. Maybe I am wrong about that.
In any case, if you really need to separate piano, I'd suggest looking into query-based sound separation models such as https://github.com/Audio-AGI/AudioSep.
extraction of guitar tracks only