
Speaker use: Differentiate between AI and user voices #37

Open
choombaa opened this issue Jul 3, 2023 · 6 comments

choombaa (Collaborator) commented Jul 3, 2023

Since the microphone is always listening, the AI will feed its own speech output back into transcription and create a feedback loop if headphones aren't used. We could potentially differentiate between the AI's voice and the user's.
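
A minimal sketch of the simplest mitigation, half-duplex gating (drop mic input while the assistant is speaking); the names `tts_playing`, `speak`, and `on_mic_frame` are hypothetical, not from this repo:

```python
import threading

# Set while the assistant's TTS audio is playing (hypothetical wiring).
tts_playing = threading.Event()

def speak(synthesize, play, text):
    """Play the AI's reply while holding the mic gate closed."""
    tts_playing.set()
    try:
        play(synthesize(text))  # blocking playback
    finally:
        tts_playing.clear()

def on_mic_frame(frame, feed_to_whisper):
    """Drop any mic audio captured while the AI is talking, so whisper
    never transcribes (and re-answers) its own voice."""
    if tts_playing.is_set():
        return
    feed_to_whisper(frame)
```

Gating keeps the AI from hearing itself at all, at the cost of not hearing the user either while it talks; actual voice differentiation (echo cancellation or diarization) would lift that limitation.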


yacineMTB (Owner) commented

I noticed this too.
Can you filter voices out?
I guess Zoom & macOS do this.

yacineMTB (Owner) commented

Maybe we should consider whisperX (and also abstract whisper over an HTTP interface)
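
For reference, a minimal sketch of what that HTTP abstraction could look like, assuming the reference `openai-whisper` Python package and Flask (the route name and payload shape are made up here; whisperX or whisper.cpp could sit behind the same endpoint):

```python
import tempfile

import whisper
from flask import Flask, jsonify, request

app = Flask(__name__)
model = whisper.load_model("base.en")  # model size is an arbitrary choice

@app.post("/transcribe")
def transcribe():
    # Expects multipart form data with the recording under the "audio" key.
    upload = request.files["audio"]
    with tempfile.NamedTemporaryFile(suffix=".wav") as tmp:
        upload.save(tmp.name)
        result = model.transcribe(tmp.name)
    return jsonify({"text": result["text"]})

if __name__ == "__main__":
    app.run(port=8000)
```

Putting whisper behind a route like this would also make it easy to swap in whisperX later without touching the client.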

yacineMTB (Owner) commented Jul 8, 2023

oh fuck yeah, check it out
ggerganov/whisper.cpp#1058

(screenshot from the linked PR)
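
Assuming the linked PR is whisper.cpp's tinydiarize (speaker segmentation) support, a sketch of driving it from Python; the `-tdrz` flag, the tdrz model name, and the `[SPEAKER_TURN]` marker are based on that PR and should be checked against whatever lands:

```python
import subprocess

def transcribe_with_turns(wav_path: str) -> list[str]:
    """Run whisper.cpp with tinydiarize and split the transcript at
    speaker changes, so the assistant's own lines can be filtered out."""
    out = subprocess.run(
        ["./main", "-m", "models/ggml-small.en-tdrz.bin",
         "-f", wav_path, "-tdrz"],
        capture_output=True, text=True, check=True,
    ).stdout
    # tinydiarize emits a [SPEAKER_TURN] marker where the speaker changes.
    return [seg.strip() for seg in out.split("[SPEAKER_TURN]") if seg.strip()]
```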

choombaa (Collaborator, Author) commented

Very cool, putting this on my to-do list.
