Get ready state of recognizer #80
Comments
I think I agree it would be nice to have, and maybe that feature is already there and I missed it. But... if you maintain your own ready state and set it to true once all the setup work is done, you should have the same thing. The main asynchronous delay is loading and creating the model, which is then passed to the recognizer.
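For example, a minimal sketch of that approach (here `loadModel` and `createRecognizer` are placeholders for your actual setup calls, not library functions):

```js
// Keep a ready flag alongside the setup work and flip it once everything is done.
let isReady = false;

async function setUpSpeech(loadModel, createRecognizer) {
  const model = await loadModel();            // the main asynchronous delay
  const recognizer = createRecognizer(model); // the model is passed to the recognizer here
  isReady = true;                             // safe to hide the loading circle now
  return recognizer;
}
```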
I have the following React app: main.jsx, recognizer.jsx, and mic.jsx. In recognizer.jsx there is a `useEffect(() => { ... }, [])` that loads the default model when the language changes, and in mic.jsx there is a `useEffect` that depends on `[recognizer]`. Sorry for pasting my code so badly, but the formatting somehow isn't supported here. To explain: once the model has been loaded, the loading flag is set to true, and once the mic is allowed, the ready flag is set.
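Roughly, the hooks look like this (a simplified sketch with both effects shown in one place; in the real app they live in recognizer.jsx and mic.jsx, and `loadModel` / `createRecognizer` are placeholders):

```jsx
import { useEffect, useState } from 'react';

function SpeechSetup({ language, loadModel, createRecognizer }) {
  const [recognizer, setRecognizer] = useState(null);
  const [loading, setLoading] = useState(false); // true once the model has been loaded
  const [ready, setReady] = useState(false);     // true once the mic is allowed

  useEffect(() => {
    loadModel(language).then((model) => {
      setRecognizer(createRecognizer(model));
      setLoading(true);
    });
  }, []); // Load the default model when language changes

  useEffect(() => {
    if (!recognizer) return;
    navigator.mediaDevices.getUserMedia({ audio: true }).then(() => setReady(true));
  }, [recognizer]);

  return loading && ready ? <p>Ready to listen</p> : <p>Loading…</p>;
}
```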
It's a little too much for me to find the issue in that code. I'll say that I would suspect the setter functions (e.g. the ones setting your loading and ready flags) are not being called quite when you expect. I recommend setting breakpoints and stepping through the code in a browser debugger, like Chrome's or Firefox's. You can narrow it down to the exact point of execution where something is happening outside of your expectations.
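For example (reusing the placeholder names from the sketch above), you could drop a `debugger;` statement right where each flag changes and see in which order the pauses actually happen:

```js
// Execution pauses at each debugger statement while DevTools is open,
// which makes the real order of events obvious.
loadModel(language).then((model) => {
  debugger; // pause 1: the model has finished loading
  setRecognizer(createRecognizer(model));
  setLoading(true);
});

navigator.mediaDevices.getUserMedia({ audio: true }).then(() => {
  debugger; // pause 2: the mic has been allowed
  setReady(true);
});
```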
I mean the current code shows the spinner circle until the user allows mic access, but it still takes a few seconds after that before you can actually speak.
I don't trust my eyes and mind to sort out the React-based state logic above. That's why I say it might be useful for you to narrow down the issue with debugging. So for example, one thing I would verify is that the series of events is really like this:

1. The model finishes loading and the recognizer is created.
2. The user allows mic access.
3. The ready flag is set and the spinner is hidden.
4. Speech is recognized right away.

Because maybe you're actually seeing something more like:

1. The user allows mic access and the spinner is hidden.
2. The model is still loading (or the recognizer is still being created) for a few more seconds.
3. Only then does recognition actually start.
I often find unexpected behavior around this kind of async setup and React state logic. If it helps in any way, here is my code handling initialization: https://github.com/erikh2000/sl-web-speech/blob/main/src/speech/Recognizer.ts
Thanks!
One way is to stop sending samples to the recognizer via .acceptWaveform(). So in your audioworklet, you can check a "muted" flag and just not send samples if the flag is set. That should cut way down on CPU. And it also has a nice guarantee that the recognizer isn't continuing to listen in some unexpected way that will make your users upset about privacy.
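A rough sketch of that idea (the processor name, the message shape, and the main-thread forwarding are assumptions about your setup, not something the library dictates):

```js
// audio-feed-processor.js (runs in the AudioWorklet global scope)
class AudioFeedProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.muted = false;
    // The main thread toggles muting by posting { muted: true/false } to this port.
    this.port.onmessage = (event) => { this.muted = event.data.muted; };
  }

  process(inputs) {
    const channel = inputs[0] && inputs[0][0];
    if (!this.muted && channel) {
      // Only forward samples while unmuted; the main thread passes them
      // on to recognizer.acceptWaveform().
      this.port.postMessage(channel);
    }
    return true; // keep the processor alive so unmuting works later
  }
}

registerProcessor('audio-feed', AudioFeedProcessor);
```

On the main thread, muting is then just `workletNode.port.postMessage({ muted: true })`.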
This is a really good question. I started to type an answer, and realized I was guessing beyond my knowledge. With the combination of web workers, WASM, and the React component lifecycle, I'm just not 100% sure.

A hypothesis is that (1) all execution in the recognizer stops when you stop calling .acceptWaveform(), and (2) the memory of the recognizer instance is freed by garbage collection some time after your component unmounts, if your recognizer instance is stored in a variable scoped to the component and nowhere else.

On point #2, I prefer to keep the recognizer instance in a module-scoped variable that isn't bound to a React component. In this way, I can reuse the same recognizer instance even if the user exits a screen and returns. (My app has multiple screens, each rendered by a separate component.)

By module-scoped variable, I mean a declaration of the recognizer instance like `let recognizer = null;` at the top of a module, together with an exported `initRecognizer()` that creates it, as in the sketch below.
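Something like this (a sketch only; `loadModel` and `createRecognizer` again stand in for the real setup calls):

```js
// Module scope: the instance outlives any one component, so the same recognizer
// can be reused when the user leaves a screen and comes back.
let recognizer = null;

export async function initRecognizer() {
  if (recognizer) return recognizer;     // already initialized, just reuse it
  const model = await loadModel();       // placeholder for the real model-loading call
  recognizer = createRecognizer(model);  // placeholder for the real recognizer creation
  return recognizer;
}
```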
In my case I need the recognizer to be a …
Hi,
is there a way to get something like a ready state from the recognizer?
I have the problem that it takes a few seconds until the recognizer is really recognizing the user's speech.
While it is in this not-yet-ready state, I want to show a loading circle until the recognizer is ready for use.
I am using the getUserMedia() function for audio.
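The capture side is roughly just this; the open question is how to know when it is safe to hide the loading circle afterwards:

```js
// Request mic access; recognition still takes a few seconds to become responsive after this.
async function startMic() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  return stream; // the stream is then fed into the recognizer pipeline
}
```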