Recognition Client Closes after a certain amount of time and/or hangup #6
Comments
It definitely sounds like the underlying Client to the Google Speech API is being shut down. My understanding of the Google Speech API clients is that the context passed in when creating the Client governs the underlying connection, whereas the context passed to the StreamingRecognize gRPC call controls that session. If that latter point is not the case, it could explain your results. Have you varied the value of MaxRecognitionDuration to check that the one-minute timeout really is the culprit? I chose that value completely arbitrarily, figuring that in this demo a call would not last more than 60s.
I found the issue.
Instead of just logging that the recognition client closed, the stream should actually be closed here.
There is a defer out.CloseSend() at the top of the function, but the for loop is never exited, so the stream is never closed. This solution helped me.
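For illustration, a minimal sketch of that fix might look like the following. The identifiers, import paths, and loop structure approximate the demo's pipeFromAsterisk function and are not copied from it; the point is simply that the loop must return when the recognition context ends so the deferred CloseSend actually runs.

```go
package main

import (
	"context"
	"io"

	"github.com/CyCoreSystems/audiosocket"
	speechpb "google.golang.org/genproto/googleapis/cloud/speech/v1"
)

// pipeFromAsterisk forwards AudioSocket media to the Speech recognition stream.
func pipeFromAsterisk(ctx context.Context, in io.Reader, out speechpb.Speech_StreamingRecognizeClient) error {
	defer out.CloseSend() // only runs once this function actually returns

	for {
		select {
		case <-ctx.Done():
			// Previously this case only logged "recognition client closed"
			// and stayed in the loop; returning here lets the deferred
			// CloseSend run and ends the stream cleanly.
			return ctx.Err()
		default:
		}

		// Read the next AudioSocket message from Asterisk.
		m, err := audiosocket.NextMessage(in)
		if err != nil {
			return err
		}

		// Forward the audio payload to the recognition stream.
		if err := out.Send(&speechpb.StreamingRecognizeRequest{
			StreamingRequest: &speechpb.StreamingRecognizeRequest_AudioContent{
				AudioContent: m.Payload(),
			},
		}); err != nil {
			return err
		}
	}
}
```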
Aha, good catch. Do you want to open a PR for this fix?
Yes, I will submit a pull request for this issue. I have one more question about the AudioSocket. Currently I have a pod running the audiosocket server, and in the Asterisk config I have all calls going directly to the AudioSocket.
However, I see that both the AGI and ARI applications point to the AudioSocket. Is the correct behavior to have the Asterisk call initiated through the ARI and AGI applications and have them point to the AudioSocket on a new call?
There is no one correct way of using it. As AudioSocket is an Asterisk application, though, it is not possible to call it directly from ARI. AGI, on the other hand, can directly call Asterisk applications. In the simple example you posted, there is no need for either ARI or AGI. There is just a line in the dialplan which sends the call to a new AudioSocket connection.
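For reference, a dialplan entry of that kind might look roughly like the sketch below. The extension number, UUID, hostname, and port are placeholders, not values from the demo; see the Asterisk AudioSocket application docs for the exact argument format.

```
; extensions.conf (sketch; UUID, host, and port are placeholders)
exten => 100,1,Answer()
 same => n,AudioSocket(40325ec2-5efd-4bd3-805f-53576e581d13,audiosocket-server:9092)
 same => n,Hangup()
```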
Awesome, thank you! I will send the pull request soon.
Hello, I am testing out the voice-demo and everything works as expected. The only problem is that, inside the "pipeFromAsterisk" function, at some point after the first call the audiosocket container gets stuck in the loop and continuously prints "recognition client closed". I would like to know whether this is the intended behavior.
What I observed is that there is a MaxCallTimeout of two minutes, but the context that the speech recognition client is created with is the background context. This is the parent context of the audiosocket; it is passed all the way down to the Handle function, which creates a child context from the parent. The child context carries the MaxRecognitionDuration timeout, and this is the context used to create the recognition stream. At one minute, "recognition client closed" is printed; however, if I hang up the phone and call right back, I get an infinite loop of "recognition client closed" printed to the console and nothing to or from Google for the speech.
Is there a fix for this behavior so that there is always a new stream on every new call?
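For reference, the context hierarchy described above can be sketched roughly as follows. The constant names mirror those mentioned in this thread, but the values and overall structure are assumptions for illustration, not code taken from the demo.

```go
package main

import (
	"context"
	"log"
	"time"

	speech "cloud.google.com/go/speech/apiv1"
)

// Names mirror those mentioned in the thread; values are assumptions.
const (
	MaxCallTimeout         = 2 * time.Minute
	MaxRecognitionDuration = 1 * time.Minute
)

func main() {
	// Parent context for the AudioSocket listener; the Speech client is
	// created from this, so it outlives any single call.
	ctx := context.Background()

	svc, err := speech.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer svc.Close()

	// Per-call context, derived from the parent.
	callCtx, cancelCall := context.WithTimeout(ctx, MaxCallTimeout)
	defer cancelCall()

	// Per-recognition context, derived from the call context; this is the
	// one that expires after one minute and ends the recognition stream.
	recCtx, cancelRec := context.WithTimeout(callCtx, MaxRecognitionDuration)
	defer cancelRec()

	stream, err := svc.StreamingRecognize(recCtx)
	if err != nil {
		log.Fatal(err)
	}
	_ = stream // audio would be piped to and from the stream here
}
```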