Chatbot only produces 1 answer despite beam width being 20 #169
Comments
inference.py returns the highest-scoring response. If you look near the bottom of inference.py you will see a line that calls max (not at my PC right now, so I can't confirm the line number); that is what returns only the highest-scoring answer. If you want to print all responses, you'll have to change that section of code. Hope this helps.
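A minimal sketch of what the comment above describes. The variable names and scores here are illustrative assumptions, not copied from the project's inference.py:

```python
# Hypothetical scored candidates, as beam search might produce them.
scored_answers = [("hi there", 0.91), ("hello", 0.85), ("hey", 0.60)]

# A call like this keeps only the single best-scoring answer,
# which is why the chatbot appears to return just one response:
best_answer, best_score = max(scored_answers, key=lambda pair: pair[1])
print(best_answer)

# To surface every candidate, iterate over the full list instead:
for answer, score in scored_answers:
    print(f"{score:.2f}  {answer}")
```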
Does anyone know what line needs to be changed to fix this?
On line 308 in inference.py: if you just want all the answers produced, try changing that line so it emits every answer instead of only the top one. That should print out all the answers that inference returns.
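One way to sketch the change suggested above. The function and variable names are assumptions for illustration; they are not the actual code on line 308 of inference.py:

```python
def all_answers(scored_answers):
    # Instead of returning max(scored_answers, ...), return every
    # candidate, highest score first, so the caller can print them all.
    return sorted(scored_answers, key=lambda pair: pair[1], reverse=True)

# Example candidates a beam search might have produced:
candidates = [("hello", 0.85), ("hi there", 0.91), ("hey", 0.60)]
for answer, score in all_answers(candidates):
    print(f"{score:.2f}  {answer}")
```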
Hey everyone! I finally solved this issue after doing a lot of R&D on the nmt package. The reason your chatbot produces only one response is the hyperparameter 'infer-mode' being set to 'greedy'. Go to setup/settings.py and, in the hparams dict, add a new key 'infer-mode' set to 'beam-search'. This will solve your issue. Thanks!
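A sketch of what that settings change might look like. The key and value strings follow the comment above verbatim; the rest of the dict contents are assumed, so check your own setup/settings.py for the surrounding entries:

```python
# setup/settings.py (fragment)
hparams = {
    # ...existing hyperparameters left as they are...
    'beam_width': 20,
    # Default inference mode is 'greedy', which emits a single answer;
    # switching to beam search makes the beam_width setting take effect.
    'infer-mode': 'beam-search',
}
```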
Hello, I have tried adding new scoring functions, but none of them changed the chatbot's responses. When I checked why, I found that the model only produces one response, which makes the scoring pointless. The beam width in the settings is 20, so as far as I know it should be choosing from a list of 20 candidates. However, when I use the deployed version, integrate it, and ask it to list all the answers to a user input, it only produces one answer.
Thank you for helping :)