Fixed minor typo in README #17
base: main
Conversation
WayneWang86
commented
Jun 13, 2023
- Fixed a typo in variable names
- When using other parameters, prompts must be wrapped in a list for those parameters to take effect
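The second point above can be sketched as a small normalization helper. This is a hypothetical illustration only; the function name and behavior are assumptions for clarity, not the repo's actual API:

```python
def normalize_prompts(text):
    """Wrap a single prompt string in a list so downstream batch-oriented
    parameters apply uniformly whether one or many prompts are passed.
    (Hypothetical helper, not the server's real entry point.)"""
    if isinstance(text, str):
        return [text]
    return list(text)

print(normalize_prompts("Translate this sentence."))    # ['Translate this sentence.']
print(normalize_prompts(["prompt one", "prompt two"]))  # ['prompt one', 'prompt two']
```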
…ts without reloading the model Signed-off-by: WayneWang86 <[email protected]>
Hey @WayneWang86! Thanks for the PR!
Hi Patrick,
I added a scoring function to the server that outputs a probability score
for each token of a given prompt. I have tested it, and we used it for
yesterday's Generative AI tutorial session; the functionality seems to work
as expected. However, there are files like
`inference_server/model_handler/grpc_utils/generation_server.py` and
`inference_server/model_handler/grpc_utils/proto/generation_pb2_grpc.py`
that I made some changes to; I later realized they don't affect the
scoring function, so I commented out that code for now. It would be great
if you could take a look to see whether those changes are necessary!
I can submit a pull request now! As for the original output_score function
with padding, I haven't made solid progress yet, but I will let you know if
I get that working as well!
Best,
Wayne Wang
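For context, the per-token scoring described above boils down to a softmax over each step's logits, indexed at the observed token. Below is a minimal, self-contained sketch of that idea; the names and input shapes are assumptions, not the server's actual implementation:

```python
import math

def token_scores(step_logits, token_ids):
    """Return the probability the model assigned to each observed token.

    step_logits[t] holds the vocabulary logits that predict token_ids[t];
    each score is softmax(step_logits[t])[token_ids[t]].
    (Illustrative sketch only, not the server code.)
    """
    scores = []
    for logits, tok in zip(step_logits, token_ids):
        m = max(logits)                       # subtract max for numerical stability
        exps = [math.exp(x - m) for x in logits]
        scores.append(exps[tok] / sum(exps))
    return scores

# Toy vocabulary of size 2: the second step's logits favor token 1 by a factor of 3.
print(token_scores([[0.0, 0.0], [0.0, math.log(3.0)]], [0, 1]))  # ≈ [0.5, 0.75]
```

In the real server the logits would come from the model's forward pass over the prompt; the softmax-and-index step shown here is the common core of any such scoring function.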
…On Mon, Jun 19, 2023 at 1:03 PM Patrick Fernandes ***@***.***> wrote:
Hey @WayneWang86 <https://github.com/WayneWang86> ! Thanks for the PR!
I had a brief look, and it seems to add scoring functionality? Is this
working already? If so we should try to merge it; I can test it
Cool, yeah, let's work on getting this merged! And again, thanks for the contribution!
Sure! I am outside right now and I can work on this later today!
…On Mon, Jun 19, 2023 at 13:18 Patrick Fernandes ***@***.***> wrote:
Cool, yeah, let's work on getting this merged!
I just merged an old LLaMA PR that was hanging. Can you merge main into
your branch and fix conflicts (and check whether tests/CI pass)? After GitHub
and CI say everything is ready, I'll test it.
And again, thanks for the contribution!
Signed-off-by: WayneWang86 <[email protected]>
merge from neulab-main
Merged update from neulab-main. Added new scoring functions to output probability scores for each token in a given prompt.