After training an LLM, e.g. for the wikitext task, users would expect to be able to prompt the model and generate some text. Right now no such UI is available; we can only test and predict on connected text files.
We could implement a simple text box where users input free text, tweak the generation parameters, and see the model's completion in real time in another text field below.
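A minimal sketch of what "generation parameters" and "completion in real time" could look like behind such a UI. All names here are hypothetical; the real logits would come from the trained model rather than a plain callback. It shows temperature-scaled sampling and a streaming loop whose yielded tokens the UI would append to the output field as they arrive:

```typescript
// Softmax with a temperature knob: lower temperature sharpens the
// distribution, higher temperature flattens it.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((l) => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Draw one token index from a probability distribution.
function sampleIndex(probs: number[], rand = Math.random()): number {
  let acc = 0;
  for (let i = 0; i < probs.length; i++) {
    acc += probs[i];
    if (rand < acc) return i;
  }
  return probs.length - 1; // guard against floating-point shortfall
}

// Streaming generation loop: `nextTokenLogits` stands in for the model's
// forward pass; each yielded token would be decoded and appended to the
// completion text field.
function* generate(
  nextTokenLogits: (context: number[]) => number[],
  prompt: number[],
  opts: { maxNewTokens: number; temperature: number },
): Generator<number> {
  const context = [...prompt];
  for (let i = 0; i < opts.maxNewTokens; i++) {
    const probs = softmaxWithTemperature(nextTokenLogits(context), opts.temperature);
    const token = sampleIndex(probs);
    context.push(token);
    yield token;
  }
}
```

Exposing `temperature` and `maxNewTokens` as form inputs next to the text box would cover the basic parameter-tweaking described above.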
Because of the weird way gpt-tfjs is coded (hijacking tfjs internals, redoing a bunch of tfjs), I'm not really able to find where and how it is triggered. Maybe @JulienVig knows a bit more about that, as you were the last one to try to fix some of its issues.
As previously stated, this gpt-tfjs code is bad; it will need to be rewritten or replaced with ONNX once we have that. Putting this feature on hold in the meantime (code on the 684-add-chat-with-llm-tharvik branch).
After hitting `TypeError: info is undefined` elsewhere, it seems it was due to Vue's proxy wrapping of the model, which does not behave nicely with it. Calling `toRaw` on the model before training with it should work around the issue.
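A minimal sketch of why a reactive proxy breaks code that expects the raw object, using a plain `Proxy` to simulate Vue's `reactive()`/`toRaw()` rather than Vue itself (the `Model` class and the `RAW` escape hatch are illustrative, not the real gpt-tfjs or Vue internals). Methods invoked through the proxy get the proxy as `this`, so private-field access fails, which is the same class of error as the `info is undefined` one; unwrapping first restores the original instance:

```typescript
// Escape-hatch key, analogous in spirit to Vue's internal raw-object marker.
const RAW = Symbol("raw");

// Simulated reactive(): wraps the target in a Proxy, like Vue does.
function reactive<T extends object>(target: T): T {
  return new Proxy(target, {
    get(t, key, receiver) {
      if (key === RAW) return t; // let toRaw() recover the original
      return Reflect.get(t, key, receiver);
    },
  });
}

// Simulated toRaw(): returns the original object behind the proxy, if any.
function toRaw<T extends object>(proxy: T): T {
  return ((proxy as Record<symbol, unknown>)[RAW] as T) ?? proxy;
}

class Model {
  #info = "weights"; // private field, only reachable on the real instance
  describe(): string {
    return this.#info;
  }
}

const model = reactive(new Model());
// model.describe() would throw: `this` is the Proxy, which never declared
// #info, so the private-field lookup fails.
// toRaw(model).describe() works: the raw instance is handed to the method.
```

This is why calling `toRaw` on the model before passing it to the training code sidesteps the proxy entirely.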