Simple LLM for bots #694
For that, you'd need to make the chat function modular, as in a plugin model. The first implementation should be static talk; the second would be the optional LLM talk. Alongside that, it's worth researching how much load a lightweight LLM can carry, and how it does in combination with real-time data. Making a plugin model for playerbots and implementing static chatter through that interface would be a good start and test. You could then build the LLM implementation as an optional layer on top of that.
You can actually make a module that uses ChatGPT via the OpenAI API.
I tried creating a custom GPT-4 with the azerothcore and playerbots source code provided as knowledge. It was better than without the provided source, but it would still frequently hallucinate functions, etc.: https://chatgpt.com/g/g-9bmAeRkA7-azcore-playerbots

GPT o1 has been much better. Even without the ability to give it the repos as knowledge, it was providing excellent code with no hallucinations. The prompt limits are much harsher on this model (50/week for the preview version, 50/day for the lite version), but the answers it provides are very detailed, so I found I wasn't using many prompts to get a decent answer.

Context size still seems to be the main issue when working on complex problems: work on a single function per GPT chat session, and use a new chat session for each function you need to investigate unless they are VERY closely related.
Tried your GPT, but it seems only the source code was uploaded. Though I suspect working with it while debugging might work well? My thought was something like a very small llama model tuned to "talk" like players. Maybe giving commands is too much (e.g. "give me some gold"), but I'm thinking of something like: Player: "/1 what are you guys doing?" Bots: "working on quest xyz". Another reason for not looking into GPT is that I wouldn't want users to have to pay for tokens, and maybe a dumb local model that takes ~4 GB adds to the "fun factor".
Wondering how taxing/difficult it would be to have an LLM trained on wotlk "lingo" and implement it with the bots? Maybe some light 7B instruct LLM, or even modularity between models, defaulting to the SQL "static" chatter.
Just a fun thought. I'll poke at some of the code tomorrow and see how feasible it would be for Claude to help me through it, lol.