
Question related to project termination #46

Open
q5sys opened this issue Sep 13, 2023 · 1 comment

Comments


q5sys commented Sep 13, 2023

I don't mean to pry into personal matters, but I'm curious whether dropping this project was just a life thing, where you don't have time or your interests changed, or whether you ran into a technical problem with it that you couldn't solve.

Since you wrote the code, you know it best, and I was curious whether there was a technical reason before I tried to figure out if someone else could pick up where you left off and take it from there. If there are potential roadblocks you've run into, that would be helpful to know for someone else wanting to revive this project.

Thanks for all the work you did on this.

wawawario2 (Owner) commented Sep 24, 2023

It's due to a combination of reasons, but it boils down to a lack of time to keep this project running, let alone keep it cutting-edge given the constant advancements in the LLM field.

There wasn't a specific technical issue I was experiencing. The core technology is actually conceptually simple: it's really just storing user/bot message embeddings in a vector database, plus a bunch of engineering around that. Although I have my doubts that this approach alone will be enough for a "truly immersive" long-term memory bot, I can see it having its niche, potentially as part of a larger system in some capacity.
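To make that concrete for anyone considering picking this up, here is a minimal sketch of the core loop, assuming sentence-transformers for the embeddings and a plain in-memory store in place of a real vector database; it is illustrative only, not this extension's actual code:

```python
# Minimal sketch (not this extension's actual code): embed each chat
# message, keep the vectors, and recall the most similar past messages
# when building the next prompt.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding backend

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

memories: list[str] = []          # raw user/bot messages
vectors: list[np.ndarray] = []    # their embeddings

def store(message: str) -> None:
    """Embed a message and remember it."""
    memories.append(message)
    vectors.append(model.encode(message, normalize_embeddings=True))

def recall(query: str, k: int = 3) -> list[str]:
    """Return the k stored messages most similar to the query."""
    if not memories:
        return []
    q = model.encode(query, normalize_embeddings=True)
    scores = np.stack(vectors) @ q   # cosine similarity (embeddings are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [memories[i] for i in top]

# The recalled messages would then be injected into the context sent to
# the LLM alongside the current conversation; the "bunch of engineering"
# is mostly around when to store, what to recall, and how to fit it all
# into the prompt budget.
```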

I'd be happy to see someone revive this project. If anyone chooses to do so, here are some possible directions. Please note that I'm not up to date on the latest advancements in LLM tech, so take these directions with a grain of salt.

Research:

  • Right now we get a lot of "filler" text because this extension chooses which memories to keep or reject based on a crude message-length filter. Is there a more clever way to do this? One possible approach is to use the LLM to summarize the last chat session and then store those summaries as embeddings (see the sketch after this list).
  • How would LangChain, SuperCOT, and the like impact the direction of research for this extension?
  • Other stuff in Issues
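
As a rough illustration of the summarize-then-store idea from the first bullet, here is a hedged sketch; `generate` is a hypothetical stand-in for whatever LLM call an implementation would use, not an existing API:

```python
# Hedged sketch of the "summarize the last session, store the summary" idea.
# `generate` is a hypothetical placeholder for the LLM call, not a real API.
def summarize_session(messages: list[str], generate) -> str:
    """Ask the LLM for a compact summary of one chat session."""
    transcript = "\n".join(messages)
    prompt = (
        "Summarize the important facts, preferences, and events from this "
        "chat session in a few sentences:\n\n" + transcript
    )
    return generate(prompt)

# Rather than keeping or rejecting raw messages with a length filter, the
# returned summary would be embedded and stored in the vector database
# like any other memory (e.g. via store() from the earlier sketch).
```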

Engineering:

Misc:

  • What value do people get most from this extension? Many times the value people see differs from the vision of the project owner.
