Improvements
Added a range of new chat models to select from when chatting with the vector database!
Added a range of new vision models to select from when processing images.
Added ChatTTS and Google TTS as text-to-speech backends.
Added hyperlinks to all model cards for vector, chat, and vision models.
Added the ability to restore backups of your databases.
Significantly improved the "Test Vision Model" tool so it can test every vision model.
Revamped the User Manual.
Added a pull-down menu to select the vector model instead of having to navigate to and select a particular folder.
MASSIVE refactoring.
MASSIVE restructuring of setup.py and requirements.txt due to the crazy increase in dependencies. "Dependency hell" is a real thing...
CUDA no longer required (sort of):
Previously, you were required to install CUDA and CUDNN directly from Nvidia, which was by its very nature a system-wide installation.
Now, ALL CUDA and CUDNN-related files are "pip installed" within the setup.py script. This lets you install a different version of CUDA system-wide for other programs, or skip a system-wide CUDA installation entirely.
This was achieved by installing CUDA and CUDNN as Python dependencies and temporarily adding their library locations to the PATH while the program is running.
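A minimal sketch of that PATH handling is below, assuming the NVIDIA wheels (e.g. nvidia-cuda-runtime-cu12 and nvidia-cudnn-cu12) have been pip-installed into the active environment; the package and folder names are illustrative rather than the program's literal code.

```python
# Sketch: prepend the pip-installed CUDA/cuDNN libraries to PATH for this
# process only, so no system-wide CUDA installation is needed.
# Wheel and folder names are illustrative and depend on the CUDA version installed.
import os
import site
from pathlib import Path

def add_cuda_to_path() -> None:
    for site_dir in site.getsitepackages():
        nvidia_root = Path(site_dir) / "nvidia"
        if not nvidia_root.is_dir():
            continue
        # Each NVIDIA wheel (cuda_runtime, cublas, cudnn, ...) ships its own bin/lib folder.
        for sub in ("bin", "lib"):
            for lib_dir in nvidia_root.glob(f"*/{sub}"):
                os.environ["PATH"] = str(lib_dir) + os.pathsep + os.environ.get("PATH", "")

add_cuda_to_path()  # call once at startup, before importing the GPU-dependent libraries
```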
IMPORTANT: restructuring of the model downloading procedure
Previously, the vector models were downloaded using a git clone command, and the other types of models (i.e., chat, vision, TTS, whisper) were automatically downloaded to the system's cache folder.
Now, almost all of the models (except vector models) are downloaded to a specific sub-folder within the Models folder. In future releases all models will be downloaded this way.
The goal is to make the program as portable as possible; for example, copying the "src" folder (and therefore all the models) to a thumb drive to use on a laptop without having to re-download everything. Eventually, all paths to models selected within the program will be "relative" such that wherever you move the src folder (even to a different computer) everything should work just like it did without having to download anything again or change any settings.
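As a rough illustration of this approach (the repo ID, folder names, and helper functions below are placeholders, not the program's actual code), a model can be downloaded into a sub-folder of Models with huggingface_hub and later resolved relative to the src folder:

```python
# Sketch: download a model into a sub-folder of "Models" and resolve its path
# relative to the src folder, so the whole tree can be moved (e.g. to a thumb drive).
# The repo id and folder layout below are placeholders.
from pathlib import Path
from huggingface_hub import snapshot_download

SRC_DIR = Path(__file__).resolve().parent      # .../src
MODELS_DIR = SRC_DIR / "Models"

def download_chat_model(repo_id: str) -> Path:
    target = MODELS_DIR / "chat" / repo_id.split("/")[-1]
    snapshot_download(repo_id=repo_id, local_dir=target)
    return target

def resolve_model_path(relative: str) -> Path:
    # Settings would store paths like "Models/chat/SomeModel"; they stay valid
    # wherever the src folder lives because they are resolved against SRC_DIR.
    return SRC_DIR / relative
```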
Bug Fixes
Fixed the "local model" not being removed from memory even after a different chat model was is loaded.
Fixed the "local model" exponentially increasing memory usage when asking multiple questions (best guess, it was was re-loading the model each time).
Re-added a script that was accidentally deleted from this repository during the last release...
Fixed a huge issue involving sentence-transformers, TileDB, and Langchain that threw a variety of otherwise unfixable errors when trying to create a vector database. This required modifying the sentence-transformers source code as a temporary fix, but everything seems to be working much better, so it may become a permanent fix.
Fixed numerous other bugs.
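For anyone debugging similar memory behavior with local models, the general unloading pattern looks roughly like this (a sketch only; as noted in the credits below, the actual fix runs the model in a separate process):

```python
# Sketch: fully release the current local chat model before loading a new one.
# This is the generic pattern, not the program's exact implementation.
import gc
import torch

_current_model = None  # the currently loaded local chat model, if any

def switch_chat_model(load_fn):
    """Release the previous local model, then load a new one via load_fn()."""
    global _current_model
    if _current_model is not None:
        _current_model = None          # drop the reference so it can be collected
        gc.collect()                   # reclaim the Python-side object
        if torch.cuda.is_available():
            torch.cuda.empty_cache()   # hand cached GPU memory back to the driver
    _current_model = load_fn()
    return _current_model
```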
Known issues
Image search is NOT WORKING but will be fixed in an incremental release.
There's an issue with Langchain specific to TileDB; specifically, the from_documents method. The temporary workaround was to modify the sentence-transformers source code directly. A subsequent patch will likely use the from_texts method instead, but the database seems to be working fine.
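That planned change would essentially hand raw strings and metadata to from_texts instead of Document objects. Roughly (the TileDB import path and the index_uri keyword are assumptions based on the LangChain community vector-store interface, not this project's actual code):

```python
# Sketch: build the TileDB vector store from raw texts/metadata instead of
# Document objects, avoiding the from_documents code path that triggered the errors.
from langchain_community.vectorstores import TileDB

def create_database(documents, embeddings, index_uri):
    texts = [doc.page_content for doc in documents]
    metadatas = [doc.metadata for doc in documents]
    return TileDB.from_texts(
        texts=texts,
        embedding=embeddings,
        metadatas=metadatas,
        index_uri=index_uri,   # assumed TileDB-specific kwarg: where to store the index
    )
```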
Please create an issue with any bugs you encounter!
Credit goes to the new Claude 3.5 Sonnet for finally solving the memory issue with loading/unloading chat models, in a separate process no less.