
Updated llama to the latest GGML commit #21

Open · polkaulfield wants to merge 8 commits into main
Conversation

polkaulfield

No description provided.

dsd and others added 8 commits June 30, 2023 22:03
Set the main default prompt to chat-with-bob from llama.cpp.
This seems to produce much more useful conversations with the llama-7b and
orca-mini-3b models that I have tested.

Also make the reverse prompt consistently "User:" in both default prompt
options, and set the default reverse prompt detection to the same value.
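
For reference, a sketch of how the new default could be expressed in Dart; the prompt text mirrors llama.cpp's prompts/chat-with-bob.txt, and the constant names here are illustrative, not the PR's actual identifiers:

```dart
// Illustrative constant names; the prompt wording follows llama.cpp's
// prompts/chat-with-bob.txt, with "User:" doubling as the reverse prompt.
const String defaultPrePrompt = '''
Transcript of a dialog, where the User interacts with an Assistant named Bob.
Bob is helpful, kind, honest, good at writing, and never fails to answer the
User's requests immediately and with precision.

User: Hello, Bob.
Bob: Hello. How may I help you today?
User: Please tell me the largest city in Europe.
Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
User:''';

const String defaultReversePrompt = 'User:';
```
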
llama.cpp doesn't build for ARM32 because it calls into 64-bit NEON
intrinsics. That's not worth fixing; let's just not offer this app on
ARM32.

Rather than using prebuilt libraries, build the llama.cpp git submodule
during the regular app build process.

The library will now be installed in a standard location, which simplifies
the logic needed to load it at runtime; there is no need to ship it as an
asset.

This works on Android, and also enables the app to build and run on Linux.
Windows build is untested.

One unfortunate side effect is that when the app is built in Flutter's
debug mode, the llama library is compiled unoptimized and runs so slowly
that you might suspect the app is broken. Release mode, however, seems as
fast as before.
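
With the library installed in a standard per-platform location, runtime loading reduces to a plain DynamicLibrary.open. A minimal sketch, assuming the library file is named libllama.so / llama.dll; the actual names in this PR may differ:

```dart
import 'dart:ffi';
import 'dart:io';

// Open the natively built llama library from its standard install location.
// No asset-extraction step is needed anymore. Library names are assumptions.
DynamicLibrary openLlamaLibrary() {
  if (Platform.isAndroid || Platform.isLinux) {
    // On Android the loader finds the .so bundled by the Gradle/CMake build;
    // on Linux it is resolved from the app bundle's lib directory.
    return DynamicLibrary.open('libllama.so');
  }
  if (Platform.isWindows) {
    return DynamicLibrary.open('llama.dll'); // untested, per the commit message
  }
  throw UnsupportedError('Unsupported platform: ${Platform.operatingSystem}');
}
```
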
Update llama.cpp to the latest version as part of an effort to make this
app usable on my Samsung Galaxy S10 smartphone.

The newer llama.cpp includes a fix for a double-close bug (llama.cpp commit
47f61aaa5f76d04) that was causing the app to crash immediately upon starting
the AI conversation.

It also adds support for 3B models, which are considerably smaller. The
llama-7B models were causing Android's low memory killer to terminate
Sherpa after just a few words of conversation, whereas new models such as
orca-mini-3b.ggmlv3.q4_0.bin work on this device without quickly exhausting
all available memory.

llama.cpp's model compatibility has changed in this update, so ggml files
that worked with the previous version are unlikely to work now; they need
converting. However, the orca-mini model mentioned above is already in the
new format and works out of the box.

llama.cpp's API has changed in this update. Rather than rework the Dart
code, I opted to keep this logic in C++, using llama.cpp's example code as
a base. It lives in a new "llamasherpa" library which calls into llama.cpp.
Since lots of data is passed around in large arrays, running it in Dart
likely incurred significant overhead, and this native approach should
perform considerably faster.

This eliminates the need for Sherpa's Dart code to call llama.cpp directly,
so there's no need to maintain a separate modified copy of llama.cpp; we can
use the official upstream.
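
Conceptually, the Dart side now only ships strings across the FFI boundary and lets the C++ side handle tokenization and generation. A sketch of what such a binding could look like; the exported symbol llamasherpa_run and its signature are assumptions, not the PR's actual API:

```dart
import 'dart:ffi';
import 'package:ffi/ffi.dart';

// Hypothetical llamasherpa entry point; the real exported symbols may differ.
// The point is that token arrays stay on the C++ side, and only UTF-8 strings
// cross the FFI boundary.
typedef _RunChatC = Int32 Function(Pointer<Utf8> prompt, Pointer<Utf8> input);
typedef _RunChatDart = int Function(Pointer<Utf8> prompt, Pointer<Utf8> input);

int runChat(DynamicLibrary lib, String prompt, String input) {
  final fn = lib.lookupFunction<_RunChatC, _RunChatDart>('llamasherpa_run');
  final promptPtr = prompt.toNativeUtf8();
  final inputPtr = input.toNativeUtf8();
  try {
    return fn(promptPtr, inputPtr); // large token arrays never enter Dart
  } finally {
    malloc.free(promptPtr);
    malloc.free(inputPtr);
  }
}
```
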
On first run on my Android device, the pre-prompt is empty; it never gets
initialized to any value.

This is because SharedPreferences performs asynchronous disk I/O,
and initDefaultPrompts() uses a different SharedPreferences instance from
getPrePrompts(). There's no guarantee that a preferences update on one
instance will become immediately available in another.

Tweak the logic to not depend on synchronization between two
SharedPreferences instances.
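
One way to express that tweak is to perform the default-initialization and the read through the same instance, so the read is guaranteed to observe the write. A sketch of the idea; the key name and default value are illustrative:

```dart
import 'package:shared_preferences/shared_preferences.dart';

// Use one SharedPreferences instance for both writing the defaults and
// reading them back, instead of relying on two instances staying in sync.
Future<List<String>> loadPrePrompts() async {
  final prefs = await SharedPreferences.getInstance();
  var prompts = prefs.getStringList('prePrompts');
  if (prompts == null || prompts.isEmpty) {
    prompts = ['chat-with-bob']; // illustrative default
    await prefs.setStringList('prePrompts', prompts);
  }
  return prompts;
}
```
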
The llama.cpp logic is built around the prompt ending with the
reverse-prompt and the actual user input being passed separately.

Adjust Sherpa to do the same, rather than appending the first line of
user input to the prompt.
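
A sketch of the adjusted calling convention, with hypothetical names throughout; sendToLlama stands in for whatever wrapper forwards text into the llamasherpa FFI layer:

```dart
// Hypothetical transport into the native llamasherpa layer.
Future<void> sendToLlama({required String prompt, required String input}) async {
  // forwards `prompt` and `input` to the C++ side (see the FFI sketch above)
}

// The pre-prompt must now end with the reverse prompt ("User:"); the user's
// first line is passed separately instead of being appended to the prompt.
Future<void> startConversation(String prePrompt, String firstLine) {
  assert(prePrompt.trimRight().endsWith('User:'));
  return sendToLlama(prompt: prePrompt, input: firstLine);
}
```
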
@danemadsen

I think this repo has been abandoned. I've taken over development and updated the app with a new UI and added support for GGUF models.

https://github.com/MaidFoundation/Maid
