config.js #11
Comments
Hi @jms92100, The …
Thanks for this answer. I'll give it another try today, bringing all the data into one directory to make sure it finds every file needed.
OK, did the test, followed your suggestion on limited memory and tried the 7B. It went through. Then I got the bitsandbytes warning, but it did the job. So I went to f16, quantized, ran the main llama.cpp binary and asked the question...
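For reference, a sketch of the convert/quantize/run steps described above, using llama.cpp commands of that era; script names, the quantization type argument, and paths vary between llama.cpp versions, so treat these as illustrative rather than the exact invocations used:
python3 convert-pth-to-ggml.py ./models/7B/ 1   # 1 = export the checkpoint as ggml f16
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0   # some versions take a numeric type code instead of q4_0
./main -m ./models/7B/ggml-model-q4_0.bin -p "your question here" -n 128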
Got: "Yes, I think that the artistic practices can change the world. The artists are able to create new ideas and concepts which can be used in many different ways. For example, a painter can paint a picture of a person who is doing something very important for his country or his family. This painting can be seen by many people and it will make them think about this person's life." Which is good, but not what I was looking to see. New try...
and got "Instruction:"... OK, it seems that something went wrong, don't know where.
Did some modifications to the parameters and got this >>
Finally, a good answer.
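The thread doesn't say which parameters were changed, but as a hedged illustration, these are the kinds of sampling knobs llama.cpp's main exposes that are typically tuned in this situation; the values and the prompt are placeholders, not the ones actually used above:
./main -m ./models/vigogne/ggml-model-q4_0.bin \
  --temp 0.7 --top_k 40 --top_p 0.9 --repeat_penalty 1.1 \
  -n 256 -p "### Instruction: ..."
Lower temperature and a repeat penalty above 1.0 usually make instruction-tuned models stick closer to the question instead of drifting.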
Hello,
I moved the llama.cpp models into the vigogne directory
and ran python3.10 ./scripts/convert_llama_weights_to_hf.py --input_dir ./models --model_size 13B --output_dir ./models/vigogne
Fetching all parameters from the checkpoint at ./models/13B.
it shows me >>>>> Killed
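A hedged aside: "Killed" at this step usually means the Linux OOM killer stopped the process, since converting 13B loads tens of GB into RAM, which is easy to exceed under WSL. The script also expects a specific input layout, with the tokenizer at the root of --input_dir and the size folder next to it; a quick check (paths illustrative):
ls ./models
# tokenizer.model  13B/
ls ./models/13B
# consolidated.00.pth  consolidated.01.pth  params.json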
I assumed that my consolidated.00.pth and consolidated.01.pth files were already in the expected format, so I tried
python3.10 ./scripts/export_state_dict_checkpoint.py --base_model_name_or_path ./models/13B --lora_model_name_or_path "bofenghuang/vigogne-lora-13b" --base_model_size 13B --output_dir ./models/vigogne
and there...
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
In short, it says that bitsandbytes was not compiled with GPU support... and finally it tells me
OSError: ./models/13B does not appear to have a file named config.json.
I looked everywhere for this config.json but found nothing.
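A hedged reading of the OSError: config.json is generated by the Hugging Face conversion step, not shipped with the raw LLaMA checkpoints, so --base_model_name_or_path would have to point at the output of convert_llama_weights_to_hf.py rather than at ./models/13B. A converted directory (path hypothetical, shard count approximate) would look roughly like:
ls ./models/13B-hf
# config.json  generation_config.json  pytorch_model-00001-of-00003.bin  ...
# tokenizer.model  tokenizer_config.json  special_tokens_map.json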
For context, I'm on WSL under Windows 11, so I have an Ubuntu 22.04 environment that lets me run llama.cpp with 7B, 13B, 30B and gpt4all,
but installing vigogne is proving difficult.
What am I missing? Any ideas?