
windows 10, fails to get to chat prompt #209

Closed
thistleknot opened this issue Apr 7, 2023 · 8 comments
@thistleknot

(webgpt) H:\alpaca.cpp\Release>hello
'hello' is not recognized as an internal or external command,
operable program or batch file.

(webgpt) H:\alpaca.cpp\Release>chat.exe
main: seed = 1680832932
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.34 MB
llama_model_load: memory_size = 2048.00 MB, n_mem = 65536
llama_model_load: loading model part 1/1 from 'ggml-alpaca-7b-q4.bin'
llama_model_load: .................................... done
llama_model_load: model size = 4017.27 MB / num tensors = 291

system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 |
main: interactive mode on.
sampling parameters: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000

@usamaehsan

Facing the same issue. Have you solved it?

@simsim314

I guess you just need to wait; it takes time, but eventually it finishes. I mean up to half an hour, so go grab a coffee. I'm not sure that this is the case, but it's the most probable one. On my computer it takes 10 minutes, with the 13B model, to get to the start of the chat.

P.S. This was the important part:

(webgpt) H:\alpaca.cpp\Release>hello
'hello' is not recognized as an internal or external command,
operable program or batch file.

@thistleknot

thistleknot commented Apr 7, 2023 via email

@betolley

betolley commented Apr 7, 2023

What CPU do you use? My i7 didn't support AVX2, and this happened to me. I changed the CMakeLists
from

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /arch:AVX2")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /arch:AVX2")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /arch:AVX2")

to

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /arch:AVX")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /arch:AVX")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /arch:AVX")

On a virtual machine I even dropped the /arch flag entirely:

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE}")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}")

@thistleknot

thistleknot commented Apr 7, 2023 via email

@aka4el

aka4el commented Apr 7, 2023

I struggled with the same problem; see #204 (comment) and #160 (comment).

@thistleknot

That did it
(screenshot attached)

Curious: does anyone know how to get this to run on CUDA? My CUDA-capable GPU has 4GB of VRAM...

Could I in theory compile this with nvcc? (I'm sure the code would have to be refactored.) I have yet to see someone post a CUDA-enabled alpaca; all I see are these CPU builds that claim they will run on phones, yet they struggle to run on my i7, I suppose due to its lack of AVX2.

@Red007Master

> What CPU do you use? My i7 didn't support AVX2, and this happened to me. I changed the CMakeLists from
>
> set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /arch:AVX2")
> set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /arch:AVX2")
> set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /arch:AVX2")
>
> to
>
> set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /arch:AVX")
> set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /arch:AVX")
> set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /arch:AVX")
>
> On a virtual machine I even dropped the /arch flag entirely:
>
> set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
> set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE}")
> set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}")

Thanks, this worked for me; indeed, my CPU did not support AVX2.
