Windows 10: fails to get to chat prompt #209
Comments
Facing the same issue. Have you solved it?
I guess you just need to wait; it takes time, but eventually it finishes. I mean half an hour: go grab a coffee. Not sure that this is the case, but it is the most probable one. It takes 10 minutes on my machine, with 13B, to get to the start of the chat.
P.S. This was an important part:
(webgpt) H:\alpaca.cpp\Release>hello
The problem is it drops me back to a command prompt
What CPU do you use? My i7 didn't support AVX2 and this happened to me. I changed the CMakeLists from

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /arch:AVX2")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /arch:AVX2")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /arch:AVX2")

to

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /arch:AVX")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /arch:AVX")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /arch:AVX")

On a virtual machine I even dropped the /arch flag entirely:

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE}")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}")
i7-3740QM. I don't see AVX2, just AVX.
I'll try this and let you know, thx.
See also #204 (comment) and #160 (comment), which struggled with the same problem.
Curious, does anyone know how to get this to run on CUDA? My CUDA-capable GPU has 4 GB of VRAM. Could I in theory compile this with nvcc? (I'm sure the code would have to be refactored.) I have yet to see someone post a CUDA-enabled alpaca; all I see are these CPU builds saying they will run on phones, yet they struggle to run on my i7, presumably because it lacks AVX2.
Thanks, worked for me; indeed my CPU did not support AVX2.
(webgpt) H:\alpaca.cpp\Release>hello
'hello' is not recognized as an internal or external command,
operable program or batch file.
(webgpt) H:\alpaca.cpp\Release>chat.exe
main: seed = 1680832932
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.34 MB
llama_model_load: memory_size = 2048.00 MB, n_mem = 65536
llama_model_load: loading model part 1/1 from 'ggml-alpaca-7b-q4.bin'
llama_model_load: .................................... done
llama_model_load: model size = 4017.27 MB / num tensors = 291
system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 |
main: interactive mode on.
sampling parameters: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000