
Error when running the llama2-7b_int8_1dev.bmodel model #43

Open
17656178609 opened this issue Jul 3, 2024 · 2 comments

Comments

@17656178609

./llama2.soc llama2-7b_int8_1dev.bmodel
Demo for LLama2-7B in BM1684X
Init Environment ...
Load tokenizer.model ... Done!
Device [ 0 ] loading ....
[BMRT][bmcpu_setup:406] INFO:cpu_lib 'libcpuop.so' is loaded.
bmcpu init: skip cpu_user_defined
open usercpu.so, init user_cpu_init
Model[llama2-7b_int8_1dev.bmodel] loading ....
[BMRT][load_bmodel:1084] INFO:Loading bmodel from [llama2-7b_int8_1dev.bmodel]. Thanks for your patience...
[BMRT][load_bmodel:1023] INFO:pre net num: 0, load net num: 66
[BMODEL][read_binary:461] FATAL: size + offset <= binary->size()
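The fatal assertion `size + offset <= binary->size()` in `read_binary` fires when the loader tries to read past the end of the bmodel file, which most often indicates a truncated or corrupted file on disk (for example, an interrupted download or copy). A minimal sketch for checking the file before re-running the demo, assuming standard coreutils are available; the path is the model file named in this issue, and the size/checksum should be compared against whatever values were published alongside the model:

```shell
# Assumption: llama2-7b_int8_1dev.bmodel sits in the current directory;
# adjust the path if your copy lives elsewhere.

# 1) Byte size: a 7B int8 bmodel should be several GB;
#    a much smaller file is almost certainly truncated.
ls -l llama2-7b_int8_1dev.bmodel

# 2) Checksum: compare against the md5 published with the model, if one is provided.
md5sum llama2-7b_int8_1dev.bmodel
```

If either value differs from the published one, re-download or re-copy the model before investigating SDK versions or memory settings.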

@sophon-leevi
Collaborator

Please provide the full reproduction steps, including but not limited to:
1. The test platform.
2. The SDK version under test, including the libsophon version; you can check it with the bm-smi and bm_version commands (bm_version is only available in SoC mode).
3. The TPU memory size reported by bm-smi.
4. Whether the memory layout was modified as required by the README.

@17656178609
Author

[screenshot: bm-smi output]
1. The test platform is AiBox-1684x.
2. The bm_version output is as follows; the bm-smi output is shown in the screenshot above.
SophonSDK version: 23.09 LTS
sophon-soc-libsophon : 0.4.9
sophon-soc-libsophon-dev : 0.4.9
sophon-mw-soc-sophon-ffmpeg : 0.8.0
sophon-mw-soc-sophon-opencv : 0.8.0
BL2 
BL31 psys-sdhc
KernelVersion : Linux aibox-bm1684x 5.4.217-bm1684 #3 SMP Tue Apr 23 10:23:57 CST 2024 aarch64 aarch64 aarch64 GNU/Linux
HWVersion: 0x00
MCUVersion: 0x02
3. I modified the memory layout following the minicpm instructions, as shown below; I did not find a README for llama2 that describes memory changes.
[screenshot: memory layout configuration]
