From 2fd2a0b3660f8d62bbe25576ab67e91c3ac15bde Mon Sep 17 00:00:00 2001 From: wenkel1x Date: Thu, 16 May 2024 11:05:59 +0800 Subject: [PATCH 1/3] add CompressWeights mode INT4 failure description --- llm_bench/python/doc/NOTES.md | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/llm_bench/python/doc/NOTES.md b/llm_bench/python/doc/NOTES.md index 90968abf74..3cbb6de85b 100644 --- a/llm_bench/python/doc/NOTES.md +++ b/llm_bench/python/doc/NOTES.md @@ -61,4 +61,15 @@ Solution: update `tokenization_baichuan.py` as follows:
- self.add_eos_token = add_eos_token - self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) - self.sp_model.Load(vocab_file) -``` \ No newline at end of file +``` + +## CompressWeights Mode INT4 - ConnectionError: Couldn't reach 'wikitext' on the Hub (SSLError) +When downloading an LLM from Hugging Face, converting it to OpenVINO IR files with convert.py, and compressing weights to INT4, the following error may occur: +```bash +raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})") +ConnectionError: Couldn't reach 'wikitext' on the Hub (SSLError) +``` +Root cause: the wikitext dataset was not downloaded correctly, or the Hugging Face Hub could not be reached.
+Solution:
+Your data can be stored in various places: on your local machine's disk, in a GitHub repository, or in in-memory data structures like Python dictionaries and Pandas DataFrames. Wherever a dataset is stored, the Datasets library can help you load it locally; +for how to load a dataset from local files, please refer to https://huggingface.co/docs/datasets/loading#arrow
From 3a3f4523350f5acb1ff2b223c7b9154ed0eab223 Mon Sep 17 00:00:00 2001 From: wenkel1x Date: Mon, 27 May 2024 10:24:57 +0800 Subject: [PATCH 2/3] add detailed description --- llm_bench/python/doc/NOTES.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/llm_bench/python/doc/NOTES.md b/llm_bench/python/doc/NOTES.md index 3cbb6de85b..14c66bbe70 100644 --- a/llm_bench/python/doc/NOTES.md +++ b/llm_bench/python/doc/NOTES.md @@ -71,5 +71,4 @@ ConnectionError: Couldn't reach 'wikitext' on the Hub (SSLError) ``` Root cause: the wikitext dataset was not downloaded correctly, or the Hugging Face Hub could not be reached.
Solution:
-Your data can be stored in various places: on your local machine's disk, in a GitHub repository, or in in-memory data structures like Python dictionaries and Pandas DataFrames. Wherever a dataset is stored, the Datasets library can help you load it locally; -for how to load a dataset from local files, please refer to https://huggingface.co/docs/datasets/loading#arrow
+please Refer to https://huggingface.co/docs/datasets/loading#arrow , copy wikitest data cache set to ~/.cache/huggingface/datasets/ folder, Set the environment variable HF_DATASETS_OFFLINE to 1 to enable full offline mode. \ No newline at end of file From da31f28959099016c92d9010b65de1c5802f1b73 Mon Sep 17 00:00:00 2001 From: Chen Peter Date: Mon, 27 May 2024 22:34:18 +0800 Subject: [PATCH 3/3] Update llm_bench/python/doc/NOTES.md --- llm_bench/python/doc/NOTES.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/llm_bench/python/doc/NOTES.md b/llm_bench/python/doc/NOTES.md index 14c66bbe70..8d84b4e8c8 100644 --- a/llm_bench/python/doc/NOTES.md +++ b/llm_bench/python/doc/NOTES.md @@ -71,4 +71,4 @@ ConnectionError: Couldn't reach 'wikitext' on the Hub (SSLError) ``` Root cause: the wikitext dataset was not downloaded correctly, or the Hugging Face Hub could not be reached.
Solution:
-please Refer to https://huggingface.co/docs/datasets/loading#arrow , copy wikitest data cache set to ~/.cache/huggingface/datasets/ folder, Set the environment variable HF_DATASETS_OFFLINE to 1 to enable full offline mode. \ No newline at end of file +Refer to https://huggingface.co/docs/datasets/loading#arrow, copy the cached wikitext dataset to the ~/.cache/huggingface/datasets/ folder, and set the environment variable HF_DATASETS_OFFLINE to 1 to enable full offline mode. \ No newline at end of file
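For reference, a minimal sketch of the offline workaround described in the final patch. It assumes the compression run requests the `wikitext-2-raw-v1` config; the actual config name depends on your compression settings:
```bash
# On a machine with Hub access: download wikitext into the local
# datasets cache (~/.cache/huggingface/datasets/ by default).
python -c "from datasets import load_dataset; load_dataset('wikitext', 'wikitext-2-raw-v1')"

# Copy ~/.cache/huggingface/datasets/ to the offline machine, then:
export HF_DATASETS_OFFLINE=1   # datasets reads only from the local cache
```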