diff --git a/README.md b/README.md
index 4e6e254..87dda6e 100644
--- a/README.md
+++ b/README.md
@@ -75,7 +75,7 @@ BERT-based models:
 
 | 🤗 HF | Max Tokens | Pooling Strategy | Scenario |
 |----|------|------|------|
 | [WhereIsAI/UAE-Large-V1](https://huggingface.co/WhereIsAI/UAE-Large-V1) | 512 | cls | English, General-purpose |
-| [WhereIsAI/UAE-Code-Large-V1](https://huggingface.co/WhereIsAI/UAE-Large-V1) | 512 | cls | Code Similarity |
+| [WhereIsAI/UAE-Code-Large-V1](https://huggingface.co/WhereIsAI/UAE-Code-Large-V1) | 512 | cls | Code Similarity |
 
 LLM-based models:
diff --git a/angle_emb/angle.py b/angle_emb/angle.py
index f674681..31111eb 100644
--- a/angle_emb/angle.py
+++ b/angle_emb/angle.py
@@ -1146,6 +1146,9 @@ def __init__(self,
             self.apply_lora = True
             logger.info('LLM detected, automatically set apply_lora=True.'
                         'If it is wrong, you can manually set `apply_lora`.')
+        if pretrained_lora_path is not None:
+            self.apply_lora = True
+
         if self.device == 'cuda':
             self.gpu_count = torch.cuda.device_count()
         elif self.device == 'mps':
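
The `angle.py` change makes loading a pretrained LoRA adapter force `apply_lora=True`, so adapter weights are applied even when the backbone is not auto-detected as an LLM. A minimal sketch of that resolution logic, using a simplified stand-in function (`resolve_apply_lora` is hypothetical, not the real `angle_emb` API):

```python
def resolve_apply_lora(is_llm: bool, apply_lora=None, pretrained_lora_path=None) -> bool:
    """Mirror the __init__ logic: default to LoRA for LLM backbones,
    and always enable it when a pretrained LoRA adapter path is given."""
    if is_llm and apply_lora is None:
        apply_lora = True  # LLM detected -> automatically set apply_lora=True
    if pretrained_lora_path is not None:
        apply_lora = True  # the fix: adapter weights require LoRA to be applied
    return bool(apply_lora)

# Before the fix, a BERT backbone with a LoRA checkpoint would skip LoRA:
resolve_apply_lora(is_llm=False, pretrained_lora_path='ckpt/lora')  # now True
```

This keeps the LLM auto-detection behavior unchanged while closing the gap for non-LLM models loaded with a LoRA checkpoint.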