diff --git a/en/ai/ai-providers-and-api-keys.md b/en/ai/ai-providers-and-api-keys.md
index 796de3c4d..475fa4201 100644
--- a/en/ai/ai-providers-and-api-keys.md
+++ b/en/ai/ai-providers-and-api-keys.md
@@ -63,8 +63,8 @@ Now you need to copy and paste it in JabRef preferences. To do this:
 1. Launch JabRef
 2. Go "File" -> "Preferences" -> "AI" (a new tab!)
 3. Check "Enable chatting with PDFs"
-3. Paste the key into "OpenAI token"
-9. Click "Save"
+4. Paste the key into "OpenAI token"
+5. Click "Save"
 
 If you have some money on your credit balance, you can chat with your library!
 
@@ -90,8 +90,8 @@ You don't have to pay any cent for Hugging Face in order to send requests to LLM
 1. Launch JabRef
 2. Go "File" -> "Preferences" -> "AI" (a new tab!)
 3. Check "Enable chatting with PDFs"
-3. Paste the key into "OpenAI token"
-9. Click "Save"
+4. Paste the key into "OpenAI token"
+5. Click "Save"
 
 If you have some money on your credit balance, you can chat with your library!
diff --git a/en/ai/local-llm.md b/en/ai/local-llm.md
index 0163e13fd..25e15d2f3 100644
--- a/en/ai/local-llm.md
+++ b/en/ai/local-llm.md
@@ -1,6 +1,7 @@
 ## BONUS: running a local LLM model
 
 Notice:
+
 1. This tutorial is intended for expert users
 2. Local LLM model requires a lot of computational power
 3. Smaller models typically have less performance then bigger ones like OpenAI models
@@ -10,6 +11,7 @@ Notice:
 
 You can use any program that will create a server with OpenAI compatible API.
 
 After you started your service, you can do this:
+
 1. The "Chat Model" field in AI preference is editable, so you can write any model you have downloaded
 2. There is a field "API base URL" in "Expert Settings" where you need to supply the address of an OpenAI API compatible server
diff --git a/en/ai/preferences.md b/en/ai/preferences.md
index 67d01cde8..4d9cc8cff 100644
--- a/en/ai/preferences.md
+++ b/en/ai/preferences.md
@@ -8,7 +8,7 @@ Here are some new options in the JabRef preferences.
 
 - "AI provider": you can choose either OpenAI, Mistral AI, or Hugging Face
 - "Chat model": choose the model you like (for OpenAI we recommend `gpt-4o-mini`, as it the cheapest and fastest)
 - "API token": here you write your API token
-- "Expert settings": here you can change the parameters that affect how AI will generate your answers. If you don't understand the meaning of those settings, don't worry! We have experimented a lot and found the best parameters for you! But if you are curious, then you can refer to [user documentation]()
+- "Expert settings": here you can change the parameters that affect how AI will generate your answers. If you don't understand the meaning of those settings, don't worry! We have experimented a lot and found the best parameters for you! But if you are curious, then you can refer to the AI expert settings section.
 
 ## AI expert settings
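
The `local-llm.md` hunk above tells the user to supply an "API base URL" for an OpenAI API compatible server, but the patch gives no example value. As a sketch, here are typical defaults for common local servers; the server names and ports below are assumptions based on their own documentation, not part of this patch, so verify them against your setup:

```
# Example "API base URL" values for OpenAI-compatible local servers
# (documented default ports; adjust if you changed your server config)
Ollama:            http://localhost:11434/v1
llama.cpp server:  http://localhost:8080/v1
LM Studio:         http://localhost:1234/v1
```

Whichever server you run, the value pasted into "API base URL" typically ends at the `/v1` prefix; an OpenAI-compatible client appends the concrete endpoint paths (such as `/chat/completions`) itself.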