Fix from linter
InAnYan committed Aug 12, 2024
1 parent 79cea79 commit 8984d5b
Showing 3 changed files with 7 additions and 5 deletions.
8 changes: 4 additions & 4 deletions en/ai/ai-providers-and-api-keys.md
@@ -63,8 +63,8 @@ Now you need to copy and paste it in JabRef preferences. To do this:
 1. Launch JabRef
 2. Go "File" -> "Preferences" -> "AI" (a new tab!)
 3. Check "Enable chatting with PDFs"
-3. Paste the key into "OpenAI token"
-9. Click "Save"
+4. Paste the key into "OpenAI token"
+5. Click "Save"

If you have some money on your credit balance, you can chat with your library!

@@ -90,8 +90,8 @@ You don't have to pay a cent to Hugging Face in order to send requests to LLMs
 1. Launch JabRef
 2. Go "File" -> "Preferences" -> "AI" (a new tab!)
 3. Check "Enable chatting with PDFs"
-3. Paste the key into "OpenAI token"
-9. Click "Save"
+4. Paste the key into "OpenAI token"
+5. Click "Save"

If you have some money on your credit balance, you can chat with your library!

2 changes: 2 additions & 0 deletions en/ai/local-llm.md
@@ -1,6 +1,7 @@
## BONUS: running a local LLM model

Check failure on line 1 in en/ai/local-llm.md (GitHub Actions / lint): MD041/first-line-heading/first-line-h1 First line in a file should be a top-level heading [Context: "## BONUS: running a local LLM ..."]

Notice:

1. This tutorial is intended for expert users
2. Running a local LLM requires a lot of computational power
3. Smaller models typically perform worse than bigger ones like the OpenAI models
@@ -10,6 +11,7 @@ Notice:
You can use any program that creates a server with an OpenAI-compatible API.

After you have started your service, you can do this:

1. The "Chat Model" field in AI preference is editable, so you can write any model you have downloaded
2. There is a field "API base URL" in "Expert Settings" where you need to supply the address of an OpenAI API compatible server
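The two settings above can be sketched as the request a client would build against an OpenAI-compatible server. This is a hypothetical illustration: the base URL `http://localhost:8080/v1` and the model name `my-local-model` are placeholders, not values JabRef ships with.

```python
import json
from urllib.parse import urljoin

# Placeholder values: substitute whatever your local server actually uses.
api_base = "http://localhost:8080/v1/"  # goes into "API base URL" in "Expert Settings"
model = "my-local-model"                # goes into the editable "Chat Model" field

# OpenAI-compatible servers expose chat completions under this path.
endpoint = urljoin(api_base, "chat/completions")

payload = json.dumps({
    "model": model,
    "messages": [{"role": "user", "content": "Summarize the attached paper."}],
})

print(endpoint)  # http://localhost:8080/v1/chat/completions
```

Any server that answers this request shape (llama.cpp's server, Ollama, and similar tools do) should work with the two fields above.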

2 changes: 1 addition & 1 deletion en/ai/preferences.md
@@ -8,7 +8,7 @@ Here are some new options in the JabRef preferences.
- "AI provider": you can choose either OpenAI, Mistral AI, or Hugging Face
- "Chat model": choose the model you like (for OpenAI we recommend `gpt-4o-mini`, as it the cheapest and fastest)
- "API token": here you write your API token
- "Expert settings": here you can change the parameters that affect how AI will generate your answers. If you don't understand the meaning of those settings, don't worry! We have experimented a lot and found the best parameters for you! But if you are curious, then you can refer to [user documentation]()
- "Expert settings": here you can change the parameters that affect how AI will generate your answers. If you don't understand the meaning of those settings, don't worry! We have experimented a lot and found the best parameters for you! But if you are curious, then you can refer to the AI expert settings section.

## AI expert settings

