Groq compatibility? #136
Hi, I access llama3-70b through Groq using this Python tool. I was hoping to also use Groq with gp.nvim. Any plans to support this?

Comments
For me, adding the groq provider works. Example:
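A minimal sketch of such a provider entry (assuming Groq's OpenAI-compatible chat completions endpoint and an API key exposed via a GROQ_API_KEY environment variable; the agent name and model are illustrative, based on the configs shared further down in this thread):

-- minimal gp.nvim setup with a groq provider and one chat agent (sketch, not a drop-in config)
require("gp").setup({
	providers = {
		groq = {
			disable = false,
			endpoint = "https://api.groq.com/openai/v1/chat/completions",
			secret = os.getenv("GROQ_API_KEY"),
		},
	},
	agents = {
		{
			name = "ChatGroqLlama",
			provider = "groq",
			chat = true,
			command = false,
			-- table with model name and parameters
			model = { model = "llama-3.1-70b-versatile", temperature = 0.6, top_p = 1 },
			system_prompt = require("gp.defaults").chat_system_prompt,
		},
	},
})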
Can't get it to work on my nvim.
@yuukibarns could you provide your gp configuration and outputs from GpInspect? Just be careful not to paste your secrets.
The error message is in .html format, which is hard to read, so I ignored it before. It told me: "Why have I been blocked? This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block, including submitting a certain word or phrase, a SQL command or malformed data."
@yuukibarns the message is from Cloudflare; I can't provide more help without your config and GpInspect data.
Plugin structure:

{
"robitx/gp.nvim",
config = function()
local conf = {
providers = {
groq = {
disable = false,
endpoint = "https://api.groq.com/openai/v1/chat/completions",
secret = os.getenv("GROQ_API_KEY"),
},
openai = {
disable = true,
endpoint = "https://api.openai.com/v1/chat/completions",
-- secret = os.getenv("OPENAI_API_KEY"),
},
},
agents = {
{
name = "ChatGroqLlama3.1-70B",
provider = "groq",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = {
model = "llama-3.1-70b-versatile",
temperature = 0.6,
top_p = 1,
min_p = 0.05,
},
system_prompt = require("gp.defaults").chat_system_prompt,
},
{
name = "CodeGroqLlama3.1-70B",
provider = "groq",
chat = false,
command = true,
model = {
model = "llama-3.1-70b-versatile",
temperature = 0.4,
top_p = 1,
min_p = 0.05,
},
system_prompt = require("gp.defaults").code_system_prompt,
},
},
}
require("gp").setup(conf)
-- Setup shortcuts here (see Usage > Shortcuts in the Documentation/Readme)
end,
},
@yuukibarns yep, sadly it looks like a geo-fence issue 🙁 https://github.com/groq/groq-python/issues/32 I've had similar troubles with Gemini in the EU. If you can get a proxy that fakes your location to the US, you could use:

-- optional curl parameters (for proxy, etc.)
-- curl_params = { "--proxy", "http://X.X.X.X:XXXX" }
curl_params = {},
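For reference, a minimal sketch of where that option could go, assuming it is a top-level entry in the setup table as in the snippet above (the proxy address is a placeholder, substitute your own):

require("gp").setup({
	-- optional curl parameters applied to requests (for proxy, etc.);
	-- placeholder address below, not a real proxy
	curl_params = { "--proxy", "http://127.0.0.1:8080" },
	providers = {
		groq = {
			endpoint = "https://api.groq.com/openai/v1/chat/completions",
			secret = os.getenv("GROQ_API_KEY"),
		},
	},
})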
Thanks for your analysis.
local env = require "env"
local OPENAI_KEY = env.OPENAI_KEY
local GROQ_KEY = env.GROQ_KEY
local OPENAI_HOST = "https://api.openai.com/v1/chat/completions"
local GROQ_HOST = "https://api.groq.com/openai/v1/chat/completions"
local GROQ_AUDIO = "https://api.groq.com/openai/v1/audio/transcriptions"
-- Gp (GPT prompt) lua plugin for Neovim
-- https://github.com/Robitx/gp.nvim/
--------------------------------------------------------------------------------
-- Default config
--------------------------------------------------------------------------------
---@class GpConfig
-- README_REFERENCE_MARKER_START
local config = {
providers = {
openai = {
disable = false,
endpoint = OPENAI_HOST,
secret = OPENAI_KEY,
},
groq = {
disable = false,
endpoint = GROQ_HOST,
secret = GROQ_KEY,
},
},
chat_shortcut_respond = { modes = { "n", "i", "v", "x" }, shortcut = "<C-g><cr>" },
chat_confirm_delete = false,
-- prefix for all commands
cmd_prefix = "Gp",
default_chat_agent = "GroqLLAMA_8B",
whisper = {
-- TODO: in the future, when gp.nvim supports whisper options
-- endpoint = GROQ_AUDIO,
-- secret = GROQ_KEY,
},
agents = {
{
provider = "openai",
name = "ChatGPT4o",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "gpt-4o", temperature = 0.8, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "You are an AI working as a code editor.\n\n"
.. "Please AVOID COMMENTARY OUTSIDE OF THE SNIPPET RESPONSE.\n"
.. "START AND END YOUR ANSWER WITH:\n\n```",
},
{
provider = "openai",
name = "ChatGPT4o-mini",
chat = true,
command = true,
-- string with model name or table with model name and parameters
model = { model = "gpt-4o-mini", temperature = 0.8, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "You are an AI working as a code editor.\n\n"
.. "Please AVOID COMMENTARY OUTSIDE OF THE SNIPPET RESPONSE.\n"
.. "START AND END YOUR ANSWER WITH:\n\n```",
},
{
provider = "groq",
name = "GroqLLAMA_8B",
chat = true,
command = true,
-- string with model name or table with model name and parameters
model = { model = "llama-3.1-70b-versatile", temperature = 0.8, top_p = 1 },
system_prompt = "You are an AI helping the user with code and other tasks\n\n"
.. "Please AVOID COMMENTARY OUTSIDE OF THE SNIPPET RESPONSE.\n",
},
{
provider = "groq",
name = "GroqLLAMA_8B",
chat = true,
command = true,
-- string with model name or table with model name and parameters
model = { model = "llama-3.2-11b-text-preview", temperature = 0.8, top_p = 1 },
system_prompt = "Given a task or problem, please provide a concise and well-formatted solution or answer.\n\n"
.. "Please keep your response within a code snippet, and avoid unnecessary commentary.\n",
}
},
}
--
return {
"robitx/gp.nvim",
event = "BufEnter",
config = function() require("gp").setup(config) end,
}

Oh man, this plugin with Groq is just insane! It's very, very fast. Makes it feel like a good Supermaven companion, which is another tool I use. Note that ...