
Send multiple files into the chat #206

Open
UreshiiPanda opened this issue Sep 8, 2024 · 4 comments

Comments

@UreshiiPanda

Looks like an awesome plugin, but after reading through the README it wasn't apparent to me how I can send whole files into the chat for the chatbot to reference when answering questions. Is this not a feature of gp.nvim?

Robitx added a commit that referenced this issue Sep 8, 2024
- starting with @context_file (issue: #206)
@qaptoR

qaptoR commented Sep 11, 2024

Take a look at this branch of my fork.
https://github.com/qaptoR-nvim/gp.nvim/tree/insert_context

I adapted the work done in this PR: #174

Basically, I felt the PR went too far and overcomplicated itself by trying to be able to reference a specific function in a code file. A truly amazing feat that they accomplished, but definitely not something I personally need.

I took the basic template of their @code:relative/path/tofile, which adds triple-backtick fences around the imported code and references the file name. Then I added @text:..., which does the same thing but doesn't reference the filename or put backtick fences around the included context.

I also added @codedir and @textdir, which include all files in a directory path (all paths are relative to the cwd) using the same strategies as the non-'dir' commands.

Lastly, I added an @import:... command, which is recursive: it searches the file at the given path for other commands and stitches them together into a single import. So you can create different binders of files to include together for different contexts.

The other thing I did differently was make the imported text always appear IN the message where the command is present, AND at the top of that message. So it's always necessary to say 'in the context above' rather than 'in the following context'. I'm not sure if there are any studies showing which strategy leads to better results, but I personally think it generally makes no difference to the LLM where the context is, so long as it is included and can be referenced. And the current strategy leads to simpler code.
In general this means the conversation remains consistent for each successive query, and past contexts will always be available in future queries. I do recommend using the cheaper models like Haiku, GPT-4o-mini, or Gemini Flash, because the added contexts get really expensive otherwise.

Hope this helps! (I will try to keep my fork continually updated with the main repo, but since it's the main branch I use, I may occasionally add other features to it as well; I'm lazy and don't feel like keeping the different features on separate branches and then merging them into a single production branch.)

Edit: oh, and I forgot to mention that I made it so that each @<command>:path/to/file/ has to end with a /, because having a delimiter meant that I could have file names with spaces. I'm not sure if every valid file name on Windows or Linux is supported, so YMMV, but in general this is-a_filename.org is valid for the commands, and that includes intermediary directory path names.
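The fork's exact delimiter handling may differ, but the idea of a trailing / terminating the path can be sketched with a regex (hypothetical; this version assumes the closing / is followed by whitespace or the end of the message, so intermediate / separators and spaces inside names still work):

```python
import re

# A trailing '/' followed by whitespace or end-of-text closes the path.
CMD = re.compile(r"@(\w+):(.+?)/(?=\s|$)")

def parse_command(text: str):
    """Return (command, path) for the first command found, else None."""
    m = CMD.search(text)
    return (m.group(1), m.group(2)) if m else None
```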

@qaptoR

qaptoR commented Sep 12, 2024

I should also note that this PR is adding 'macro' support, which looks like it's going to be an extensible feature where we can define new macros easily. One of the first macros being developed for the system is @context_file (or similar), which will act like the @code command I mentioned above; it should therefore be reasonable to expect that the rest of my commands will, or could, be implemented using the same macro system:

#198

@Odie
Contributor

Odie commented Sep 13, 2024

> The other thing I did different was make it so that the the imported text always appears IN the message where the command is present...
>
> In general this means that the conversation remains consistent for each successive query, and past contexts will always be available in future queries.

At least with programming, the file being discussed with the LLM (and/or other related files) might immediately have changes applied to it as a result of the discussion. This means that if you're not saving the referenced file away in a separate location (like the 'artifacts' directory discussed some time ago), attempts to recreate the exact state of previous messages from the current state of the files will likely fail. The files could have changed drastically since the old message was issued, been moved, or been deleted altogether.

Robitx added a commit that referenced this issue Sep 19, 2024
- starting with @context_file (issue: #206)
@ekaj2

ekaj2 commented Nov 14, 2024

@UreshiiPanda Just use GpChatPaste multiple times. Configure a good hotkey for it, or set it up to select a file from Telescope, e.g.
