
[FEAT]: Remove the [CONTEXT 0][CONTEXT 2][CONTEXT 5] information #2658

Open
morbificagent opened this issue Nov 20, 2024 · 6 comments
Labels: enhancement, feature request, needs info / can't replicate

@morbificagent

What would you like to see?

Hi everyone,
I am playing around with AnythingLLM connected to Perplexity and it's working well, but all responses contain [CONTEXT 0][CONTEXT 2][CONTEXT 5] markers inside the text, which makes them hard to read.

I have tried to filter them out via the prompt, but it seems to be impossible.

It would be great if anyone has an idea how to filter them out of the response, as they don't help anyone; nobody knows what [CONTEXT 5] means.

morbificagent added the enhancement and feature request labels Nov 20, 2024
@lewismacnow (Contributor) commented Nov 20, 2024

> What would you like to see?
>
> all responses contain [CONTEXT 0][CONTEXT 2][CONTEXT 5] markers inside the text, which makes them hard to read.
>
> I have tried to filter them out via the prompt, but it seems to be impossible.

You can usually work around this with a tweaked system prompt:

"Given the following conversation, relevant context, and a follow-up question, reply with an answer to the current question the user is asking. Return only your response to the question given the above information, following the user's instructions as needed, and do not mention or refer to the 'context' specifically in your response. Respond in plain English only, as if the provided context is knowledge you are aware of; do not respond with JSON or tool calls, respond like a human speaking to another human."

Another good method is to create a "new" model with some example conversation already baked into the model file.
Example from Ollama:

```
# The FROM line is required in a Modelfile; the base model here is a placeholder.
FROM llama3
MESSAGE user Context: [CONTEXT 0]: Grass is green [END CONTEXT 0] [CONTEXT 1]: The Sky is blue [END CONTEXT 1] Is the Sky green?
MESSAGE assistant Last time I checked, the sky was blue. Grass, on the other hand, is green!
```
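Assuming that snippet is saved as a Modelfile, the model can then be built with `ollama create context-demo -f Modelfile` (the name `context-demo` is just an example) and selected in AnythingLLM like any other Ollama model.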

@timothycarambat (Member) commented Nov 20, 2024

> I am playing around with AnythingLLM connected to Perplexity and it's working well

How are you using AnythingLLM? The context is in the system prompt, so it shouldn't be showing up in an LLM response; if it does, that is just model behavior. I haven't had it return context snippets in the response before.

timothycarambat added the needs info / can't replicate label Nov 20, 2024
@morbificagent (Author) commented

Hi everyone, and thanks for the answers.

I have read up a little, and it looks like Perplexity is delivering these [CONTEXT x] markers inside the streamed text response, and saying "don't deliver this..." in the prompt doesn't help.

Here are the tokens that are coming from Perplexity:

```
[backend] info: Original Token: -Wissen
[backend] info: Original Token: darst
[backend] info: Original Token: ellt.[CONTEXT
[backend] info: Original Token: 1][CONTEXT
[backend] info: Original Token: 31][CONTEXT
[backend] info: Original Token: 54]
```

So I don't think there is a good way to filter them out.
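A client-side filter could still strip these in principle, if it buffers any trailing fragment that might grow into a marker before emitting a token. A minimal TypeScript sketch of the idea (this is not AnythingLLM's actual streaming code; all names are hypothetical, and token whitespace is reconstructed from the log):

```typescript
// Strips "[CONTEXT n]" / "[END CONTEXT n]" markers from a token stream,
// even when a marker is split across token boundaries.
const MARKER = /\[(?:END )?CONTEXT \d+\]/g;
const STARTS = ["[CONTEXT ", "[END CONTEXT "];

// True if `s` could still grow into a complete marker.
function isMarkerPrefix(s: string): boolean {
  return STARTS.some(
    (p) =>
      p.startsWith(s) || // still inside the literal part, e.g. "[CONT"
      (s.startsWith(p) && /^\d*$/.test(s.slice(p.length))) // into the digits
  );
}

function createContextFilter() {
  let buffer = "";
  return {
    // Feed one streamed token; returns the text that is safe to emit now.
    push(token: string): string {
      buffer += token;
      const cleaned = buffer.replace(MARKER, "");
      // Hold back a trailing fragment that might be an unfinished marker.
      const bracket = cleaned.lastIndexOf("[");
      if (bracket !== -1 && isMarkerPrefix(cleaned.slice(bracket))) {
        buffer = cleaned.slice(bracket);
        return cleaned.slice(0, bracket);
      }
      buffer = "";
      return cleaned;
    },
    // Emit whatever is left once the stream ends.
    flush(): string {
      const rest = buffer.replace(MARKER, "");
      buffer = "";
      return rest;
    },
  };
}

// The tokens from the log above come out clean:
const filter = createContextFilter();
const tokens = ["-Wissen", " darst", "ellt.[CONTEXT", " 1][CONTEXT", " 31][CONTEXT", " 54]"];
console.log(tokens.map((t) => filter.push(t)).join("") + filter.flush());
// -> "-Wissen darstellt."
```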

@timothycarambat (Member) commented

Ah, seems like model behavior. I wonder if swapping that to XML would help; lots of training data is XML/HTML, so the model may be better at stripping that out of the response. Which Perplexity model are you using, so I can try to repro?
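For reference, the swap would mean wrapping the injected snippets in XML-style tags instead of bracket markers. A hypothetical sketch of what the injected block could look like (the tag name and helper are illustrative, not AnythingLLM's actual format):

```typescript
// Hypothetical: build the context block with XML-style tags, which models
// trained heavily on HTML/XML may be better at keeping out of their output.
function buildContextBlock(snippets: string[]): string {
  return snippets
    .map((text, i) => `<context id="${i}">${text}</context>`)
    .join("\n");
}

// buildContextBlock(["Grass is green", "The sky is blue"]) yields:
// <context id="0">Grass is green</context>
// <context id="1">The sky is blue</context>
```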

@morbificagent (Author) commented

[screenshot of the selected Perplexity model]
It's this one...

And I don't know why, but sometimes single words in a sentence come out in different languages when using this model...

timothycarambat self-assigned this Nov 22, 2024
@morbificagent (Author) commented

Any news on this? Is it fixable, or do we have to live with it because it's a problem purely on Perplexity's side?
