
What can we do about the over aggressive content filter at GitHub #711

Open

SamSaffron opened this issue Jan 9, 2025 · 3 comments

Labels: enhancement (New feature or request)

Comments


SamSaffron commented Jan 9, 2025

I am getting this way too much on Sonnet 3.5, so much that it is becoming unusable:

````lua
  choices = { {
      content_filter_offsets = {
        check_offset = 0,
        end_offset = 152,
        start_offset = 0
      },
      content_filter_results = {
        error = {
          code = "",
          message = ""
        },
        hate = {
          filtered = false,
          severity = "0"
        },
        self_harm = {
          filtered = false,
          severity = "0"
        },
        sexual = {
          filtered = false,
          severity = "0"
        },
        violence = {
          filtered = false,
          severity = "0"
        }
      },
      delta = {
        content = "d version:\n\n```javascript",
        copilot_annotations = {
          TextCopyright = { {
              citations = vim.empty_dict(),
              details = vim.empty_dict(),
              id = 0
            } }
        },
        role = "assistant"
      },
      finish_reason = "content_filter",
      index = 0
    } },
  created = 1736400745,
  id = "msg_bdrk_0115ZxJEq23SU48ZmC6mvKB1",
  model = "claude-3.5-sonnet",
  usage = {
    completion_tokens = 17,
    prompt_tokens = 3893,
    total_tokens = 3910
  }
````

I just discovered this is due to the "Suggestions matching public code" policy here:

[screenshot: Copilot policy "Suggestions matching public code"]

⏫ it was set to block

I guess there are 2 possible action items here:

  1. Display a nice error message when we hit a `finish_reason` of "content_filter"
  2. Add something to the README about it
@SamSaffron (Author)

Another interesting option might be support for bringing your own (BYO) OpenAI-compatible LLM?

@deathbeam added the enhancement (New feature or request) label on Jan 9, 2025
@deathbeam (Collaborator)

Well, this is a Copilot plugin, so we only support Copilot stuff. But I agree that we should show a better error message. The issue is that better error reporting depends on a bug in plenary.nvim; I made a PR for it, but it is still open: nvim-lua/plenary.nvim#633

@SamSaffron (Author)


I am not sure there is a status issue here; the status is 200. It is just that once we see a chunk with `finish_reason = "content_filter"`, we know something bad happened and can report an error.
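The check described above can be sketched in plain Lua. This is a minimal illustration, not the plugin's actual code: the helper name `content_filter_error`, the message wording, and the chunk shape (matching the dump earlier in this issue) are all assumptions.

```lua
-- Hypothetical helper: given a decoded response chunk shaped like the dump
-- above, return a user-facing error message if Copilot's content filter
-- fired, or nil if the chunk is fine. (Names and wording are assumptions.)
local function content_filter_error(chunk)
  for _, choice in ipairs(chunk.choices or {}) do
    if choice.finish_reason == "content_filter" then
      return 'Copilot blocked the response (finish_reason = "content_filter"). '
        .. 'Check the "Suggestions matching public code" policy for your account.'
    end
  end
  return nil
end

-- A chunk like the one in the dump produces an error message;
-- an ordinary chunk produces nil.
print(content_filter_error({ choices = { { finish_reason = "content_filter" } } }))
print(content_filter_error({ choices = { { finish_reason = "stop" } } }))
```

In a Neovim plugin the returned message could then be surfaced with something like `vim.notify(msg, vim.log.levels.ERROR)` instead of silently dropping the stream.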
