This repository has been archived by the owner on Apr 14, 2023. It is now read-only.
Currently in Datadog I have some dashboards set up using the `apollo.operations.error_count` metric, which works great for the most part, but I've noticed it's rolling UserInputErrors into the count, which generates a lot of extra noise. I'm using these metrics as SLO monitors, and the UserInputErrors are skewing the percentages drastically.
I've seen some open issues about removing those errors using dd-trace, but that doesn't seem like what I need, since the data is being forwarded from y'all. Any help / insight would be greatly appreciated!
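One possible workaround (an assumption on my part, not a confirmed fix from the maintainers): if the Datadog numbers come from Apollo's usage reporting, the usage-reporting plugin's `rewriteError` hook can return `null` for errors you don't want counted, so UserInputErrors never reach the forwarded metric. A minimal sketch of the filter logic, assuming Apollo Server tags UserInputError with the `BAD_USER_INPUT` extension code:

```javascript
// Sketch: drop client-input errors from usage reporting so they are
// excluded from apollo.operations.error_count. Returning null from
// rewriteError omits the error from the report entirely.
function rewriteError(err) {
  // Apollo's UserInputError sets extensions.code to 'BAD_USER_INPUT'.
  if (err.extensions && err.extensions.code === 'BAD_USER_INPUT') {
    return null; // excluded from reporting (and downstream Datadog rollups)
  }
  return err; // everything else still counts toward the metric
}

// Wiring it up would look roughly like this (Apollo Server 3.x,
// not executed here):
//
// const { ApolloServer } = require('apollo-server');
// const { ApolloServerPluginUsageReporting } = require('apollo-server-core');
// const server = new ApolloServer({
//   typeDefs,
//   resolvers,
//   plugins: [ApolloServerPluginUsageReporting({ rewriteError })],
// });
```

Whether the Datadog integration respects errors filtered this way is something I haven't verified, so treat it as a starting point.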