Failed GraphQL query reported as successful #629
Comments
Related bug: graphprotocol/graph-node#4488
The absence of logs can be a pain point, but I wouldn't recommend introducing a deserialization step into the indexer-service query path, which should be as lightweight as possible. Doing so would also hurt indexer stats for query response time. I believe graph-node is the ideal component to address this issue, as it is the one that populates the errors in the response. I've left a comment in the linked graph-node issue that might help alleviate this.
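For illustration only (this is not an existing indexer-service or graph-node feature): if graph-node signalled query failures out-of-band, for example in a hypothetical response header, indexer-service could log them without deserializing the GraphQL body on the hot path.

```typescript
// Hypothetical sketch: an out-of-band failure flag from graph-node (here an
// invented 'graph-query-error' header) would let indexer-service log failed
// queries without parsing the GraphQL payload, keeping the query path cheap.
interface UpstreamResponse {
  status: number;
  headers: Map<string, string>;
  body: string; // serialized GraphQL response, passed through untouched
}

interface Logger {
  warn(msg: string, meta?: Record<string, unknown>): void;
}

function logIfQueryFailed(logger: Logger, response: UpstreamResponse): void {
  // O(1) header lookup instead of JSON.parse over the whole body, so query
  // response-time stats are essentially unaffected.
  const errorFlag = response.headers.get('graph-query-error');
  if (errorFlag !== undefined) {
    logger.warn('Upstream graph-node reported GraphQL errors', { errorFlag });
  }
}
```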
@madumas Is logging your only concern here? Or do you believe the indexer-service should behave differently for failed queries?
The goal here is to identify errors that are actionable, i.e. linked to a problem in the indexer infrastructure. Ideally graph-node would "know" which errors those are, and I like your suggestion of having it flag them; different error codes might be useful. The reason the PR throws on the errors is to trigger an
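As a rough sketch of what "actionable" could mean in code (this is not the linked PR, and the pattern list is only seeded with the error message from this issue), a cheap string check can separate infrastructure-level failures from ordinary query errors before deciding to log or throw:

```typescript
// Illustrative only: classify errors embedded in the serialized GraphQL
// response. The pattern list is an assumption based on the error message
// reported in this issue, not a catalogue of graph-node error strings.
const ACTIONABLE_PATTERNS: RegExp[] = [
  /Store error/i, // e.g. "Store error: database unavailable"
];

function isActionableFailure(serializedResponse: string): boolean {
  // Fast pre-check: successful responses usually carry no top-level "errors"
  // key, so most requests never reach the regex tests below.
  if (!serializedResponse.includes('"errors"')) {
    return false;
  }
  return ACTIONABLE_PATTERNS.some((pattern) => pattern.test(serializedResponse));
}
```

If graph-node exposed distinct error codes, this string matching could be replaced by a simple code lookup.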
Our indexer was returning errors for some networks:
http post http://host1/subgraphs/id/QmQohvyxLZLpwouzPF61kTzmLjgawmt6qDWZrPndz1M9EQ Host:graph-mainnet.ellipfra.net Authorization:'XXXXX' query='query MyQuery { _meta { block { number } } }'
HTTP/1.1 200 OK
{ "graphQLResponse": "{\"errors\":[{\"message\":\"Store error: database unavailable\"}]}" }
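For reference, a minimal external probe that treats an HTTP 200 response with an embedded errors array as a failure could look like the sketch below (placeholder host and auth token, assuming Node 18+ with the global fetch API):

```typescript
// Minimal health probe sketch: the endpoint returns HTTP 200 even when the
// GraphQL payload carries errors, so the probe must inspect the body.
// URL and Authorization value are placeholders from the report above.
const url =
  'http://host1/subgraphs/id/QmQohvyxLZLpwouzPF61kTzmLjgawmt6qDWZrPndz1M9EQ';

async function probe(): Promise<void> {
  const res = await fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'XXXXX', // placeholder free-query auth token
    },
    body: JSON.stringify({ query: 'query MyQuery { _meta { block { number } } }' }),
  });
  // The service wraps the GraphQL response in { "graphQLResponse": "..." }.
  const outer = (await res.json()) as { graphQLResponse: string };
  const inner = JSON.parse(outer.graphQLResponse);
  if (res.ok && inner.errors) {
    console.error('HTTP 200 but GraphQL errors:', inner.errors);
    process.exitCode = 1;
  }
}

probe().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});
```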
The indexer-service logs do not hint at any problem:
{"level":30,"time":1679489726789,"pid":1,"hostname":"indexer-service-6fc7bb878c-ljnmg","name":"IndexerService","indexer":"0x62A0BD1d110FF4E5b793119e95Fc07C9d1Fc8c4a","operator":"0x4ecB19A2aC49C5DecFa5E65B6669C7e7fab5da9D","indexer":"0x62A0BD1d110FF4E5b793119e95Fc07C9d1Fc8c4a","operator":"0x4ecB19A2aC49C5DecFa5E65B6669C7e7fab5da9D","component":"Server","deployment":{"bytes32":"0x24a5b2e65c85a2debf7d9dd783f9c0bc1df52620039e94a2d60cee560834f969","ipfsHash":"QmQohvyxLZLpwouzPF61kTzmLjgawmt6qDWZrPndz1M9EQ"},"msg":"Received free query"} {"level":20,"time":1679489726795,"pid":1,"hostname":"indexer-service-6fc7bb878c-ljnmg","name":"IndexerService","indexer":"0x62A0BD1d110FF4E5b793119e95Fc07C9d1Fc8c4a","operator":"0x4ecB19A2aC49C5DecFa5E65B6669C7e7fab5da9D","indexer":"0x62A0BD1d110FF4E5b793119e95Fc07C9d1Fc8c4a","operator":"0x4ecB19A2aC49C5DecFa5E65B6669C7e7fab5da9D","component":"Server","msg":"POST /subgraphs/id/QmQohvyxLZLpwouzPF61kTzmLjgawmt6qDWZrPndz1M9EQ 200 88 - 6.101 ms"}
The Prometheus metrics endpoint reports the query as successful.
graph-indexer v0.20.11
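A counter dedicated to queries whose response carries GraphQL errors would make this visible in Prometheus. The sketch below assumes prom-client; the metric name and label are invented for illustration and do not exist in indexer-service today.

```typescript
// Illustration only: a hypothetical counter for queries whose GraphQL
// response contained an errors array. Metric name and label are invented;
// no such metric exists in indexer-service at the time of this issue.
import { Counter } from 'prom-client';

const queriesWithGraphqlErrors = new Counter({
  name: 'indexer_service_queries_with_graphql_errors_total',
  help: 'Queries that returned HTTP 200 but carried a GraphQL errors array',
  labelNames: ['deployment'],
});

// Called wherever a (cheap) error check flags an embedded failure.
export function recordFailedQuery(deployment: string): void {
  queriesWithGraphqlErrors.inc({ deployment });
}
```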
This lack of error reporting prevents indexers from detecting production issues; in this case the problem was only apparent from the gateways.