add error_record_table #68
Conversation
TennyZhuang force-pushed from 0a184f2 to 4d45bf4 (Signed-off-by: TennyZhuang <[email protected]>).
## Summary

Our current streaming engine does not help users discover, debug, and handle errors well. When a user hits a data record error, all they can find is a log record like ``ExprError: Parse error: expected `,` or `]` at line 1 column 10 (ProjectExecutor: fragment_id=19007)``.
FWIW, we currently also have error reporting to Prometheus (risingwavelabs/risingwave#7824), but it does not contain the original data (seemingly for the same reason described in the Alternatives section).
@neverchanje proposed similar ideas in risingwavelabs/risingwave#7803:

> But when it comes to production deployment, we will need to provide an option that allows users to persist lost messages somewhere (maybe in Hummock or a log store) so that they can search for what reason and what data were lost (maybe through ElasticSearch).

That issue was closed as "we are now able to find the errors from Grafana", but we haven't supported "allow users to persist lost messages" yet.
Yeah. According to our discussion today, I feel that the major motivation of this RFC is to provide a "dead letter table" per operator, so that users can find the raw records that caused the problem.

I totally agree with the idea, but the approach seems too difficult for a common user, especially when they face tens of tables and can hardly tell which one to look into.
1. `id bigint`: The ID can be generated by a method similar to `row_id` (vnode + local monotonic ID).
2. `error_reason varchar`: A human-readable error message.
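
To make the proposed schema concrete, a minimal DDL sketch; only `id` and `error_reason` come from the RFC, while the table name and the raw-payload column are assumptions:

```sql
-- Hypothetical shape of one error record table (ERT).
CREATE TABLE __internal_mv1_project_19007_error_0 (
    id           bigint,   -- vnode + local monotonic ID, like row_id
    error_reason varchar,  -- human-readable error message
    raw_record   varchar   -- offending input record (assumed column)
);
```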
Do we also have a column for the SQL? Or store it in the `error_reason` column?
IMO we may store an error ID instead of a varchar here?
Then store error_reasons in a system table.
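
A minimal sketch of that split, assuming hypothetical names (`some_error_table`, `error_id`, and a system table `rw_catalog.error_reasons`):

```sql
-- Error rows carry a compact error_id; the human-readable message
-- lives in a (hypothetical) system table keyed by that ID.
SELECT e.id, r.error_reason
FROM some_error_table AS e
JOIN rw_catalog.error_reasons AS r ON e.error_id = r.error_id;
```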
> Do we also have a column for the SQL? Or store it in the `error_reason` column?

Store the mview ID.
> IMO we may store an error ID instead of a varchar here?

Yes we can, but we'd have to deal with backward-compatibility issues, in case users build their own mapping logic on top of the error IDs.
### Creating
The ERTs are automatically created as internal tables when an operator is created. In most cases, an operator will have n ERTs, where n corresponds to the number of inputs it has.
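
As a concrete, hypothetical illustration: a two-input operator such as a hash join would get two ERTs. Assuming ERTs are listed like other internal state tables (e.g. via `SHOW INTERNAL TABLES`, and with made-up names here), a user might see:

```sql
SHOW INTERNAL TABLES;
-- __internal_mv1_hashjoin_5_error_0   -- errors from the left input
-- __internal_mv1_hashjoin_5_error_1   -- errors from the right input
```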
So how can a user get all the errors that happened in an MV? I think users are not supposed to understand the distributed execution plan...
Also, I think it sounds too heavy to keep one error table for each operator. I lean toward keeping a single error table for the whole database.
+1. This implies that stateless operators like `Project` and `Filter` would contain state tables as well. We would even have to handle distributed-execution concerns for them, like scaling or plan evolution. From this perspective, it seems too invasive to me.
> +1. This implies that stateless operators like `Project` and `Filter` would contain state tables as well. We would even have to handle distributed-execution concerns for them, like scaling or plan evolution. From this perspective, it seems too invasive to me.

For scaling, I think it should be less of a problem if we follow Eric's approach of UUID + singleton distribution.

Edit: Sorry, I realized that scale-out still needs to create new state tables, and scale-in needs to merge state tables.
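
A sketch of the blind-write idea under these assumptions: the single `error_records` table is hypothetical, and `gen_random_uuid()` is the Postgres-style function for a random key. Because every writer generates its own unique key, writers need no coordination and no awareness of data distribution:

```sql
-- Any parallel writer can append without coordinating with the others:
-- the random UUID key never collides across writers.
INSERT INTO error_records
VALUES (gen_random_uuid(), 'ExprError: Parse error: expected `,` or `]` ...');
```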
Also an error level column: `warn` / `error` / `critical`.
### Naming
Same as other internal tables, but suffixed with `error_{seq}`.
What's the scope of an ERT (one ERT per executor, or per fragment)?
Do we have any implementation details in this RFC? For example, how will this change the interface of expression evaluation? Will we have to pass the new `StateTable` for error records everywhere?
### Data correction
An ERT could potentially be used to correct data: for example, users could clean up the data within the ERT and then re-import it into the source.
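
A hypothetical sketch of that correction workflow; every name below is an assumption, and whether users may delete from an internal table directly is an open question:

```sql
-- 1. Inspect the failed records of one source operator (names assumed).
SELECT id, error_reason, raw_record
FROM __internal_mv1_source_1_error_0
WHERE error_reason LIKE 'Parse error%';

-- 2. Fix the raw records externally and re-emit them to the source,
--    e.g. produce them to the upstream Kafka topic again.

-- 3. Clean up the errors that have been handled.
DELETE FROM __internal_mv1_source_1_error_0 WHERE id <= 42;
```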
The record in an ERT could be an intermediate result of an arbitrary operator in the plan, so it might be difficult for users to recognize or understand the values (unless they are an expert in RisingWave and know the execution details). So I'm afraid users are not always able to operate on these "discarded" records, let alone correct them, except for some special ones like errors from source parsing or type casting.
After some extra thinking, I would like to propose using a single error_table rather than one error_table per operator. As mentioned in the RFC, our targets include, among others:

3. Users can view the error records directly over psql.
4. Users can reproduce the error easily by a similar SQL.

In short, turning the error_table per operator into a single error_table will basically enhance 3 a lot and slightly impair 4. Additionally, it avoids introducing lots of error_tables, which would be confusing for most people that don't really care about errors, and it keeps backward compatibility. I'll explain these pros and cons one by one.

**"Enhance 3: Users can view the error records directly over psql."**

In the single-table approach, the users only need to query that single table. Instead, in the error_table-per-operator fashion, we might need to provide an extra 'view' for that purpose. Note that this is not an actual 'view', because when a new materialized view appears, the view needs to be updated accordingly, which is somehow dirty to me.

**"Slightly impair 4: Users can reproduce the error easily by a similar SQL."**

In most cases, users don't really need to reproduce it in RisingWave (except when it's RW's bug, but that's another story). For example, as a user, if I encountered a "JSON Parse" exception, it would be enough to tell me the record ID, and I can check the JSON in the upstream by myself. Furthermore, I'll try to fix that data and optionally emit the record into Kafka again to let RisingWave consume it. Reproducing it on the RisingWave side doesn't help me a lot.

**"It avoids introducing lots of error_tables."**

This is ugly both from the user side and for us. From the user side, lots of error_tables would clutter the catalog for people who don't really care about errors. For us, we now assume a table is one of these 3 types: tables, materialized views, and streaming states; the error_tables would not fit neatly into any of them.

**"Keeps backward compatibility."**

Because there will be only one error_table, we can simply hard-code this table in the system catalog. That means we don't need to change the plan nodes of existing fragments, limiting the changes to the operators' code implementation.

Finally, since we have discussed that we will write into the "error_record_tables" with blind writes and singleton distribution (which means the writers aren't aware of any distribution and simply write with a random unique ID), both approaches have similar complexity in terms of the core implementation (i.e. the write path).
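
For instance, under the single-table proposal, finding every error of one materialized view could be a plain filter; a minimal sketch, assuming hypothetical table and column names (`error_records`, `mview_id`, `reported_at` are not settled design):

```sql
-- One hard-coded system table holds all error records (names assumed).
SELECT error_reason, raw_record
FROM error_records
WHERE mview_id = 1001
ORDER BY reported_at DESC;
```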
For 3, I didn't quite get it. Why don't we just create a view that unions all the error tables?

How do we synchronize access to the state table? Or do we use UUID here to synchronize, instead of row_id? If yes, that makes sense 👍 Also, I'm assuming the error record will be stored as JSONB? Is our JSONB able to keep the record losslessly?
No, I'm not using MATERIALIZED VIEW. A "view" in a database is defined by a SQL query:
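
A minimal sketch of what that means; the view name and underlying tables are hypothetical:

```sql
-- A plain (non-materialized) view is just a named query over other tables.
CREATE VIEW all_errors AS
SELECT error_message FROM error_table_of_operator_a
UNION ALL
SELECT error_message FROM error_table_of_operator_b;
```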
If we want to query all the error tables, the query would be like:

```sql
/* note the columns are lost because they can't be merged into one result set */
select error_message from error_table_of_operator_a
union all
select error_message from error_table_of_operator_b
union all
select error_message from error_table_of_operator_c
```

Furthermore, if a materialized view is created or dropped, you need to update the view accordingly:

```sql
select error_message from error_table_of_operator_a
union all
select error_message from error_table_of_operator_b
union all
select error_message from error_table_of_operator_c
union all
select error_message from error_table_of_operator_d -- d is a new operator
```

That's why I said it's kind of dirty: we need to hook into the DDL processes. Or, alternatively, give up the SQL approach and provide a hard-coded access path. This is also dirty, because we need to construct such a query plan in a hard-coded way instead of running the normal optimizer process. 😇
Yes, UUID should be the most practical solution.

Yes, JSONB is one of the options. Technically there is no reason for data loss. Actually, I don't really care whether the record is kept losslessly; I think it's okay to truncate records to limit the size, because the error is only meant for human users to read.

True.
Better names are welcome!