SELECT id, name, schema_id FROM rw_catalog.rw_tables WHERE name = 'macros';
id | name | schema_id
------+--------+-----------
4815 | macros | 4028
select * from rw_depend where refobjid = 4815;
objid | refobjid
-------+----------
(0 rows)
drop table macros;
ERROR: Failed to run the query
Caused by:
Permission denied: PermissionDenied: Fail to delete table `macros` because 2356 other relation(s) depend on it
rw_depend only contains records for relations that have already been successfully created; its data comes from the dependent_relations field in the proto.
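For context, a minimal query sketch for listing the recorded dependents by name, assuming the rw_catalog.rw_relations view exposes id and name on this version:

```sql
-- Relations that rw_depend records as depending on `macros` (id 4815).
-- Only relations whose creation has fully completed appear here, which is why
-- the query above returned 0 rows even though downstream jobs were still being created.
SELECT d.objid, r.name
FROM rw_catalog.rw_depend d
JOIN rw_catalog.rw_relations r ON r.id = d.objid
WHERE d.refobjid = 4815;
```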
The dependency check uses the in-memory relation_ref_count, and the error message shows that the reference count of macros is 2356, which is clearly dirty. But why is it so large? 🤔 There are indeed many downstream fragments.
This indicates that table macros indeed had some downstream streaming jobs still being created. After the user executed drop cascade, the catalogs of both macros and the downstream relations were deleted, and the table fragments of macros were deleted as well, but the actors of the downstream jobs were not cleaned up during recovery. As a result, recovery kept looping with the error "actor xxxx not found in info table". This is likely related to the in_progress_creation_streaming_job cache not being updated. The fact that recovery succeeded after restarting meta corroborates this, since the downstream actors would have been cleaned up after the restart.
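One way to sanity-check this from SQL (a sketch; it assumes rw_catalog.rw_ddl_progress exists with these columns on this version) is to see whether meta still reports creating streaming jobs after the drop cascade:

```sql
-- If meta's in-memory state had been cleaned up correctly, no creating jobs
-- that depended on `macros` should still be listed after DROP TABLE ... CASCADE.
SELECT ddl_id, ddl_statement, progress
FROM rw_catalog.rw_ddl_progress;
```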
This issue has been open for 60 days with no activity.
If you think it is still relevant today and should be addressed in the near future, comment to update its status, or manually remove the no-issue-activity label.
You can also confidently close this issue as not planned to keep our backlog clean.
Don't worry if you think the issue is still worth pursuing later.
It remains searchable and can be reopened when the time comes. 😄
Describe the bug
https://risingwave-labs.slack.com/archives/C06K0R9UTQR/p1730782295995279?thread_ts=1730737567.989079&cid=C06K0R9UTQR
Error message/log
No response
To Reproduce
No response
Expected behavior
No response
How did you deploy RisingWave?
Cloud
The version of RisingWave
v1.10.2-patch-us-east-2-672-agg-state-cache-metric
Additional context
No response