How to deal with a corrupted data file #37497
-
One of the insert_logs stored in S3 is corrupted and cannot be recovered (all of its blocks are gone). I observed that compaction keeps trying to compact the corrupted insert_log, failing, and re-downloading it in an endless loop. I want to check what the proper action is to deal with this kind of situation.
-
The path of the corrupted file is still recorded in etcd, so Milvus keeps trying to compact it. There is a developer tool, Birdwatcher: https://github.com/milvus-io/birdwatcher. With this tool, you can manually clean up the stale metadata in Milvus.
-
Thanks @yhmo @xiaofan-luan
-
Similar to this discussion: #37411. @Reidddddd I forgot to add "--" before "run" in the command below; the last flag should be `--run`.
connect --etcd [ip]:[port] --rootPath xxx
remove binlog --collectionID xxxxxx --partitionID xxxxxx --segmentID xxxxxx --fieldID xxx --logType statslog --run
You can find the collectionID/p…
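For reference, a full Birdwatcher session based on the commands above might look like the sketch below. The etcd address, root path, and all IDs are placeholders you must fill in from your own deployment ("by-dev" is only the default rootPath in a stock Milvus config, so verify yours), and the exact set of Birdwatcher subcommands can vary by version:

```
# Connect Birdwatcher to the same etcd instance that Milvus uses,
# with the rootPath from your milvus.yaml (default is often "by-dev")
connect --etcd 127.0.0.1:2379 --rootPath by-dev

# Remove the stale statslog entry for the corrupted segment.
# The IDs below are placeholders: take them from the Milvus logs of the
# failing compaction, or inspect metadata with Birdwatcher's show
# commands if your version provides them.
remove binlog --collectionID <collectionID> --partitionID <partitionID> --segmentID <segmentID> --fieldID <fieldID> --logType statslog --run
```

As noted above, without the trailing `--run` flag the remove command did not take effect, so double-check it is present before expecting the metadata to change.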