Very slow process of compaction after index setup #390
What type of disks are you using? I alleviated similar compaction problems by switching to solid-state drives.
@FourSeventy Unfortunately, it's not an I/O bottleneck.
Unfortunately, this makes Lucene indexes totally unusable. Maybe I can do some kind of debugging? By the way, is it normal that MemtableFlushWriter spams the log file roughly every 2 minutes when there are no reads or updates?
Okay, what was wrong with geo_point?
So, which Cassandra version and which plugin version should we use to avoid compatibility issues? Any suggestions?
Good day.
C* is 3.11, with the matching plugin version; Ubuntu 16.04, latest Java 1.8.
One DC, 3 nodes, keyspace with RF=3,
on EC2 instances with 2 CPUs and 4 GB of memory each.
The cluster works well: data is inserted in batches every 15 minutes, with no compaction or performance problems, and a data size of around 15M rows.
But I'm seeing strange behavior after creating a Lucene index:
I created the index.
The index was created and works well.
The next day I see a load average above 3 (on each node), with a queue of 8 compactions.
I dropped the index, and all compactions finished within 15 minutes.
I recreated the index and got the same result the next day.
The table is simple, as follows:
Do I need to upgrade the EC2 instances to something more powerful, or have I hit a bug?
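For context, the create/drop cycle described above typically looks like the following. This is a hypothetical sketch only: the keyspace, table, column names, and schema options are placeholders, not the actual schema from this report (which was not included).

```sql
-- Hypothetical sketch of creating a Stratio cassandra-lucene-index index.
-- my_keyspace, my_table, and the column "body" are placeholder names.
CREATE CUSTOM INDEX my_table_lucene_idx ON my_keyspace.my_table ()
USING 'com.stratio.cassandra.lucene.Index'
WITH OPTIONS = {
    'refresh_seconds': '60',
    'schema': '{
        fields: {
            body: {type: "text", analyzer: "english"}
        }
    }'
};

-- Dropping the index (in this report, the queued compactions then drained
-- within about 15 minutes):
DROP INDEX my_keyspace.my_table_lucene_idx;
```

The empty parentheses after the table name are the plugin's row-level index syntax for Cassandra 3.x.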