-
I started GreptimeDB in standalone mode, connected to it with a MySQL client, created a database named 'test', and created a table using the following statements: Then I used a SQL script to import data into 'test_table' through the MySQL client. The insert SQL looks like the following statement: The table 'test_table' contains about seventeen million rows.
-
Did you build GreptimeDB in release mode? I've done some similar benchmarks before and saw that queries were indeed executed slowly under a debug build; performance improved greatly when I switched to a release build.
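For context, Cargo's default profile is the unoptimized `dev` (debug) profile, and a release build is produced with `cargo build --release`. A minimal, standalone sketch (not GreptimeDB-specific) of checking at runtime which profile a Rust binary was compiled under, which is useful before trusting any benchmark numbers:

```rust
fn main() {
    // cfg!(debug_assertions) evaluates to true under Cargo's default
    // `dev` profile and to false under `cargo build --release`, so it
    // is a quick sanity check before benchmarking.
    if cfg!(debug_assertions) {
        println!("debug build: benchmark numbers will be misleading");
    } else {
        println!("release build: optimizations enabled");
    }
}
```

Timing queries against a debug binary can easily be an order of magnitude off, so a check like this helps rule out the most common benchmarking mistake.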
-
Even with a release build, there are still many bottlenecks in the system. Our default memtable size is quite small, so there may be too many small files to scan and merge. We haven't implemented compaction yet, but we are working on it in #930. Unfortunately, the memtable size is not configurable yet. There is also room to improve file scan performance. I think we can address some of these performance issues in the 0.2 release.