Merge branch 'main' into wrj/detach-java-udf
wangrunji0408 authored Sep 12, 2023
2 parents 3eec78f + 8b61c92 · commit 9f3a151
Showing 62 changed files with 903 additions and 435 deletions.
598 changes: 290 additions & 308 deletions Cargo.lock

Large diffs are not rendered by default.

4 changes: 4 additions & 0 deletions ci/scripts/e2e-source-test.sh
@@ -141,6 +141,10 @@ chmod +x ./scripts/source/prepare_data_after_alter.sh
./scripts/source/prepare_data_after_alter.sh 2
sqllogictest -p 4566 -d dev './e2e_test/source/basic/alter/kafka_after_new_data.slt'

echo "--- e2e, kafka alter source again"
./scripts/source/prepare_data_after_alter.sh 3
sqllogictest -p 4566 -d dev './e2e_test/source/basic/alter/kafka_after_new_data_2.slt'

echo "--- Run CH-benCHmark"
./risedev slt -p 4566 -d dev './e2e_test/ch_benchmark/batch/ch_benchmark.slt'
./risedev slt -p 4566 -d dev './e2e_test/ch_benchmark/streaming/*.slt'
6 changes: 3 additions & 3 deletions docker/README.md
@@ -58,7 +58,7 @@ It will start a minio, a meta node, a compute node, a frontend, a compactor, a p
### s3 and other s3-compatible storage backend
To start a RisingWave cluster with an s3 backend, configure the AWS credentials in [aws.env](https://github.com/risingwavelabs/risingwave/blob/main/docker/aws.env).
If you want to use an s3-compatible storage service such as Tencent Cloud COS, also configure the [endpoint](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/aws.env#L7).
After configuring the environment and filling in your [bucket name and data directory](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-s3.yml#L196), run
After configuring the environment and filling in your [bucket name](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-s3.yml#L196), run

```
# Start all components
@@ -68,7 +68,7 @@ docker-compose -f docker-compose-with-s3.yml up
It will run with s3 (or s3-compatible) object storage, together with a meta node, a compute node, a frontend, a compactor, a Prometheus instance, and a Redpanda instance.

### Start with other storage products of public cloud vendors
To start a RisingWave cluster with another storage backend, such as Google Cloud Storage, Alicloud OSS, or Azure Blob Storage, configure the authentication information in [multiple_object_storage.env](https://github.com/risingwavelabs/risingwave/blob/main/docker/multiple_object_storage.env) and fill in your [bucket name and data directory](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-gcs.yml#L196),
To start a RisingWave cluster with another storage backend, such as Google Cloud Storage, Alicloud OSS, or Azure Blob Storage, configure the authentication information in [multiple_object_storage.env](https://github.com/risingwavelabs/risingwave/blob/main/docker/multiple_object_storage.env) and fill in your [bucket name](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-gcs.yml#L196),
and run

```
@@ -79,7 +79,7 @@ docker-compose -f docker-compose-with-xxx.yml up
It will run RisingWave with the corresponding (object) storage product.

### Start with HDFS backend
To start a RisingWave cluster with HDFS, mount your `HADOOP_HOME` in the [compactor node volumes](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-hdfs.yml#L28), [compute node volumes](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-hdfs.yml#L112), and [compute node volumes](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-hdfs.yml#L218), and fill in the [cluster_name/namenode and data_path](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-hdfs.yml#L202),
To start a RisingWave cluster with HDFS, mount your `HADOOP_HOME` in the [compactor node volumes](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-hdfs.yml#L28), [compute node volumes](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-hdfs.yml#L112), and [compute node volumes](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-hdfs.yml#L218), and fill in the [cluster_name/namenode](https://github.com/risingwavelabs/risingwave/blob/a2684461e379ce73f8d730982147439e2379de16/docker/docker-compose-with-hdfs.yml#L202),
and run

```
2 changes: 1 addition & 1 deletion docker/docker-compose-with-azblob.yml
@@ -50,7 +50,7 @@ services:
- "--connector-rpc-endpoint"
- "connector-node:50051"
- "--state-store"
- "hummock+azblob://<container_name>@<root>"
- "hummock+azblob://<container_name>"
- "--data-directory"
- "hummock_001"
- "--config-path"
2 changes: 1 addition & 1 deletion docker/docker-compose-with-gcs.yml
@@ -50,7 +50,7 @@ services:
- "--connector-rpc-endpoint"
- "connector-node:50051"
- "--state-store"
- "hummock+gcs://<bucket-name>@<data_path>"
- "hummock+gcs://<bucket-name>"
- "--data-directory"
- "hummock_001"
- "--config-path"
2 changes: 1 addition & 1 deletion docker/docker-compose-with-hdfs.yml
@@ -198,7 +198,7 @@ services:
- "--connector-rpc-endpoint"
- "connector-node:50051"
- "--state-store"
- "hummock+hdfs://<cluster_name>@<data_path>"
- "hummock+hdfs://<cluster_name>"
- "--data-directory"
- "hummock_001"
- "--config-path"
2 changes: 1 addition & 1 deletion docker/docker-compose-with-oss.yml
@@ -50,7 +50,7 @@ services:
- "--connector-rpc-endpoint"
- "connector-node:50051"
- "--state-store"
- "hummock+oss://<bucket_name>@<data_path>"
- "hummock+oss://<bucket_name>"
- "--data-directory"
- "hummock_001"
- "--config-path"
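Taken together, the compose-file changes above drop the `@<data_path>` suffix from the Hummock state-store URL: the URL now names only the bucket or container, and the data path is supplied solely through the `--data-directory` flag that each file already passes. A minimal sketch of the resulting argument layout, using the OSS variant shown above (the service name and placeholders are illustrative, not taken from this commit):

```
# Sketch only: the state-store URL no longer carries an "@<data_path>" suffix.
compactor-0:
  command:
    - compactor-node
    - "--state-store"
    - "hummock+oss://<bucket_name>"   # bucket/container only
    - "--data-directory"
    - "hummock_001"                   # the data path now lives here
```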
2 changes: 1 addition & 1 deletion docker/docker-compose.yml
@@ -2,7 +2,7 @@
version: "3"
services:
compactor-0:
image: "ghcr.io/risingwavelabs/risingwave:${RW_IMAGE_VERSION:-v1.0.0}"
image: "ghcr.io/risingwavelabs/risingwave:${RW_IMAGE_VERSION:-v1.2.0}"
command:
- compactor-node
- "--listen-addr"
37 changes: 37 additions & 0 deletions e2e_test/batch/functions/array_sort.slt.part
@@ -0,0 +1,37 @@
query I
select array_sort(array[3, 2, 1]);
----
{1,2,3}

query I
select array_sort(array[3.14, 2.12, 1.14514]);
----
{1.14514,2.12,3.14}

query I
select array_sort(array['b', 'a', 'c']);
----
{a,b,c}

query I
select array_sort(array[-1000, 2000, 0]);
----
{-1000,0,2000}

query I
select array_sort(array['abcdef', 'aacedf', 'aaadef']);
----
{aaadef,aacedf,abcdef}

query I
select array_sort(array['114514🤔️1919810', '113514🥵1919810', '112514😅1919810']);
----
{112514😅1919810,113514🥵1919810,114514🤔️1919810}

query I
select array_sort(array[3, 2, NULL, 1, NULL]);
----
{1,2,3,NULL,NULL}

query error invalid digit found in string
select array_sort(array[3, 2, 1, 'a']);
8 changes: 8 additions & 0 deletions e2e_test/ddl/table/generated_columns.slt.part
@@ -149,3 +149,11 @@ CREATE TABLE t (v INT, t timestamptz as now()) WITH (
datagen.rows.per.second='15',
datagen.split.num = '1'
) FORMAT PLAIN ENCODE JSON;

# create a table with impure generated column as pk.
statement error QueryError: Bind error: Generated columns should not be part of the primary key. Here column "v2" is defined as part of the primary key.
CREATE TABLE t (
v1 INT,
v2 timestamptz AS proctime(),
PRIMARY KEY (v1, v2)
);
28 changes: 25 additions & 3 deletions e2e_test/source/basic/alter/kafka.slt
@@ -14,13 +14,22 @@ CREATE SOURCE s2 (v2 varchar) with (
scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;

statement ok
CREATE TABLE t (v1 int) with (
connector = 'kafka',
topic = 'kafka_alter',
properties.bootstrap.server = 'message_queue:29092',
scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;


statement ok
create materialized view mv1 as select * from s1;

statement ok
create materialized view mv2 as select * from s2;

sleep 10s
sleep 5s

statement ok
flush;
@@ -35,6 +44,11 @@ select * from s2;
----
11

query I
select * from t;
----
1

# alter source
statement ok
alter source s1 add column v2 varchar;
@@ -49,7 +63,10 @@ create materialized view mv3 as select * from s1;
statement ok
create materialized view mv4 as select * from s2;

sleep 10s
statement ok
alter table t add column v2 varchar;

sleep 5s

statement ok
flush;
@@ -84,14 +101,19 @@ select * from mv4
----
11 NULL

query IT
select * from t
----
1 NULL

# alter source again
statement ok
alter source s1 add column v3 int;

statement ok
create materialized view mv5 as select * from s1;

sleep 10s
sleep 5s

statement ok
flush;
15 changes: 15 additions & 0 deletions e2e_test/source/basic/alter/kafka_after_new_data.slt
@@ -45,6 +45,21 @@ select * from mv5
1 11 111
2 22 222

query IT rowsort
select * from t
----
1 NULL
2 22

statement ok
alter table t add column v3 int;

query IT rowsort
select * from t
----
1 NULL NULL
2 22 NULL

statement ok
drop materialized view mv1

14 changes: 14 additions & 0 deletions e2e_test/source/basic/alter/kafka_after_new_data_2.slt
@@ -0,0 +1,14 @@
sleep 5s

statement ok
flush;

query IT rowsort
select * from t
----
1 NULL NULL
2 22 NULL
3 33 333

statement ok
drop table t;
2 changes: 2 additions & 0 deletions proto/ddl_service.proto
@@ -239,6 +239,8 @@ message ReplaceTablePlanRequest {
stream_plan.StreamFragmentGraph fragment_graph = 2;
// The mapping from the old columns to the new columns of the table.
catalog.ColIndexMapping table_col_index_mapping = 3;
// Source catalog of table's associated source
catalog.Source source = 4;
}

message ReplaceTablePlanResponse {
1 change: 1 addition & 0 deletions proto/expr.proto
@@ -199,6 +199,7 @@ message ExprNode {
ARRAY_TRANSFORM = 545;
ARRAY_MIN = 546;
ARRAY_MAX = 547;
ARRAY_SORT = 549;

// Int256 functions
HEX_TO_INT256 = 560;
1 change: 1 addition & 0 deletions scripts/source/alter_data/kafka_alter.3
@@ -0,0 +1 @@
{"v1": 3, "v2": "33", "v3": 333}
2 changes: 1 addition & 1 deletion src/batch/Cargo.toml
@@ -67,7 +67,7 @@ rand = "0.8"
tempfile = "3"

[target.'cfg(unix)'.dev-dependencies]
tikv-jemallocator = { git = "https://github.com/yuhao-su/jemallocator.git", rev = "a0911601bb7bb263ca55c7ea161ef308fdc623f8" }
tikv-jemallocator = { git = "https://github.com/risingwavelabs/jemallocator.git", rev = "b7f9f3" }

[[bench]]
name = "filter"
2 changes: 1 addition & 1 deletion src/bench/Cargo.toml
@@ -46,7 +46,7 @@ tracing-subscriber = "0.3.17"
workspace-hack = { path = "../workspace-hack" }

[target.'cfg(target_os = "linux")'.dependencies]
nix = { version = "0.26", features = ["fs", "mman"] }
nix = { version = "0.27", features = ["fs", "mman"] }

[[bin]]
name = "s3-bench"
6 changes: 5 additions & 1 deletion src/cmd/Cargo.toml
@@ -46,7 +46,11 @@ workspace-hack = { path = "../workspace-hack" }
task_stats_alloc = { path = "../utils/task_stats_alloc" }

[target.'cfg(unix)'.dependencies]
tikv-jemallocator = { git = "https://github.com/yuhao-su/jemallocator.git", features = ["profiling", "stats", "unprefixed_malloc_on_supported_platforms"], rev = "a0911601bb7bb263ca55c7ea161ef308fdc623f8" }
tikv-jemallocator = { git = "https://github.com/risingwavelabs/jemallocator.git", features = [
"profiling",
"stats",
"unprefixed_malloc_on_supported_platforms",
], rev = "b7f9f3" }

[[bin]]
name = "frontend"
6 changes: 5 additions & 1 deletion src/cmd_all/Cargo.toml
@@ -59,7 +59,11 @@ vergen = { version = "8", default-features = false, features = ["build", "git",
task_stats_alloc = { path = "../utils/task_stats_alloc" }

[target.'cfg(unix)'.dependencies]
tikv-jemallocator = { git = "https://github.com/yuhao-su/jemallocator.git", features = ["profiling", "stats", "unprefixed_malloc_on_supported_platforms"], rev = "a0911601bb7bb263ca55c7ea161ef308fdc623f8" }
tikv-jemallocator = { git = "https://github.com/risingwavelabs/jemallocator.git", features = [
"profiling",
"stats",
"unprefixed_malloc_on_supported_platforms",
], rev = "b7f9f3" }

[[bin]]
name = "risingwave"
12 changes: 12 additions & 0 deletions src/common/src/catalog/column.rs
@@ -15,6 +15,7 @@
use std::borrow::Cow;

use itertools::Itertools;
use risingwave_pb::expr::ExprNode;
use risingwave_pb::plan_common::column_desc::GeneratedOrDefaultColumn;
use risingwave_pb::plan_common::{PbColumnCatalog, PbColumnDesc};

@@ -282,6 +283,17 @@ impl ColumnCatalog {
self.column_desc.is_generated()
}

/// Returns the expression of a generated column, or `None` if the column is not generated.
pub fn generated_expr(&self) -> Option<&ExprNode> {
if let Some(GeneratedOrDefaultColumn::GeneratedColumn(desc)) =
&self.column_desc.generated_or_default_column
{
Some(desc.expr.as_ref().unwrap())
} else {
None
}
}

/// If the column is a column with default expr
pub fn is_default(&self) -> bool {
self.column_desc.is_default()
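As a usage illustration of the new accessor (not part of this commit), a hypothetical helper could collect the expressions of all generated columns; the import paths are assumed from the file location above (`src/common/src/catalog/column.rs`):

```
use risingwave_common::catalog::ColumnCatalog;
use risingwave_pb::expr::ExprNode;

/// Hypothetical helper: gather the generated expressions of a table's columns.
/// Ordinary and default-valued columns return `None` from `generated_expr()`
/// and are skipped.
fn collect_generated_exprs(columns: &[ColumnCatalog]) -> Vec<&ExprNode> {
    columns
        .iter()
        .filter_map(|col| col.generated_expr())
        .collect()
}
```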
25 changes: 13 additions & 12 deletions src/common/src/config.rs
@@ -370,7 +370,7 @@ pub struct ServerConfig {
pub unrecognized: Unrecognized<Self>,

/// Enable heap profile dump when memory usage is high.
#[serde(default = "default::server::auto_dump_heap_profile")]
#[serde(default)]
pub auto_dump_heap_profile: AutoDumpHeapProfileConfig,
}

@@ -658,18 +658,19 @@ impl AsyncStackTraceOption {

#[derive(Clone, Debug, Serialize, Deserialize, DefaultFromSerde)]
pub struct AutoDumpHeapProfileConfig {
/// Whether to automatically dump a heap profile when memory usage is high
#[serde(default = "default::auto_dump_heap_profile::enabled")]
pub enabled: bool,

/// The directory to dump heap profile. If empty, the prefix in `MALLOC_CONF` will be used
#[serde(default = "default::auto_dump_heap_profile::dir")]
pub dir: String,

/// The proportion (number between 0 and 1) of memory usage to trigger heap profile dump
#[serde(default = "default::auto_dump_heap_profile::threshold")]
pub threshold: f32,
}

impl AutoDumpHeapProfileConfig {
pub fn enabled(&self) -> bool {
!self.dir.is_empty()
}
}

serde_with::with_prefix!(streaming_prefix "stream_");
serde_with::with_prefix!(batch_prefix "batch_");

@@ -907,7 +908,7 @@ pub mod default {
}

pub mod server {
use crate::config::{AutoDumpHeapProfileConfig, MetricLevel};
use crate::config::MetricLevel;

pub fn heartbeat_interval_ms() -> u32 {
1000
@@ -924,10 +925,6 @@
pub fn telemetry_enabled() -> bool {
true
}

pub fn auto_dump_heap_profile() -> AutoDumpHeapProfileConfig {
Default::default()
}
}

pub mod storage {
@@ -1129,6 +1126,10 @@
}

pub mod auto_dump_heap_profile {
pub fn enabled() -> bool {
true
}

pub fn dir() -> String {
"".to_string()
}
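With this change, heap-profile dumping becomes an explicit switch that defaults to on, instead of being implied by a non-empty `dir`. A small sketch of how the new defaults behave (not code from this commit; the import path is assumed from the file location above):

```
use risingwave_common::config::AutoDumpHeapProfileConfig;

fn main() {
    // `DefaultFromSerde` fills each field from its `default::auto_dump_heap_profile::*` function.
    let cfg = AutoDumpHeapProfileConfig::default();
    assert!(cfg.enabled);        // previously `!self.dir.is_empty()`; now an explicit default-on flag
    assert!(cfg.dir.is_empty()); // an empty dir falls back to the prefix in `MALLOC_CONF`
}
```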